TY - GEN
T1 - Grasping Point Estimation Based on Stored Motion and Depth Data in Motion Reproduction System
AU - Sun, Xiaobai
AU - Nozaki, Takahiro
AU - Murakami, Toshiyuki
AU - Ohnishi, Kouhei
N1 - Funding Information:
This work was supported in part by KEIRIN JKA (2017M-138).
Publisher Copyright:
© 2019 IEEE.
PY - 2019/5/24
Y1 - 2019/5/24
N2 - Most countries face labor shortages due to aging populations and declining birthrates. Robot manipulators are expected to take over human work. However, it is still difficult for manipulators to perform simple tasks such as fruit harvesting, food cooking, or toy assembly. One problem for robotic automation is the difficulty of teaching manipulators how much force to apply during task execution. The motion reproduction system, which uses bilateral control to store motion data, is one method for teaching manipulators motion that includes both position and force. The problem with the motion reproduction system is that reproduction fails if the environment changes between the motion saving phase and the motion reproducing phase. A motion reproduction system that can understand and adapt to the environment is therefore required. Vision sensors can sense the environment, but computer vision mainly focuses on object classification, and vision information is seldom combined with motion control, especially force control. Therefore, we propose a motion reproduction system in which the reproduced motion is decided based on several stored motions and collected depth data. A convolutional neural network (CNN) was used to estimate a motion command from a depth image. Saved force data was used to generate labels for training. This label decision differs from conventional machine learning algorithms.
AB - Most countries face labor shortages due to aging populations and declining birthrates. Robot manipulators are expected to take over human work. However, it is still difficult for manipulators to perform simple tasks such as fruit harvesting, food cooking, or toy assembly. One problem for robotic automation is the difficulty of teaching manipulators how much force to apply during task execution. The motion reproduction system, which uses bilateral control to store motion data, is one method for teaching manipulators motion that includes both position and force. The problem with the motion reproduction system is that reproduction fails if the environment changes between the motion saving phase and the motion reproducing phase. A motion reproduction system that can understand and adapt to the environment is therefore required. Vision sensors can sense the environment, but computer vision mainly focuses on object classification, and vision information is seldom combined with motion control, especially force control. Therefore, we propose a motion reproduction system in which the reproduced motion is decided based on several stored motions and collected depth data. A convolutional neural network (CNN) was used to estimate a motion command from a depth image. Saved force data was used to generate labels for training. This label decision differs from conventional machine learning algorithms.
KW - bilateral control
KW - image processing
KW - motion control
KW - motion reproduction
UR - http://www.scopus.com/inward/record.url?scp=85067110125&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85067110125&partnerID=8YFLogxK
U2 - 10.1109/ICMECH.2019.8722836
DO - 10.1109/ICMECH.2019.8722836
M3 - Conference contribution
AN - SCOPUS:85067110125
T3 - Proceedings - 2019 IEEE International Conference on Mechatronics, ICM 2019
SP - 471
EP - 476
BT - Proceedings - 2019 IEEE International Conference on Mechatronics, ICM 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE International Conference on Mechatronics, ICM 2019
Y2 - 18 March 2019 through 20 March 2019
ER -