TY - GEN
T1 - Preliminary Study of Object Recognition by Converting Physical Responses to Images in Two Dimensions
AU - Yane, Kazuki
AU - Nozaki, Takahiro
N1 - Funding Information:
Part of this work was supported by JSPS KAKENHI Grant Numbers JP20H02135 and JP19KK0367. Part of this work was also supported by the Research Grant of Keio Leading-edge Laboratory of Science & Technology and the JST-Mirai Program Grant Number JPMJMI21B1, Japan.
Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Robots are expected to substitute for human labor. However, it is difficult for robots to perform tasks while responding flexibly to changes in objects and environments. Recently, many systems have been proposed that respond flexibly to such changes by generating robot motions with machine learning. Many machine learning methods acquire environmental information with a camera and extract features from the captured images using a convolutional neural network (CNN), a convolutional autoencoder (CAE), or similar models. Many methods then feed the image features together with position and reaction-force data acquired from the robot into a recurrent neural network (RNN) or similar model to estimate the inputs for the next step. In most cases, however, the relationship between the images and the robot data is learned without being represented explicitly. Therefore, in this paper, the data acquired from the robot are converted into images and combined with the camera images, making the interaction between the robot and the environment explicit and improving the estimation accuracy of neural networks. In simulations, the proposed method was applied to the task of discriminating the target object of a motion, and high estimation accuracy was confirmed. In future work, we plan to use these converted images as input data for motion generation, so that motions can be generated according to the object.
AB - Robots are expected to substitute for human labor. However, it is difficult for robots to perform tasks while responding flexibly to changes in objects and environments. Recently, many systems have been proposed that respond flexibly to such changes by generating robot motions with machine learning. Many machine learning methods acquire environmental information with a camera and extract features from the captured images using a convolutional neural network (CNN), a convolutional autoencoder (CAE), or similar models. Many methods then feed the image features together with position and reaction-force data acquired from the robot into a recurrent neural network (RNN) or similar model to estimate the inputs for the next step. In most cases, however, the relationship between the images and the robot data is learned without being represented explicitly. Therefore, in this paper, the data acquired from the robot are converted into images and combined with the camera images, making the interaction between the robot and the environment explicit and improving the estimation accuracy of neural networks. In simulations, the proposed method was applied to the task of discriminating the target object of a motion, and high estimation accuracy was confirmed. In future work, we plan to use these converted images as input data for motion generation, so that motions can be generated according to the object.
KW - convolutional neural network
KW - hybrid control
KW - motion generation
KW - object recognition
UR - http://www.scopus.com/inward/record.url?scp=85158127021&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85158127021&partnerID=8YFLogxK
U2 - 10.1109/ICM54990.2023.10101938
DO - 10.1109/ICM54990.2023.10101938
M3 - Conference contribution
AN - SCOPUS:85158127021
T3 - Proceedings - 2023 IEEE International Conference on Mechatronics, ICM 2023
BT - Proceedings - 2023 IEEE International Conference on Mechatronics, ICM 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE International Conference on Mechatronics, ICM 2023
Y2 - 15 March 2023 through 17 March 2023
ER -