TY - GEN
T1 - Extended Reproduction of Demonstration Motion Using Variational Autoencoder
AU - Takahashi, Daisuke
AU - Katsura, Seiichiro
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/8/10
Y1 - 2018/8/10
N2 - Learning from demonstration (LfD) is an effective method for robot motion learning because a hand-coded cost function is not necessary. However, the number of demonstrations that can be performed is limited, and performing a demonstration under every environmental condition is difficult. Therefore, an algorithm for generating motion data not obtained from demonstrations is required. To address this problem, this research generates a motion latent space by abstracting the demonstration data. The motion latent space expresses the demonstration motion in lower dimensions, and the demonstration data can be extended by decoding points in the latent space. This is realized by applying a variational autoencoder (VAE), a technique used in the field of image generation, to time-series data. Demonstrations of a reaching task are conducted, and the paper shows that the manipulator can reach the object even when it is located at a position different from those in the demonstrations.
AB - Learning from demonstration (LfD) is an effective method for robot motion learning because a hand-coded cost function is not necessary. However, the number of demonstrations that can be performed is limited, and performing a demonstration under every environmental condition is difficult. Therefore, an algorithm for generating motion data not obtained from demonstrations is required. To address this problem, this research generates a motion latent space by abstracting the demonstration data. The motion latent space expresses the demonstration motion in lower dimensions, and the demonstration data can be extended by decoding points in the latent space. This is realized by applying a variational autoencoder (VAE), a technique used in the field of image generation, to time-series data. Demonstrations of a reaching task are conducted, and the paper shows that the manipulator can reach the object even when it is located at a position different from those in the demonstrations.
UR - http://www.scopus.com/inward/record.url?scp=85052405604&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85052405604&partnerID=8YFLogxK
U2 - 10.1109/ISIE.2018.8433683
DO - 10.1109/ISIE.2018.8433683
M3 - Conference contribution
AN - SCOPUS:85052405604
SN - 9781538637050
T3 - IEEE International Symposium on Industrial Electronics
SP - 1057
EP - 1062
BT - Proceedings - 2018 IEEE 27th International Symposium on Industrial Electronics, ISIE 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 27th IEEE International Symposium on Industrial Electronics, ISIE 2018
Y2 - 13 June 2018 through 15 June 2018
ER -