TY - JOUR
T1 - A Multimodal Path Planning Approach to Human Robot Interaction Based on Integrating Action Modeling
AU - Kawasaki, Yosuke
AU - Yorozu, Ayanori
AU - Takahashi, Masaki
AU - Pagello, Enrico
N1 - Funding Information:
This study was supported by “A Framework PRINTEPS to Develop Practical Artificial Intelligence” of the Core Research for Evolutional Science and Technology (CREST) of the Japan Science and Technology Agency (JST) under Grant Number JPMJCR14E3.
Publisher Copyright:
© 2020, Springer Nature B.V.
PY - 2020/12
Y1 - 2020/12
N2 - To complete a task consisting of a series of actions involving human-robot interaction, it is necessary to plan a motion that considers each action both individually and in relation to the action that follows. We focus on the specific action of “approaching a group of people” in order to accurately obtain the human data needed to make tasks involving interactions with multiple people proceed more smoothly. The required movement depends on the characteristics of the sensors that are important for the task and on the placement of people at and around the destination. Given the variety of tasks and possible placements of people, pre-calculating destinations and paths is difficult. This paper therefore presents a navigation system that can accurately obtain human data based on sensor characteristics, task content, and real-time sensor data for processes involving human-robot interaction (HRI); the method does not navigate toward a previously determined static point. Our goal was achieved by using multimodal path planning based on the integration of action modeling, which considers both voice and image sensing of the interacting people as well as obstacle avoidance. We experimentally verified our method using a robot in a coffee shop environment.
AB - To complete a task consisting of a series of actions involving human-robot interaction, it is necessary to plan a motion that considers each action both individually and in relation to the action that follows. We focus on the specific action of “approaching a group of people” in order to accurately obtain the human data needed to make tasks involving interactions with multiple people proceed more smoothly. The required movement depends on the characteristics of the sensors that are important for the task and on the placement of people at and around the destination. Given the variety of tasks and possible placements of people, pre-calculating destinations and paths is difficult. This paper therefore presents a navigation system that can accurately obtain human data based on sensor characteristics, task content, and real-time sensor data for processes involving human-robot interaction (HRI); the method does not navigate toward a previously determined static point. Our goal was achieved by using multimodal path planning based on the integration of action modeling, which considers both voice and image sensing of the interacting people as well as obstacle avoidance. We experimentally verified our method using a robot in a coffee shop environment.
KW - Action modeling
KW - Human-robot interaction
KW - Multimodal path planning
KW - Robot navigation
UR - http://www.scopus.com/inward/record.url?scp=85090143317&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090143317&partnerID=8YFLogxK
U2 - 10.1007/s10846-020-01244-7
DO - 10.1007/s10846-020-01244-7
M3 - Article
AN - SCOPUS:85090143317
SN - 0921-0296
VL - 100
SP - 955
EP - 972
JO - Journal of Intelligent and Robotic Systems: Theory and Applications
JF - Journal of Intelligent and Robotic Systems: Theory and Applications
IS - 3-4
ER -