This study proposes an image-based action generation method using a controller trained on experiences autonomously collected by a robot. Controlling a robot through its state or a point in coordinate space is not always easy for human operators, because humans do not interpret the world as a coordinate space. Depending on the task, inputs such as voice or images are easier for operators to use when directing a robot. Accordingly, we studied an image-based control method for robotic agents. Designing an image-based controller by hand for different tasks and input images is highly complex, so a controller that can be trained automatically from experiences collected by the robot is strongly preferred. Moreover, when a robot operates in a real environment, the controller should guarantee safe behavior in a way that is configurable and understandable by the human operator. However, most previous approaches train the controller end-to-end, which does not guarantee the safety of the learned behavior. Instead of training the controller end-to-end, we train state-prediction and cost-estimation functions and solve action generation as a path-planning problem. This allows us to explicitly design and specify the undesired states of the robot in its configuration space. The results show that the proposed method provides safe navigation to different goal positions in a realistic living-room-like environment using one hour of training data.
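The core idea of decoupling learned models from planning can be illustrated with a minimal sketch. This is not the paper's implementation: the robot, action set, and both models here are hypothetical stand-ins, with the learned state-prediction and cost-estimation functions replaced by simple hand-written stubs. The point is the structure: a planner searches over action sequences using the predicted states, and unsafe states are rejected by an explicit, human-specified check rather than being implicit in an end-to-end policy.

```python
import itertools

def predict_state(state, action):
    # stand-in for a learned state-prediction model f(s, a) -> s'
    return (state[0] + action[0], state[1] + action[1])

def estimate_cost(state, goal):
    # stand-in for a learned cost-estimation model (here: Manhattan
    # distance to the goal)
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def is_unsafe(state, forbidden):
    # explicitly designed undesired states in configuration space;
    # the operator can configure and inspect this set directly
    return state in forbidden

def plan(start, goal, actions, forbidden, horizon=5):
    # exhaustive search over short action sequences: roll each sequence
    # forward with the prediction model, discard any that visit an
    # unsafe state, and keep the sequence with the lowest final cost
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        state, safe = start, True
        for a in seq:
            state = predict_state(state, a)
            if is_unsafe(state, forbidden):
                safe = False
                break
        if safe:
            cost = estimate_cost(state, goal)
            if cost < best_cost:
                best_seq, best_cost = seq, cost
    return best_seq, best_cost
```

In practice the exhaustive enumeration would be replaced by a proper path-planning algorithm, but the safety check stays outside the learned components in the same way.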
|Journal of Intelligent and Robotic Systems: Theory and Applications
|Published - September 2021