A Model for Natural and Comprehensive Direction Giving

Yusuke Okuno, Takayuki Kanda, Michita Imai, Hiroshi Ishiguro, Norihiro Hagita

Research output: Chapter

Abstract

In Chapter ??, we introduced our field trials. In a shopping mall, we made the robot provide directions, and we found such direction giving by a robot to be useful. A robot has a number of features appropriate for direction giving: since it is physically co-located with people, it can proactively approach a person who needs such information and then provide it "naturally" with its human-like body properties. While the directions used in the field trial were simple, we are now better prepared to understand what good direction giving involves. What constitutes good direction giving from a robot? If the destination is within visible distance, the answer might be intuitive: the robot would say "The shop is over there" and point. However, since the destination is often not visible, a robot needs to utter several sentences, and it would be expected to accompany them with gestures. We designed our robot's behavior to enable the listener to intuitively understand the information it provides. This chapter illustrates how we integrate three important factors (utterances, gestures, and timing) so that the robot can conduct appropriate direction giving.

Original language: English
Title of host publication: Human-Robot Interaction in Social Robotics
Publisher: CRC Press
Pages: 141-155
Number of pages: 15
ISBN (electronic): 9781466506985
ISBN (print): 9781138071698
Publication status: Published - 1 Jan 2017
Externally published: Yes

ASJC Scopus subject areas

  • General Computer Science
  • General Engineering
