A Model for Natural and Comprehensive Direction Giving

Yusuke Okuno, Takayuki Kanda, Michita Imai, Hiroshi Ishiguro, Norihiro Hagita

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

In Chapter ??, we introduced our field trials. In a shopping mall, we had the robot provide directions, and we found that such direction giving by a robot is useful. A robot has a number of features appropriate for direction giving: since it is physically co-located with people, it can proactively approach a person who needs such information and then provide it “naturally” with its human-like body. While the directions used in the field trial were simple, we are now better prepared to understand what good direction giving involves. What constitutes good direction giving from a robot? If the destination is within visible distance, the answer might be intuitive: the robot would say “The shop is over there” and point. However, since the destination is often not visible, the robot needs to utter several sentences, and these would be expected to be accompanied by gestures. We designed our robot’s behavior to enable the listener to intuitively understand the information the robot provides. This chapter illustrates how we integrate three important factors, namely utterances, gestures, and timing, so that the robot can conduct appropriate direction giving.
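To make the integration of utterances, gestures, and timing concrete, the following minimal Python sketch shows one way such a coordinator could be structured. It is purely illustrative and not the authors' implementation; all names (DirectionStep, give_directions, the gesture labels) are invented for this example.

# Hypothetical sketch: pair each spoken route segment with a gesture and a
# comprehension pause, so speech and pointing stay synchronized.
from dataclasses import dataclass
import time

@dataclass
class DirectionStep:
    utterance: str       # sentence the robot speaks
    gesture: str         # e.g. "point_forward", "point_left", "none"
    pause_after: float   # seconds to wait so the listener can process

def give_directions(steps, speak, perform_gesture):
    """Speak each step while performing its gesture, then pause."""
    for step in steps:
        perform_gesture(step.gesture)   # start the gesture just before speech
        speak(step.utterance)
        time.sleep(step.pause_after)    # timing: leave room for comprehension

# Example route to a shop that is not visible from the robot's position.
route = [
    DirectionStep("Go straight down this corridor.", "point_forward", 1.0),
    DirectionStep("At the fountain, turn left.", "point_left", 1.0),
    DirectionStep("The shop is the second one on your right.", "point_right", 0.5),
]
# give_directions(route, speak=print, perform_gesture=print)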

Original language: English
Title of host publication: Human-Robot Interaction in Social Robotics
Publisher: CRC Press
Pages: 141-155
Number of pages: 15
ISBN (Electronic): 9781466506985
ISBN (Print): 9781138071698
Publication status: Published - 2017 Jan 1
Externally published: Yes

ASJC Scopus subject areas

  • Computer Science (all)
  • Engineering (all)
