TY - JOUR
T1 - Walking partner robot chatting about scenery
AU - Sono, Taichi
AU - Satake, Satoru
AU - Kanda, Takayuki
AU - Imai, Michita
N1 - Funding Information:
This work was in part supported by Tateishi Science and Technology Foundation [grant number 2158001], in part supported by JST, CREST [grant number JPMJCR17A2], and in part supported by JSPS KAKENHI [grant number JP18H04121].
Publisher Copyright:
© 2019 Informa UK Limited, trading as Taylor & Francis Group and The Robotics Society of Japan.
PY - 2019
Y1 - 2019
N2 - We believe that many future scenarios will exist where a partner robot talks with people on walks. To improve the user experience, we aim to endow robots with the capability to select appropriate conversation topics by allowing them to start chatting about a topic that matches the current scenery and to extend it based on the user's interest and involvement in it. We implemented a function to compute the similarity between utterances and scenery by comparing their topic vectors. First, we convert the scenery into a list of words by leveraging the Google Cloud Vision API [Google Cloud Vision API [Online]. Available from: https://cloud.google.com/vision/]. We form a topic vector space with the Latent Dirichlet Allocation method and transform the list of words into a topic vector. Our system uses this function to choose (from an utterance database) the utterance that best matches the current scenery. It then estimates the user's level of involvement in the chat with a simple rule based on the length of their responses. If the user is actively involved in the chat topic, the robot continues the current topic using pre-defined derivative utterances. If the user's involvement wanes, the robot selects a new topic based on the current scenery. Topic selection based on the current scenery was proposed in our previous work [Totsuka R, Satake S, Kanda T, et al. Is a robot a better walking partner if it associates utterances with visual scenes? ACM/IEEE Int. Conf. on Human–Robot Interaction (HRI2017), Aula der Wissenschaft, Vienna, Austria; 2017. p. 313–322]. Our main contribution is the complete chat system, which includes the estimation of the user's involvement. We implemented our system on a shoulder-mounted robot and conducted a user study to evaluate its effectiveness. Our experimental results show that users rated the robot with the proposed system as a better walking partner than one that chose utterances randomly.
AB - We believe that many future scenarios will exist where a partner robot talks with people on walks. To improve the user experience, we aim to endow robots with the capability to select appropriate conversation topics by allowing them to start chatting about a topic that matches the current scenery and to extend it based on the user's interest and involvement in it. We implemented a function to compute the similarity between utterances and scenery by comparing their topic vectors. First, we convert the scenery into a list of words by leveraging the Google Cloud Vision API [Google Cloud Vision API [Online]. Available from: https://cloud.google.com/vision/]. We form a topic vector space with the Latent Dirichlet Allocation method and transform the list of words into a topic vector. Our system uses this function to choose (from an utterance database) the utterance that best matches the current scenery. It then estimates the user's level of involvement in the chat with a simple rule based on the length of their responses. If the user is actively involved in the chat topic, the robot continues the current topic using pre-defined derivative utterances. If the user's involvement wanes, the robot selects a new topic based on the current scenery. Topic selection based on the current scenery was proposed in our previous work [Totsuka R, Satake S, Kanda T, et al. Is a robot a better walking partner if it associates utterances with visual scenes? ACM/IEEE Int. Conf. on Human–Robot Interaction (HRI2017), Aula der Wissenschaft, Vienna, Austria; 2017. p. 313–322]. Our main contribution is the complete chat system, which includes the estimation of the user's involvement. We implemented our system on a shoulder-mounted robot and conducted a user study to evaluate its effectiveness. Our experimental results show that users rated the robot with the proposed system as a better walking partner than one that chose utterances randomly.
KW - Walking partner robot
KW - association of utterance and visual scene
KW - chatting while walking
KW - topic selection
UR - http://www.scopus.com/inward/record.url?scp=85065199674&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85065199674&partnerID=8YFLogxK
U2 - 10.1080/01691864.2019.1610062
DO - 10.1080/01691864.2019.1610062
M3 - Article
AN - SCOPUS:85065199674
SN - 0169-1864
VL - 33
SP - 742
EP - 755
JO - Advanced Robotics
JF - Advanced Robotics
IS - 15-16
ER -