A multimodal classifier generative adversarial network for carry and place tasks from ambiguous language instructions

Aly Magassouba, Komei Sugiura, Hisashi Kawai

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

This letter focuses on a multimodal language understanding method for carry-and-place tasks with domestic service robots. We address the case of ambiguous instructions, that is, when the target area is not specified. For instance, 'put away the milk and cereal' is a natural instruction in which the target area is ambiguous, given typical daily-life environments. Conventionally, such an instruction can be disambiguated through a dialogue system, but at the cost of time-consuming and cumbersome interaction. Instead, we propose a multimodal approach in which the instructions are disambiguated using the robot's state and the environment context. We develop the Multi-Modal Classifier Generative Adversarial Network (MMC-GAN) to predict the likelihood of different target areas, considering the robot's physical limitations and the clutter around the target. Our approach, MMC-GAN, significantly improves accuracy compared with baseline methods that use instructions only or simple deep neural networks.
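Since this page provides only the abstract, the sketch below illustrates the general shape of a classifier-style GAN over fused multimodal features: a generator produces fake feature vectors, and a discriminator-classifier scores N candidate target areas plus one 'fake' class. This is a minimal sketch assuming PyTorch; every name and value in it (TEXT_DIM, SCENE_DIM, N_AREAS, the layer sizes, and the training step) is an illustrative assumption, not the paper's actual MMC-GAN design.

```python
# Hypothetical sketch of a multimodal classifier GAN; all dimensions and
# names are placeholders, not the values used in the MMC-GAN paper.
import torch
import torch.nn as nn

N_AREAS = 5        # assumed number of candidate target areas
TEXT_DIM = 128     # assumed instruction-embedding size
SCENE_DIM = 64     # assumed robot-state / scene-context feature size
LATENT_DIM = 100   # assumed noise size for the generator


class Generator(nn.Module):
    """Maps noise to fake multimodal feature vectors."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, TEXT_DIM + SCENE_DIM),
        )

    def forward(self, z):
        return self.net(z)


class Classifier(nn.Module):
    """Discriminator-classifier: N_AREAS real classes plus one 'fake' class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TEXT_DIM + SCENE_DIM, 256), nn.ReLU(),
            nn.Linear(256, N_AREAS + 1),  # last logit marks generated samples
        )

    def forward(self, x):
        return self.net(x)


def demo_step():
    g, d = Generator(), Classifier()
    opt_g = torch.optim.Adam(g.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(d.parameters(), lr=2e-4)
    ce = nn.CrossEntropyLoss()

    # Toy batch: concatenated instruction + scene features with area labels.
    real_x = torch.randn(8, TEXT_DIM + SCENE_DIM)
    real_y = torch.randint(0, N_AREAS, (8,))
    fake_y = torch.full((8,), N_AREAS)  # index of the 'fake' class

    # Discriminator step: classify real features by area, generated as fake.
    fake_x = g(torch.randn(8, LATENT_DIM))
    loss_d = ce(d(real_x), real_y) + ce(d(fake_x.detach()), fake_y)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push generated features toward the real area classes.
    loss_g = ce(d(g(torch.randn(8, LATENT_DIM))), real_y)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()


if __name__ == "__main__":
    demo_step()
```

At inference, such a discriminator-classifier could rank the candidate target areas by softmax likelihood over the N real classes, which matches the abstract's description of predicting the likelihood of different target areas from the instruction and scene context.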

Original language: English
Pages (from-to): 3113-3120
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 3
Issue number: 4
Publication status: Published - Oct 2018
Externally published: Yes

Keywords

  • Deep learning in robotics and automation
  • domestic robots
  • robot audition

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence
