Target-Dependent UNITER: A Transformer-Based Multimodal Language Comprehension Model for Domestic Service Robots

Shintaro Ishikawa, Komei Sugiura

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)


Currently, domestic service robots have insufficient ability to interact naturally through language, because understanding human instructions is complicated by various ambiguities. In existing methods, the referring expressions that specify relationships between objects are insufficiently modeled. In this letter, we propose Target-dependent UNITER, which learns the relationship between the target object and other objects directly by focusing on the relevant regions within an image, rather than on the whole image. Our method is an extension of the UNITER [1]-based Transformer, which can be pretrained on general-purpose datasets. We extend the UNITER approach by introducing a new architecture for handling candidate objects. Our model is validated on two standard datasets, and the results show that Target-dependent UNITER outperforms the baseline method in terms of classification accuracy.
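The core idea above — jointly encoding instruction tokens and detected object regions while explicitly marking the candidate target region — can be illustrated with a minimal, self-contained NumPy sketch. This is not the authors' implementation: the single-head attention, random weights, and the `target_flag` indicator embedding are all illustrative assumptions standing in for a pretrained UNITER-style Transformer.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # One single-head self-attention layer over the joint sequence.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return A @ V

rng = np.random.default_rng(0)
d = 32
txt = rng.normal(size=(5, d))        # 5 instruction-token embeddings (toy)
regions = rng.normal(size=(4, d))    # 4 detected object-region features (toy)
target_flag = rng.normal(size=(d,))  # assumed learned "candidate target" indicator

cand = 2                             # index of the candidate target region
regions = regions.copy()
regions[cand] += target_flag         # mark the candidate so attention can
                                     # relate it to the other regions

# Joint text+region sequence, as in UNITER-style multimodal Transformers.
X = np.concatenate([txt, regions])
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
H = self_attention(X, Wq, Wk, Wv)

# Score the candidate region (target / not-target) from its contextualized state.
Wc = rng.normal(size=(d, 2)) * 0.1
logits = H[len(txt) + cand] @ Wc
probs = softmax(logits)
```

In the actual model, each candidate object would be scored this way in turn, so the classifier sees the candidate in the context of both the instruction and the other detected objects rather than the whole image.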

Original language: English
Article number: 9525205
Pages (from-to): 8401-8408
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Issue number: 4
Publication status: Published - Oct 2021


Keywords

  • Deep learning methods
  • deep learning for visual perception

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence


