TY - JOUR
T1 - Explaining Intelligent Agent's Future Motion on Basis of Vocabulary Learning With Human Goal Inference
AU - Fukuchi, Yosuke
AU - Osawa, Masahiko
AU - Yamakawa, Hiroshi
AU - Imai, Michita
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
AB - Intelligent agents (IAs) that use machine learning for decision-making often lack explainability about what they are going to do, which makes human-IA collaboration challenging. Although previous methods can explain IA behavior, they require IA developers to predefine a vocabulary that expresses motion, which becomes problematic as IA decision-making grows more complex. This paper proposes Manifestor, a method for explaining an IA's future motion through autonomous vocabulary learning. With Manifestor, an IA can learn vocabulary from a person's instructions about how the IA should act. A notable contribution of this paper is that we formalized the communication gap between a person and an IA in the vocabulary-learning phase: the IA's goal may differ from what the person wants the IA to achieve, and the IA needs to infer the latter to judge whether a motion matches the person's instruction. We evaluated Manifestor by investigating whether people can accurately predict an IA's future motion from explanations generated with Manifestor. We compared Manifestor's vocabulary with that of optimal, which was acquired in a setting where the communication-gap problem did not exist, and that of ablation, which was learned under the false assumption that the IA and the person shared a goal. The experimental results revealed that vocabulary learned with Manifestor improved people's prediction accuracy as much as optimal did, whereas ablation failed, suggesting that Manifestor enables an IA to properly learn vocabulary from people's instructions even when a communication gap exists.
KW - Explainable AI
KW - deep reinforcement learning
KW - human-agent interaction
KW - intelligent agent
UR - http://www.scopus.com/inward/record.url?scp=85130489505&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85130489505&partnerID=8YFLogxK
DO - 10.1109/ACCESS.2022.3176104
M3 - Article
AN - SCOPUS:85130489505
SN - 2169-3536
VL - 10
SP - 54336
EP - 54347
JO - IEEE Access
JF - IEEE Access
ER -