Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference

Hitomi Yanaka, Koji Mineshima

Research output: Conference contribution

4 Citations (Scopus)

Abstract

Despite the success of multilingual pre-trained language models, it remains unclear to what extent these models have human-like generalization capacity across languages. The aim of this study is to investigate the out-of-distribution generalization of pre-trained language models through Natural Language Inference (NLI) in Japanese, the typological properties of which are different from those of English. We introduce a synthetically generated Japanese NLI dataset, called the Japanese Adversarial NLI (JaNLI) dataset, which is inspired by the English HANS dataset and is designed to require understanding of Japanese linguistic phenomena and illuminate the vulnerabilities of models. Through a series of experiments to evaluate the generalization performance of both Japanese and multilingual BERT models, we demonstrate that there is much room to improve current models trained on Japanese NLI tasks. Furthermore, a comparison of human performance and model performance on the different types of garden-path sentences in the JaNLI dataset shows that structural phenomena that ease interpretation of garden-path sentences for human readers do not help models in the same way, highlighting a difference between human readers and the models.
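
As a rough illustration of the kind of probing the paper describes, the sketch below runs an NLI-finetuned model on a JaNLI-style premise/hypothesis pair whose word order is swapped. This is a minimal sketch using the Hugging Face transformers API; the checkpoint name, the example sentences, and the label ordering are assumptions for illustration, not the paper's actual models or data.

```python
# Minimal sketch: probe an NLI model with a word-order "trap" pair in Japanese.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical checkpoint name: any Japanese or multilingual NLI-finetuned
# model could be substituted here; this identifier is a placeholder.
MODEL_NAME = "some-org/japanese-nli-model"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

premise = "犬が猫を追いかけた。"    # "The dog chased the cat."
hypothesis = "猫が犬を追いかけた。"  # "The cat chased the dog." (arguments swapped)

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label order (entailment / neutral / contradiction) differs across
# checkpoints; consult model.config.id2label for the one actually loaded.
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```

A model relying on lexical overlap rather than argument structure would tend to predict entailment for such swapped-argument pairs, which is the type of vulnerability adversarial NLI datasets like JaNLI are designed to surface.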

Original language: English
Host publication title: BlackboxNLP 2021 - Proceedings of the 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Publisher: Association for Computational Linguistics (ACL)
Pages: 337-349
Number of pages: 13
ISBN (electronic): 9781955917063
Publication status: Published - 2021
Event: 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP 2021 - Virtual, Punta Cana, Dominican Republic
Duration: 11 Nov 2021 → …

Publication series

Name: BlackboxNLP 2021 - Proceedings of the 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

Conference

Conference: 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP 2021
Country/Territory: Dominican Republic
City: Virtual, Punta Cana
Period: 21/11/11 → …

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Computer Science Applications
  • Information Systems
