Solving natural language inference (NLI) with formal semantics and automated theorem proving offers high precision and interpretability. However, such approaches struggle with non-logical inference such as lexical inference. To overcome this weakness, we propose a human-in-the-loop mechanism in which the user provides non-logical knowledge to the theorem prover. We focus on the subgoals of a proof: a subgoal is a proposition that the theorem prover has been unable to prove. Although a subgoal conveys what knowledge should be supplemented to complete the proof, it is a logical formula inscrutable to ordinary users. To make the subgoal understandable to the user, we propose a method that translates it into a natural language sentence, which we call a readable subgoal. The user's reaction to the readable subgoal is communicated back to the theorem prover in the form of a temporary axiom, a logical formula that conveys the content of the reaction. We evaluated our method by adding an interactive component to an existing NLI system, ccg2lambda. Annotators recruited via Amazon Mechanical Turk used the proposed system to solve SICK, a data set for NLI. Experiments show that the interaction improved the recall of ccg2lambda from 32.1% to 44.1%, and its accuracy from 70.4% to 75.1%.
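The interaction loop the abstract describes can be sketched in miniature. This is a toy illustration under strong assumptions, not ccg2lambda's actual API: the prover, the surface realization of a subgoal, and all function names (`prove`, `verbalize`, `interactive_nli`) are hypothetical, and a subgoal is simplified to a (predicate, arguments) pair.

```python
# Hypothetical sketch of the human-in-the-loop cycle: a failed proof leaves a
# subgoal, which is verbalized for the user; an affirmative reaction becomes a
# temporary axiom that lets the prover retry. All names are illustrative.

def verbalize(subgoal):
    """Render a logical subgoal as a readable sentence (toy realization)."""
    pred, args = subgoal
    return f"Is it true that {args[0]} {pred}?"

def prove(premises, goal, axioms):
    """Toy prover: the goal is proved if it appears among premises or axioms."""
    return goal in premises or goal in axioms

def interactive_nli(premises, goal, ask_user):
    """Try to prove the goal; on failure, show the readable subgoal to the
    user and, if they affirm it, retry with a temporary axiom."""
    axioms = set()
    if prove(premises, goal, axioms):
        return "entailment"
    # In this toy setting the unproved goal itself is the remaining subgoal.
    if ask_user(verbalize(goal)):
        axioms.add(goal)  # temporary axiom encoding the user's reaction
    return "entailment" if prove(premises, goal, axioms) else "unknown"
```

For example, with no premises and the subgoal `("is sleeping", ("the dog",))`, the user is asked "Is it true that the dog is sleeping?"; answering yes yields `"entailment"`, answering no yields `"unknown"`.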
Publication status: Published - 2021
Event: 35th Pacific Asia Conference on Language, Information and Computation, PACLIC 2021 - Shanghai, China
Duration: 5 Nov 2021 → 7 Nov 2021