Abductive Reasoning with Syllogistic Forms in Large Language Models

  • Hirohiko Abe
  • Risako Ando
  • Takanobu Morishita
  • Kentaro Ozeki
  • Koji Mineshima
  • Mitsuhiro Okada

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Research in AI using Large Language Models (LLMs) is rapidly evolving, and the comparison of their performance with human reasoning has become a key concern. Prior studies have indicated that LLMs and humans share similar biases, such as dismissing logically valid inferences that contradict common beliefs. However, criticizing LLMs for these biases might be unfair, considering that human reasoning involves not only formal deduction but also abduction, which draws tentative conclusions from limited information. Abduction can be regarded as the inverse form of the syllogism in its basic structure, that is, a process of drawing a minor premise from a major premise and a conclusion. This paper explores the accuracy of LLMs in abductive reasoning by converting a syllogistic dataset into one suitable for abduction. It aims to investigate whether state-of-the-art LLMs exhibit biases in abduction and to identify potential areas for improvement, emphasizing the importance of contextualized reasoning beyond formal deduction. This investigation is vital for advancing the understanding and application of LLMs in complex reasoning tasks, offering insights into bridging the gap between machine and human cognition.
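The inversion the abstract describes, drawing the minor premise from the major premise and the conclusion, can be sketched as a simple data transformation. This is a minimal illustration, not the paper's actual dataset or code; the field names and the example syllogism are assumptions made for clarity.

```python
# Hypothetical sketch (field names and data are illustrative assumptions):
# turn a deductive syllogism instance into an abductive one by moving the
# minor premise from the "given" side to the "target" side.

def to_abduction_task(syllogism):
    """Build an abduction task from a syllogism dict with keys
    'major', 'minor', and 'conclusion': the model is shown the major
    premise and the conclusion, and must propose the missing minor premise."""
    return {
        "given": [syllogism["major"], syllogism["conclusion"]],
        "target": syllogism["minor"],  # the tentative explanation to be abduced
    }

# A classic Barbara (AAA-1) syllogism as the deductive source.
barbara = {
    "major": "All men are mortal.",
    "minor": "Socrates is a man.",
    "conclusion": "Socrates is mortal.",
}

task = to_abduction_task(barbara)
print(task["given"])   # premise and observation presented to the model
print(task["target"])  # the minor premise the model should abduce
```

Note that, unlike the deductive direction, the abduced minor premise is only a plausible explanation rather than a logically forced conclusion, which is why the abstract calls such conclusions tentative.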

Original language: English
Title of host publication: Human and Artificial Rationalities. Advances in Cognition, Computation, and Consciousness - 3rd International Conference, HAR 2024, Proceedings
Editors: Jean Baratgin, Baptiste Jacquet, Emmanuel Brochier, Hiroshi Yama
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 3-17
Number of pages: 15
ISBN (Print): 9783031845949
DOIs
Publication status: Published - 2025
Event: 3rd International Conference on Human and Artificial Rationalities, HAR 2024 - Paris, France
Duration: 2024 Sept 17 to 2024 Sept 20

Publication series

Name: Lecture Notes in Computer Science
Volume: 15504 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 3rd International Conference on Human and Artificial Rationalities, HAR 2024
Country/Territory: France
City: Paris
Period: 24/9/17 to 24/9/20

Keywords

  • Abduction
  • Deduction
  • Large Language Models
  • Reasoning bias
  • Syllogism

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
