TY - GEN
T1 - Abductive Reasoning with Syllogistic Forms in Large Language Models
AU - Abe, Hirohiko
AU - Ando, Risako
AU - Morishita, Takanobu
AU - Ozeki, Kentaro
AU - Mineshima, Koji
AU - Okada, Mitsuhiro
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
Y1 - 2025
N2 - Research in AI using Large Language Models (LLMs) is rapidly evolving, and the comparison of their performance with human reasoning has become a key concern. Prior studies have indicated that LLMs and humans share similar biases, such as dismissing logically valid inferences that contradict common beliefs. However, criticizing LLMs for these biases might be unfair, considering that human reasoning involves not only formal deduction but also abduction, which draws tentative conclusions from limited information. In its basic structure, abduction can be regarded as the inverse form of syllogism, that is, a process of inferring a minor premise from a major premise and a conclusion. This paper explores the accuracy of LLMs in abductive reasoning by converting a syllogistic dataset into one suitable for abduction. It aims to investigate whether state-of-the-art LLMs exhibit biases in abduction and to identify potential areas for improvement, emphasizing the importance of contextualized reasoning beyond formal deduction. This investigation is vital for advancing the understanding and application of LLMs in complex reasoning tasks, offering insights into bridging the gap between machine and human cognition.
AB - Research in AI using Large Language Models (LLMs) is rapidly evolving, and the comparison of their performance with human reasoning has become a key concern. Prior studies have indicated that LLMs and humans share similar biases, such as dismissing logically valid inferences that contradict common beliefs. However, criticizing LLMs for these biases might be unfair, considering that human reasoning involves not only formal deduction but also abduction, which draws tentative conclusions from limited information. In its basic structure, abduction can be regarded as the inverse form of syllogism, that is, a process of inferring a minor premise from a major premise and a conclusion. This paper explores the accuracy of LLMs in abductive reasoning by converting a syllogistic dataset into one suitable for abduction. It aims to investigate whether state-of-the-art LLMs exhibit biases in abduction and to identify potential areas for improvement, emphasizing the importance of contextualized reasoning beyond formal deduction. This investigation is vital for advancing the understanding and application of LLMs in complex reasoning tasks, offering insights into bridging the gap between machine and human cognition.
KW - Abduction
KW - Deduction
KW - Large Language Models
KW - Reasoning bias
KW - Syllogism
UR - https://www.scopus.com/pages/publications/105002035920
UR - https://www.scopus.com/pages/publications/105002035920#tab=citedBy
U2 - 10.1007/978-3-031-84595-6_1
DO - 10.1007/978-3-031-84595-6_1
M3 - Conference contribution
AN - SCOPUS:105002035920
SN - 9783031845949
T3 - Lecture Notes in Computer Science
SP - 3
EP - 17
BT - Human and Artificial Rationalities. Advances in Cognition, Computation, and Consciousness - 3rd International Conference, HAR 2024, Proceedings
A2 - Baratgin, Jean
A2 - Jacquet, Baptiste
A2 - Brochier, Emmanuel
A2 - Yama, Hiroshi
PB - Springer Science and Business Media Deutschland GmbH
T2 - 3rd International Conference on Human and Artificial Rationalities, HAR 2024
Y2 - 17 September 2024 through 20 September 2024
ER -