
Large Language Models as Analogical Reasoners


Abstract

Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks, but typically demands labeled exemplars of the reasoning process. In this work, we introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models. Drawing inspiration from how humans recall relevant past experiences when addressing new challenges, our approach prompts language models to self-generate relevant exemplars or lessons in the context, before proceeding to solve the given problem. This method presents several advantages: it obviates the need for manual annotation or external data retrieval, offering generality and convenience, while also tailoring the generated exemplars to each unique problem, offering adaptability. Experimental results show that our approach outperforms 0-shot CoT and few-shot CoT in a variety of reasoning tasks, including math problem solving in GSM8K and MATH, code generation in Codeforces, and other reasoning tasks in BIG-Bench, with an average accuracy gain of +5%.
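The sketch below illustrates the core idea of the approach: a single prompt asks the model to first recall a few relevant exemplar problems with solutions, and then solve the target problem using those exemplars as in-context guidance. The prompt wording and the `query_llm` function are illustrative placeholders, not the authors' exact prompt or API.

```python
# Minimal sketch of analogical prompting (illustrative; not the authors' exact prompt).
# `query_llm` is a placeholder for any text-completion / chat API.

def query_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its completion."""
    raise NotImplementedError("Plug in your preferred LLM client here.")


ANALOGICAL_PROMPT_TEMPLATE = """\
Problem: {problem}

Instructions:
## Relevant problems:
Recall three relevant and distinct problems. For each, describe the problem and
explain its solution.

## Solve the initial problem:
Using the insights from the recalled problems, solve the initial problem step by step.
"""


def solve_with_analogical_prompting(problem: str) -> str:
    """Ask the model to self-generate exemplars, then solve the target problem."""
    prompt = ANALOGICAL_PROMPT_TEMPLATE.format(problem=problem)
    return query_llm(prompt)


if __name__ == "__main__":
    print(solve_with_analogical_prompting(
        "What is the area of a square whose diagonal has length 10?"
    ))
```

Because the exemplars are generated per query, no labeled CoT demonstrations or retrieval corpus are required, and the recalled problems are naturally tailored to the problem at hand.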

Authors

Michihiro Yasunaga*, Xinyun Chen, Yujia Li, Ice Pasupat, Jure Leskovec*, Percy Liang*, Ed Chi, Denny Zhou

* External author

Venue

arXiv