Reasoning Paths Optimization: Learning to Reason and Explore From Diverse Paths

DAMO Academy, Alibaba Group · Nanyang Technological University · Singapore University of Technology and Design

RPO is a training framework for exploring and learning from diverse reasoning paths.

Abstract

Advanced models such as OpenAI o1 exhibit impressive problem-solving capabilities through step-by-step reasoning. However, they may still falter on more complex problems, making errors that disrupt their reasoning paths. We attribute this to the expansive solution space, where each intermediate step risks diverging into mistakes.

To enhance language model reasoning, we introduce a specialized training framework called Reasoning Paths Optimization (RPO), which learns to reason by exploring and learning from diverse paths. At each reasoning step, our approach encourages favorable branches while penalizing unfavorable ones, improving the model's overall problem-solving performance. Reasoning Paths Optimization does not rely on large-scale human-annotated rationales or outputs from closed-source models, making it scalable and data-efficient. We focus on multi-step reasoning tasks, such as math word problems and science-based exam questions.
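To make the per-step objective concrete, below is a minimal sketch in Python of a pairwise preference loss over a favorable versus an unfavorable branch that share the same reasoning prefix. The function name, the DPO-style log-sigmoid form, and the toy numbers are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def branch_preference_loss(logp_fav, logp_unfav, ref_logp_fav, ref_logp_unfav, beta=0.1):
    """Pairwise preference loss for two branches sharing the same reasoning prefix.

    Each argument is the summed log-probability of the branch continuation under
    the policy (logp_*) or a frozen reference model (ref_logp_*). The loss pushes
    the policy to favor the good branch relative to the bad one at this step.
    """
    fav_ratio = logp_fav - ref_logp_fav
    unfav_ratio = logp_unfav - ref_logp_unfav
    # Encourage the favorable branch, penalize the unfavorable one.
    return -F.logsigmoid(beta * (fav_ratio - unfav_ratio))

# Toy usage: branch pairs collected by exploring from intermediate reasoning
# steps of sampled solution paths (values here are made up for illustration).
pairs = [
    (torch.tensor(-12.3), torch.tensor(-15.8), torch.tensor(-13.0), torch.tensor(-14.9)),
    (torch.tensor(-8.1), torch.tensor(-9.7), torch.tensor(-8.4), torch.tensor(-9.2)),
]
loss = torch.stack([branch_preference_loss(*p) for p in pairs]).mean()
print(loss)
```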

Experiments demonstrate that our framework significantly enhances the reasoning performance of large language models, with up to 3.1% and 4.3% improvement on GSM8K and MMLU (STEM), respectively.

Results

[Figure: RPO results on reasoning benchmarks]

BibTeX

@inproceedings{chia2024rpo,
  author    = {Yew Ken Chia and Guizhen Chen and Weiwen Xu and Luu Anh Tuan and Soujanya Poria and Lidong Bing},
  title     = {Reasoning Paths Optimization: Learning to Reason and Explore From Diverse Paths},
  booktitle = {EMNLP},
  year      = {2024},
}