MeRF: Motivation-enhanced Reinforcement Finetuning for Large Reasoning Models

📅 2025-06-23
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the gap between reinforcement learning and in-context learning in training large language models (LLMs) for complex reasoning. The proposed method, MeRF, injects the verifiable reward specification directly into the prompt, so the model improves its responses with awareness of the objective it is being optimized against, aligning in-context motivation with reinforcement finetuning. On the Knights and Knaves logic puzzle benchmark, MeRF achieves substantial gains over baselines, and ablations show that performance improves as the in-context motivation becomes more consistent with the external reward function, while the model can also adapt to misleading motivations through reinforcement learning.

Technology Category

Application Category

๐Ÿ“ Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful learn-to-reason paradigm for Large Language Models (LLMs) to tackle complex reasoning tasks. However, existing RLVR methods overlook one of the most distinctive capabilities of LLMs, their in-context learning ability, as prominently demonstrated by the success of Chain-of-Thought (CoT) prompting. This motivates us to explore how reinforcement learning can be effectively combined with in-context learning to better improve the reasoning capabilities of LLMs. In this paper, we introduce Motivation-enhanced Reinforcement Finetuning (MeRF), an intuitive yet effective method that enhances reinforcement learning of LLMs by "telling LLMs the rules of the game". Specifically, MeRF directly injects the reward specification into the prompt, which serves as an in-context motivation for the model to improve its responses with awareness of the optimization objective. This simple modification leverages the in-context learning ability of LLMs to align generation with optimization, thereby incentivizing the model to generate desired outputs from both inner motivation and external reward. Empirical evaluations on the Knights and Knaves (K&K) logic puzzle reasoning benchmark demonstrate that MeRF achieves substantial performance gains over baselines. Moreover, ablation studies show that performance improves with greater consistency between the in-context motivation and the external reward function, while the model also demonstrates an ability to adapt to misleading motivations through reinforcement learning.
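To make the core idea concrete, here is a minimal Python sketch of MeRF-style prompting on a Knights and Knaves task: the reward specification is prepended to the puzzle as in-context motivation, and a rule-based function computes the same verifiable reward used during finetuning. The prompt template, answer-tag format, and reward values are illustrative assumptions, not the authors' released implementation.

```python
import re

# Hypothetical reward specification shown to the model as in-context motivation.
# MeRF's key move is that this text mirrors the external reward function below.
REWARD_SPEC = (
    "Scoring rules:\n"
    "+1 if the final answer correctly labels every character as knight or knave,\n"
    "-1 otherwise. Put your reasoning in <think>...</think> and your final\n"
    "answer in <answer>...</answer>."
)

def build_merf_prompt(puzzle: str) -> str:
    """Inject the reward specification into the prompt ('the rules of the game')."""
    return f"{REWARD_SPEC}\n\nPuzzle:\n{puzzle}\n\nSolve the puzzle."

def verifiable_reward(response: str, gold: dict) -> float:
    """Rule-based verifiable reward: +1 only if every character is labeled correctly."""
    m = re.search(r"<answer>(.*?)</answer>", response, re.S)
    if not m:
        return -1.0
    answer = m.group(1).lower()
    correct = all(f"{name.lower()} is a {role}" in answer for name, role in gold.items())
    return 1.0 if correct else -1.0
```

During RLVR training, `verifiable_reward` would score sampled rollouts for the policy update; MeRF's change is only that `build_merf_prompt` exposes that same scoring rule to the model in context.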
Problem

Research questions and friction points this paper is trying to address.

Combining reinforcement learning with in-context learning for LLMs
Enhancing LLM reasoning via reward-aware prompt motivation
Improving logic puzzle solving with motivation-driven fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines reinforcement learning with in-context learning
Injects reward specification into prompts as motivation
Aligns generation with optimization via inner motivation
🔎 Similar Papers
No similar papers found.