🤖 AI Summary
Large language models (including multimodal variants) explore inefficiently and produce redundant reasoning paths during reasoning and reinforcement learning (RL) training, as random sampling fails to induce high-level strategic diversity. To address this, we propose an “implicit reasoning palette” mechanism based on variational autoencoders (VAEs): it maps question-answer pairs into semantically grounded latent variables, which are then decoded into learnable prefixes that enable explicit, interpretable, on-demand control over reasoning style and structure. The method combines latent-variable conditioning, learnable token injection, supervised fine-tuning (SFT) warm-starting, and subsequent RL optimization. Evaluated across multiple reasoning benchmarks, it significantly outperforms standard RL baselines: it improves exploration efficiency, enables continual learning, and supports visualizable intervention in reasoning trajectories as well as controllable generation. To our knowledge, this is the first work to achieve explicit control over reasoning strategies at the level of the latent space.
📝 Abstract
Exploration capacity shapes both inference-time performance and reinforcement learning (RL) training for large (vision-) language models, as stochastic sampling often yields redundant reasoning paths with little high-level diversity. This paper proposes Reasoning Palette, a novel latent-modulation framework that endows the model with a stochastic latent variable for strategic contextualization, guiding its internal planning prior to token generation. This latent context is inferred from the mean-pooled embedding of a question-answer pair via a variational autoencoder (VAE), where each sampled latent potentially encodes a distinct reasoning context. During inference, a sampled latent is decoded into learnable token prefixes and prepended to the input prompt, modulating the model's internal reasoning trajectory. In this way, the model samples internally over reasoning strategies before generating output, which shapes the style and structure of the entire response sequence. A brief supervised fine-tuning (SFT) warm-up phase allows the model to adapt to this latent conditioning. During RL optimization, Reasoning Palette facilitates structured exploration through on-demand injection of diverse reasoning modes, significantly improving exploration efficiency and sustained learning capability. Experiments across multiple reasoning benchmarks demonstrate that our method enables interpretable and controllable modulation of the (vision-) language model's strategic behavior, thereby achieving consistent performance gains over standard RL methods.
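The abstract's pipeline (mean-pool a question-answer embedding, infer a Gaussian latent with a VAE encoder, sample via reparameterization, decode the latent into token prefixes, and prepend them to the prompt) can be sketched as follows. This is a minimal NumPy illustration of the data flow only: all dimensions, weight matrices, and function names (`encode`, `sample_latent`, `decode_to_prefix`, `modulated_input`) are hypothetical placeholders, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (hypothetical): embedding dim, latent dim, number of prefix tokens.
D_EMB, D_LAT, N_PREFIX = 16, 4, 3

# Hypothetical VAE parameters, randomly initialised for illustration;
# in practice these would be learned during the SFT warm-up and RL phases.
W_mu  = rng.normal(scale=0.1, size=(D_EMB, D_LAT))
W_lv  = rng.normal(scale=0.1, size=(D_EMB, D_LAT))
W_dec = rng.normal(scale=0.1, size=(D_LAT, N_PREFIX * D_EMB))

def encode(qa_token_embs):
    """Mean-pool QA-pair token embeddings, then map to Gaussian latent params."""
    pooled = qa_token_embs.mean(axis=0)          # (D_EMB,)
    return pooled @ W_mu, pooled @ W_lv          # mu, log-variance: (D_LAT,) each

def sample_latent(mu, logvar):
    """Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode_to_prefix(z):
    """Decode a sampled latent into N_PREFIX prefix-token embeddings."""
    return (z @ W_dec).reshape(N_PREFIX, D_EMB)

def modulated_input(prompt_embs, qa_token_embs):
    """Prepend the decoded latent prefix to the prompt embeddings."""
    mu, logvar = encode(qa_token_embs)
    z = sample_latent(mu, logvar)
    return np.vstack([decode_to_prefix(z), prompt_embs])

qa = rng.normal(size=(10, D_EMB))      # toy QA-pair token embeddings
prompt = rng.normal(size=(7, D_EMB))   # toy prompt token embeddings
x = modulated_input(prompt, qa)
print(x.shape)                         # (N_PREFIX + 7, D_EMB)
```

Because each call resamples `z`, repeated calls on the same prompt yield different prefixes, which is how such a scheme could drive diverse reasoning modes during RL exploration.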