🤖 AI Summary
Existing LLM alignment methods (e.g., RLHF) rely on costly expert feedback, limiting scalability and lacking fine-grained, inference-time control. This paper proposes W2S-AlignTree—the first inference-time alignment framework integrating weak-to-strong generalization with Monte Carlo Tree Search (MCTS)—which requires no parameter updates and leverages weak-model signals to dynamically guide the strong model's generation paths. Its key innovation is an entropy-aware exploration mechanism that balances exploration and exploitation during tree search while steering the search toward heuristically optimal generation paths in the high-dimensional search tree. Evaluated on sentiment generation, summarization, and instruction following, W2S-AlignTree significantly outperforms strong baselines: for example, Llama3-8B's summarization score improves from 1.89 to 2.19 (+15.9%). Results demonstrate superior efficiency, scalability, and inference-time controllability without retraining.
📝 Abstract
Large Language Models (LLMs) demonstrate impressive capabilities, yet their outputs often suffer from misalignment with human preferences due to the inadequacy of weak supervision and a lack of fine-grained control. Training-time alignment methods like Reinforcement Learning from Human Feedback (RLHF) face prohibitive costs in expert supervision and inherent scalability limitations, offering limited dynamic control during inference. Consequently, there is an urgent need for scalable and adaptable alignment mechanisms. To address this, we propose W2S-AlignTree, a pioneering plug-and-play inference-time alignment framework that, for the first time, synergistically combines Monte Carlo Tree Search (MCTS) with the Weak-to-Strong Generalization paradigm. W2S-AlignTree formulates LLM alignment as an optimal heuristic search problem within a generative search tree. By leveraging a weak model's real-time, step-level signals as alignment proxies and introducing an entropy-aware exploration mechanism, W2S-AlignTree enables fine-grained guidance during the strong model's generation without modifying its parameters. The approach dynamically balances exploration and exploitation in high-dimensional generation search trees. Experiments across controlled sentiment generation, summarization, and instruction following show that W2S-AlignTree consistently outperforms strong baselines. Notably, W2S-AlignTree raises the performance of Llama3-8B from 1.89 to 2.19 on the summarization task, a relative improvement of 15.9%.
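To make the entropy-aware exploration idea concrete, here is a minimal sketch of what such a selection rule might look like as a variant of UCT: the exploitation term averages weak-model (proxy) rewards accumulated at a node, and the exploration bonus is scaled up when the strong model's next-token distribution at that step has high entropy. The `Node` fields, the normalization by `log(vocab_size)`, and the `(1 + h_norm)` scaling are illustrative assumptions for this digest, not the paper's exact formulation.

```python
import math

class Node:
    """A node in the generation search tree (illustrative structure)."""
    def __init__(self, visits, value_sum, policy_probs, parent=None):
        self.visits = visits            # times this node was selected
        self.value_sum = value_sum      # sum of weak-model proxy rewards
        self.policy_probs = policy_probs  # strong model's next-token distribution
        self.parent = parent

def entropy(probs):
    """Shannon entropy of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_aware_uct(node, c=1.4, vocab_size=32000):
    """Score a child for selection: mean proxy reward (exploitation)
    plus a UCT exploration bonus inflated by the normalized predictive
    entropy at this step, so uncertain steps get explored more."""
    exploit = node.value_sum / max(node.visits, 1)
    h_norm = entropy(node.policy_probs) / math.log(vocab_size)  # in [0, 1]
    explore = c * (1 + h_norm) * math.sqrt(
        math.log(node.parent.visits + 1) / (node.visits + 1)
    )
    return exploit + explore
```

Under this sketch, two children with identical visit counts and proxy rewards are ranked by entropy: the one where the strong model is less certain receives the larger exploration bonus, which is one plausible way to realize the exploration/exploitation balance the paper describes.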