AI Summary
This study addresses the challenge of enhancing chain-of-thought (CoT) reasoning in large audio-language models (LALMs) without additional training. The authors propose a training-free, inference-time model steering approach that synthesizes diverse information sources to construct steering vectors and leverages textual exemplars to guide spoken reasoning, thereby enabling cross-modal knowledge transfer. By combining hidden-state perturbations with CoT prompting, the method achieves accuracy gains of up to 4.4% across four widely used LALMs and four benchmark tasks, outperforming baseline approaches. These results demonstrate the method's generalizability, robustness, and effectiveness in cross-modal guidance for audio-language reasoning.
Abstract
Chain-of-thought (CoT) prompting has been extended to large audio-language models (LALMs) to elicit reasoning, yet enhancing its effectiveness without training remains challenging. We study inference-time model steering as a training-free approach to improving LALM reasoning. We introduce three strategies that use diverse information sources and evaluate them across four LALMs and four benchmarks. Results show general accuracy gains of up to 4.4% over CoT prompting. Notably, we identify a cross-modal transfer effect in which steering vectors derived from a few text samples effectively guide speech-based reasoning, demonstrating high data efficiency. We also examine hyperparameter sensitivity to understand the robustness of these approaches. Our findings position model steering as a practical direction for strengthening LALM reasoning.
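The abstract does not spell out the three steering strategies, but the general idea of activation steering can be sketched as follows: collect hidden states from a few exemplars (e.g., text prompts with and without CoT reasoning), form a steering vector from their mean difference, and add a scaled copy of that vector to the model's hidden states at inference time. This is a minimal, generic sketch; the function names, the mean-difference construction, and the `alpha` scale are illustrative assumptions, not the paper's specific method.

```python
import numpy as np

def steering_vector(pos_states: np.ndarray, neg_states: np.ndarray) -> np.ndarray:
    """Mean-difference steering vector from exemplar activations.

    pos_states / neg_states: (n_examples, hidden_dim) arrays of hidden
    states collected from contrasting exemplars (an assumed construction).
    """
    return pos_states.mean(axis=0) - neg_states.mean(axis=0)

def apply_steering(hidden: np.ndarray, vec: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Perturb a hidden state by adding a scaled steering vector."""
    return hidden + alpha * vec

# Toy demo with random activations standing in for real model states.
rng = np.random.default_rng(0)
hidden_dim = 8
pos = rng.normal(size=(4, hidden_dim))   # e.g., states from CoT text exemplars
neg = rng.normal(size=(4, hidden_dim))   # e.g., states from plain exemplars
vec = steering_vector(pos, neg)
steered = apply_steering(rng.normal(size=hidden_dim), vec, alpha=0.5)
```

In practice such a hook would be attached to a chosen transformer layer during decoding; the cross-modal finding in the abstract corresponds to building `pos`/`neg` from text exemplars while the steered inputs are speech.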