🤖 AI Summary
Current large language models depend on high-quality training data for behavioral alignment, which is costly to collect and generalizes poorly, and they lack general-purpose control mechanisms at test time. This work proposes a test-time behavior enhancement method that constructs pairs of positively and negatively polarized prompts and contrasts the resulting model responses, either the token probability distributions of large language models (LLMs) or the visual attention maps of vision-language models (VLMs), to steer outputs toward responses better aligned with human preferences, without any additional training. It is the first approach to extend contrastive decoding to multimodal settings and diverse behavioral objectives, achieving zero-training, cross-modal alignment. Experiments demonstrate that the method significantly improves behavioral performance of LLMs on 3H alignment tasks and enhances behavior-consistent visual grounding in VLM-based visual question answering, enabling efficient, low-cost, and reliable behavioral control.
📝 Abstract
Reliable AI systems require large language models (LLMs) to exhibit behaviors aligned with human preferences and values. However, most existing alignment approaches operate at training time and rely on additional high-quality data, incurring significant computational and annotation costs. While recent work has shown that contrastive decoding can leverage a model's internal distributions to improve specific capabilities, its applicability remains limited to narrow behavioral scopes and scenarios. In this work, we introduce Polarity-Prompt Contrastive Decoding (PromptCD), a test-time behavior control method that generalizes contrastive decoding to broader enhancement settings. PromptCD constructs paired positive and negative guiding prompts for a target behavior and contrasts the resulting model responses (specifically, token-level probability distributions in LLMs and visual attention patterns in VLMs) to reinforce desirable outcomes. This formulation extends contrastive decoding to a wide range of enhancement objectives and is applicable to both LLMs and Vision-Language Models (VLMs) without additional training. For LLMs, experiments on the "3H" alignment objectives (helpfulness, honesty, and harmlessness) demonstrate consistent and substantial improvements, indicating that post-trained models can achieve meaningful self-enhancement purely at test time. For VLMs, we further analyze contrastive effects on visual attention, showing that PromptCD significantly improves VQA performance by reinforcing behavior-consistent visual grounding. Collectively, these results highlight PromptCD as a simple, general, and cost-efficient strategy for reliable behavior control across modalities.
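The abstract describes contrasting token-level probability distributions obtained under a positive and a negative guiding prompt. The single-step sketch below illustrates the general contrastive-decoding idea on a toy vocabulary; the `alpha` strength parameter, the combination rule `(1 + alpha) * pos - alpha * neg`, and the toy logits are illustrative assumptions, not the paper's exact formulation.

```python
# Toy sketch of one decoding step of polarity-prompt contrastive decoding.
# `pos` / `neg` stand in for next-token logits produced by the same model
# under a positively vs. negatively polarized behavior prompt.
import math

def contrastive_logits(pos, neg, alpha=1.0):
    """Amplify tokens the positively-prompted model prefers over the
    negatively-prompted one (assumed rule: (1 + a) * pos - a * neg)."""
    return [(1.0 + alpha) * p - alpha * n for p, n in zip(pos, neg)]

def next_token(pos, neg, alpha=1.0):
    """Greedy pick from the softmax of the contrasted logits."""
    scores = contrastive_logits(pos, neg, alpha)
    m = max(scores)                       # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    return probs.index(max(probs)), probs

# Toy 4-token vocabulary: ["refuse", "comply", "hedge", "deflect"].
pos = [2.0, 1.0, 0.5, 0.0]  # logits under the positive (desired-behavior) prompt
neg = [2.5, 0.2, 0.4, 0.1]  # logits under the negative (undesired-behavior) prompt

token_id, probs = next_token(pos, neg, alpha=1.0)
# Greedy decoding on `pos` alone would pick token 0; the contrast with the
# negative prompt demotes it and token 1 wins instead.
```

In a real system the two logit vectors would come from two forward passes of the same model over the same context with the two polarized prompts prepended, applied at every generation step.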