ReLope: KL-Regularized LoRA Probes for Multimodal LLM Routing

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the degradation of probe-based routing in multimodal large language models (MLLMs), where visual inputs reduce the separability of correctness signals in hidden states. To mitigate this, the authors propose two complementary probe designs: the Attention Probe, which aggregates hidden states from the preceding layer using attention scores to recover distributed correctness signals, and ReLope, which inserts a lightweight LoRA adapter regularized by a KL term to learn routing-aware representations. Experiments show that both methods consistently outperform existing baselines on multimodal routing tasks, underscoring that improving hidden-state quality is key to effective routing in MLLMs.
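The attention-based aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the learned query vector `query`, the linear probe weights `w_probe`/`b_probe`, and the scaled-dot-product parameterization are all assumptions for the sake of a runnable example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_probe(hidden, query, w_probe, b_probe):
    """Attention-weighted pooling of token hidden states, then a linear
    correctness probe.

    hidden:  (T, d) hidden states from the preceding layer
    query:   (d,)   learned probe query vector (hypothetical parameterization)
    w_probe: (d,)   linear probe weights
    b_probe: scalar probe bias
    Returns an estimate of P(small model answers correctly) in (0, 1).
    """
    # Attention scores over the T tokens, scaled as in dot-product attention.
    scores = softmax(hidden @ query / np.sqrt(hidden.shape[-1]))  # (T,)
    # Weighted sum recovers correctness signal distributed across tokens.
    pooled = scores @ hidden                                      # (d,)
    logit = float(pooled @ w_probe + b_probe)
    return 1.0 / (1.0 + np.exp(-logit))
```

In a routing system, this probability would be thresholded: below the threshold, the query is escalated to the large model.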

📝 Abstract
Routing has emerged as a promising strategy for balancing performance and cost in large language model (LLM) systems that combine lightweight models with powerful but expensive large models. Recent studies show that \emph{probe routing}, which predicts the correctness of a small model using its hidden states, provides an effective solution in text-only LLMs. However, we observe that these probes degrade substantially when applied to multimodal LLMs (MLLMs). Through empirical analysis, we find that the presence of visual inputs weakens the separability of correctness signals in hidden states, making them harder to extract using standard probe designs. To address this challenge, we introduce two complementary approaches for improving probe routing in MLLMs. First, we propose the \emph{Attention Probe}, which aggregates hidden states from the preceding layer based on attention scores to recover distributed correctness signals. Second, we present the \emph{KL-Regularized LoRA Probe (ReLope)}, which inserts a lightweight LoRA adapter and applies a KL regularizer to learn routing-aware representations. Comprehensive experiments show that our methods consistently outperform baselines, suggesting that improving the quality of hidden states is key to effective routing in MLLMs. Our code is available at https://github.com/Spinozaaa/ReLope.
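The ReLope objective pairs a probe loss with a KL regularizer that keeps the LoRA-adapted model close to the base model. A minimal sketch of such a combined loss is below; the binary cross-entropy form, the next-token-distribution KL, and the weight `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def relope_loss(route_logit, label, adapted_logits, base_logits, lam=0.1):
    """Probe BCE plus KL(adapted || base) over output distributions.

    route_logit:    probe's correctness logit (scalar)
    label:          1 if the small model answered correctly, else 0
    adapted_logits: (V,) LM logits with the LoRA adapter active
    base_logits:    (V,) LM logits from the frozen base model
    lam:            KL regularization weight (hypothetical value)
    """
    # Binary cross-entropy on the routing prediction.
    p = 1.0 / (1.0 + np.exp(-route_logit))
    bce = -(label * np.log(p) + (1 - label) * np.log(1 - p))
    # KL divergence penalizes drift of the adapted model from the base model.
    q, r = softmax(adapted_logits), softmax(base_logits)
    kl = float(np.sum(q * (np.log(q) - np.log(r))))
    return bce + lam * kl
```

When the adapter leaves the output distribution unchanged, the KL term vanishes and only the probe loss remains, which is what lets the adapter learn routing-aware representations without degrading the underlying model.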
Problem

Research questions and friction points this paper is trying to address.

multimodal LLMs
probe routing
correctness signals
hidden states
visual inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

probe routing
multimodal LLMs
LoRA
KL regularization
attention mechanism