Detached Skip-Links and $R$-Probe: Decoupling Feature Aggregation from Gradient Propagation for MLLM OCR

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance limitations of multimodal large language models (MLLMs) on OCR tasks, which the authors trace to interference from high-level semantic gradients that degrades fine-grained visual information during deep feature fusion. To mitigate this, they propose Detached Skip-Links, an asymmetric mechanism that reuses shallow features in the forward pass while truncating gradients through the skip branch, preserving low-level visual features without introducing additional parameters. They further introduce the R-Probe evaluation framework, which quantifies the pixel-level reconstructability of projected visual tokens using a shallow decoder initialized from early LLM layers. Experiments across diverse Vision Transformer backbones and multimodal benchmarks, at scales up to 7 million training samples, demonstrate consistent and significant OCR improvements, with additional gains on general-purpose tasks.
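The core mechanism, as described, amounts to a stop-gradient on the skip branch: the shallow feature contributes to the forward computation, but the backward pass through the skip is blocked. A minimal PyTorch sketch, assuming the paper's mechanism reduces to `detach()` on the skip input (the class name `DetachedSkipLink` is illustrative, not from the paper):

```python
import torch
import torch.nn as nn

class DetachedSkipLink(nn.Module):
    """Sketch of a detached skip-link: the shallow feature is reused in
    the forward pass, but .detach() blocks the backward path, so
    high-level semantic gradients cannot reach early visual layers
    through the skip branch. No learnable parameters are added."""

    def forward(self, deep: torch.Tensor, shallow: torch.Tensor) -> torch.Tensor:
        # Forward: deep + shallow (feature reuse).
        # Backward: gradients flow only through `deep`.
        return deep + shallow.detach()
```

Gradients from the fused output therefore reach the shallow layers only through the main (deep) path, which is the asymmetry the summary describes.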

📝 Abstract
Multimodal large language models (MLLMs) excel at high-level reasoning yet fail on OCR tasks where fine-grained visual details are compromised or misaligned. We identify an overlooked optimization issue in multi-layer feature fusion: skip pathways introduce direct back-propagation paths from high-level semantic objectives to early visual layers, overwriting low-level signals and destabilizing training. To mitigate this gradient interference, we propose Detached Skip-Links, a minimal modification that reuses shallow features in the forward pass while stopping gradients through the skip branch during joint training. This asymmetric design reduces gradient interference, improving stability and convergence without adding learnable parameters. To diagnose whether fine-grained information is preserved and usable by an LLM, we introduce $R$-Probe, which measures the pixel-level reconstructability of projected visual tokens using a shallow decoder initialized from the first quarter of the LLM layers. Across multiple ViT backbones and multimodal benchmarks, and at scales up to 7M training samples, our approach consistently improves performance on OCR-centric benchmarks and delivers clear gains on general multimodal tasks.
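The abstract specifies $R$-Probe's decoder initialization but not its training protocol. A hedged sketch of the general idea, reconstructability probing: fit a small decoder on frozen visual tokens and report the final reconstruction error. The MSE objective, optimizer, step count, and the name `r_probe_score` are assumptions for illustration, not the paper's exact recipe:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def r_probe_score(tokens: torch.Tensor, pixels: torch.Tensor,
                  decoder: nn.Module, steps: int = 200,
                  lr: float = 1e-2) -> float:
    """Fit a shallow decoder to reconstruct pixel targets from frozen
    visual tokens; final MSE serves as a reconstructability score
    (lower = more fine-grained detail preserved in the tokens)."""
    opt = torch.optim.Adam(decoder.parameters(), lr=lr)
    tokens = tokens.detach()  # probe only; never update the encoder
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(decoder(tokens), pixels)
        loss.backward()
        opt.step()
    return loss.item()
```

In the paper the decoder is initialized from the first quarter of the LLM layers; the sketch accepts any `nn.Module` so a toy decoder can stand in for it.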
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
OCR
Gradient Interference
Feature Fusion
Skip Connections
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detached Skip-Links
R-Probe
gradient decoupling
multimodal LLM
OCR