Taking Shortcuts for Categorical VQA Using Super Neurons

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of vision-language models in classification-based visual question answering by proposing a training-free acceleration method that requires neither fine-tuning nor low-rank adaptation. The key insight is the discovery that specific scalar activations, dubbed "Super Neurons," in the model's shallow layers are sufficient to support high-accuracy classification. Leveraging this observation, the authors design an efficient probing classifier that enables extreme early exiting from the model's first layer at the first generated token. This approach breaks away from conventional paradigms reliant on attention mechanisms or deep-layer reasoning, achieving up to a 5.10× inference speedup while maintaining robust performance.

📝 Abstract
Sparse Attention Vectors (SAVs) have emerged as an excellent training-free alternative to supervised finetuning or low-rank adaptation to improve the performance of Vision Language Models (VLMs). At their heart, SAVs select a few accurate attention heads for a task of interest and use them as classifiers, rather than relying on the model's prediction. In a similar spirit, we find that directly probing the raw activations of the VLM, in the form of scalar values, is sufficient to yield accurate classifiers on diverse visually grounded downstream tasks. Shifting focus from attention vectors to scalar activations dramatically increases the search space for accurate parameters, allowing us to find more discriminative neurons immediately from the first generated token. We call such activations Super Neurons (SNs). In this probing setting, we discover that enough SNs appear in the shallower layers of the large language model to allow for extreme early exiting from the first layer of the model at the first generated token. Compared to the original network, SNs robustly improve the classification performance while achieving a speedup of up to 5.10x.
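The core idea described above — score each scalar activation for how well it separates classes, keep only the most discriminative ones as "Super Neurons," and use a tiny classifier over them at the first generated token — can be sketched on synthetic data. The separation score and nearest-centroid probe below are illustrative assumptions standing in for the paper's actual selection and probing procedure; the activation matrix is a random stand-in for first-token, shallow-layer hidden states.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for first-token, shallow-layer activations:
# 200 samples x 512 scalar "neurons", two classes. Only neurons
# 0-9 carry a class-dependent mean shift; the rest are pure noise.
n, d, k = 200, 512, 10
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :k] += y[:, None] * 2.0

def select_super_neurons(X, y, top_k):
    """Score each scalar activation by a two-class separation ratio
    (difference of class means over pooled std) and keep the top_k.
    This scoring rule is an illustrative choice, not the paper's."""
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pooled = 0.5 * (X[y == 0].std(0) + X[y == 1].std(0)) + 1e-8
    score = np.abs(mu0 - mu1) / pooled
    return np.argsort(score)[-top_k:]

def probe_predict(X_sel, centroids):
    """Nearest-class-centroid probe over the selected neurons only."""
    d0 = np.linalg.norm(X_sel - centroids[0], axis=1)
    d1 = np.linalg.norm(X_sel - centroids[1], axis=1)
    return (d1 < d0).astype(int)

sn = select_super_neurons(X, y, top_k=k)
centroids = np.stack([X[y == c][:, sn].mean(0) for c in (0, 1)])
acc = (probe_predict(X[:, sn], centroids) == y).mean()
print("selected neurons:", sorted(sn.tolist()))
print(f"probe accuracy: {acc:.2f}")
```

Because the probe reads only a handful of scalars from an early layer, a real deployment could stop the forward pass at that layer for the first token, which is where the reported speedup comes from.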
Problem

Research questions and friction points this paper is trying to address.

Vision Language Models
Categorical VQA
Early Exiting
Model Efficiency
Super Neurons
Innovation

Methods, ideas, or system contributions that make the work stand out.

Super Neurons
Sparse Activation Probing
Early Exiting
Vision Language Models
Training-Free Adaptation