🤖 AI Summary
This work addresses the robust deployment of vision-language models (VLMs) in physical, sensor-mediated environments. We propose the first test-time adaptation (TTA) paradigm that treats the camera's exposure triad (ISO, shutter speed, and aperture) as learnable physical prompts. Our method requires no gradient updates or architectural modifications; instead, it adapts purely in the forward pass via multi-view physical acquisition, source-affinity filtering, low-entropy selection of digital augmentations, and zero-temperature-softmax hard voting. Crucially, we extend the prompt layer from digital tokens down to the optical, photon level, establishing a "moment-of-measurement" physical-prompting framework, and we design a calibration-friendly, low-overhead selection-then-vote architecture. On the ImageNet-ES benchmark, starting from a single automatic-exposure capture, our approach improves accuracy by up to 25.6 percentage points over purely digital TTA, and by a further 3.4 points over conventional sensor control combined with TTA, while remaining robust under strict latency constraints.
📝 Abstract
To extend vision-language models (VLMs) from web images to sensor-mediated physical environments, we propose Multi-View Physical-prompt for Test-Time Adaptation (MVP), a forward-only framework that moves test-time adaptation (TTA) from tokens to photons by treating the camera exposure triangle--ISO, shutter speed, and aperture--as physical prompts. At inference, MVP acquires a library of physical views per scene, selects the top-k sensor settings using a source-affinity score, evaluates each retained view under lightweight digital augmentations, keeps the lowest-entropy subset of augmented views, and aggregates predictions with a zero-temperature softmax (i.e., hard voting). This selection-then-vote design is simple, calibration-friendly, and requires neither gradients nor model modifications. On ImageNet-ES and ImageNet-ES-Diverse, MVP consistently outperforms digital-only TTA on single Auto-Exposure captures by up to 25.6 percentage points (pp), and delivers up to 3.4 pp of additional gain over pipelines that combine conventional sensor control with TTA. MVP remains effective with reduced candidate sets of sensor parameters, which lowers capture latency and demonstrates practicality. These results support the main claim that, beyond post-capture prompting, measurement-time control--selecting and combining real physical views--substantially improves the robustness of VLMs.
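The selection-then-vote pipeline described above (acquire views, rank by source affinity, augment, keep the lowest-entropy subset, hard-vote) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `model`, `affinity`, and `augment` are hypothetical stand-ins for the VLM's logit function, the source-affinity score, and the digital augmentation set, and the top-k and keep-fraction values are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def mvp_predict(views, model, affinity, augment, k=3, keep_frac=0.5):
    """Selection-then-vote sketch of MVP (hypothetical interfaces).

    views:    physical captures of one scene under different exposure settings
    model:    image -> class logits
    affinity: image -> scalar source-affinity score (higher = more source-like)
    augment:  image -> list of lightly augmented copies
    """
    # 1) Rank physical views by source affinity and keep the top-k settings.
    scores = np.array([affinity(v) for v in views])
    top = np.argsort(scores)[::-1][:k]
    # 2) Expand each retained view with lightweight digital augmentations.
    augmented = [a for i in top for a in augment(views[i])]
    # 3) Score every augmented view; keep the lowest-entropy subset.
    probs = np.stack([softmax(model(a)) for a in augmented])
    n_keep = max(1, int(keep_frac * len(augmented)))
    kept = probs[np.argsort(entropy(probs))[:n_keep]]
    # 4) Zero-temperature softmax = one-hot argmax; aggregate by majority vote.
    votes = kept.argmax(axis=-1)
    return int(np.bincount(votes, minlength=probs.shape[-1]).argmax())

# Toy demo: "views" are raw 3-class logit vectors and the model is the
# identity, so the pipeline's mechanics are easy to trace by hand.
views = [np.array([5., 0., 0.]), np.array([0., 4., 0.]),
         np.array([0., 0., 3.]), np.array([1., 0., 0.]),
         np.array([0., 1., 0.])]
pred = mvp_predict(views, model=lambda v: v,
                   affinity=lambda v: v.max(),
                   augment=lambda v: [v, v * 1.1])  # pred == 0
```

Because every step is a forward pass plus sorting and counting, the sketch mirrors the paper's claim of a gradient-free, calibration-friendly design; the only cost beyond a single capture is the extra physical views and augmented inferences.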