From Tokens to Photons: Test-Time Physical Prompting for Vision-Language Models

📅 2025-12-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the robust deployment of vision-language models (VLMs) in physical sensor environments. We propose the first test-time adaptation (TTA) paradigm that treats the camera's exposure triad (ISO, shutter speed, and aperture) as learnable physical prompts. Our method requires no gradient updates or architectural modifications; instead, it adapts forward-only via multi-view physical acquisition, source-affinity filtering, low-entropy digital augmentation, and zero-temperature softmax hard voting. Crucially, we extend the prompt layer from digital tokens down to the optical photon level, establishing a "moment-of-measurement" physical prompting framework, and pair it with a calibration-friendly, low-overhead selection-voting architecture. On the ImageNet-ES benchmark, starting from a single auto-exposure capture, our approach improves accuracy by up to 25.6 percentage points over purely digital TTA, and by an additional 3.4 points over conventional sensor control combined with TTA, while maintaining strong robustness under strict latency constraints.
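The "zero-temperature softmax hard voting" mentioned above is simply majority voting: as the softmax temperature approaches zero, each view's class distribution collapses to a one-hot vector at its argmax, so averaging those distributions is equivalent to counting votes. A minimal NumPy sketch of that equivalence (illustrative only, not the authors' code):

```python
import numpy as np

def softmax(logits, temperature):
    # Temperature-scaled softmax over the class axis.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def hard_vote(view_logits):
    # Zero-temperature limit: each view casts a one-hot vote for its
    # argmax class; the class with the most votes wins.
    votes = view_logits.argmax(axis=-1)
    return np.bincount(votes, minlength=view_logits.shape[-1]).argmax()

# Three views over four classes: two views agree on class 2.
logits = np.array([[0.1, 0.2, 2.0, 0.0],
                   [0.0, 1.5, 0.1, 0.2],
                   [0.3, 0.1, 1.8, 0.0]])
assert hard_vote(logits) == 2
# As temperature -> 0, the averaged softmax approaches the vote histogram.
assert softmax(logits, temperature=1e-3).mean(axis=0).argmax() == 2
```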

📝 Abstract
To extend vision-language models (VLMs) from web images to sensor-mediated physical environments, we propose Multi-View Physical-prompt for Test-Time Adaptation (MVP), a forward-only framework that moves test-time adaptation (TTA) from tokens to photons by treating the camera exposure triangle (ISO, shutter speed, and aperture) as physical prompts. At inference, MVP acquires a library of physical views per scene, selects the top-k sensor settings using a source-affinity score, evaluates each retained view under lightweight digital augmentations, keeps the lowest-entropy subset of augmented views, and aggregates predictions with a zero-temperature softmax (i.e., hard voting). This selection-then-vote design is simple, calibration-friendly, and requires no gradients or model modifications. On ImageNet-ES and ImageNet-ES-Diverse, MVP consistently outperforms digital-only TTA on single auto-exposure captures by up to 25.6 percentage points (pp), and delivers up to 3.4 pp additional gains over pipelines that combine conventional sensor control with TTA. MVP remains effective under reduced parameter candidate sets that lower capture latency, demonstrating practicality. These results support the main claim that, beyond post-capture prompting, measurement-time control (selecting and combining real physical views) substantially improves robustness for VLMs.
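The selection-then-vote pipeline in the abstract can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' implementation: `mvp_predict`, its arguments, and the `k` and `keep_frac` parameters are hypothetical names, and the source-affinity score is taken as a given input rather than computed.

```python
import numpy as np

def entropy(p):
    # Shannon entropy (in nats) of probability vectors along the last axis.
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def mvp_predict(view_probs, affinity, k=3, keep_frac=0.5):
    """Selection-then-vote over physical views (illustrative sketch).

    view_probs: (V, A, C) class probabilities for V physical views,
                each under A digital augmentations, over C classes.
    affinity:   (V,) source-affinity score per physical view
                (higher = closer to the source distribution).
    """
    # 1) Keep the top-k physical views by source affinity.
    top = np.argsort(affinity)[::-1][:k]
    probs = view_probs[top].reshape(-1, view_probs.shape[-1])  # (k*A, C)
    # 2) Keep the lowest-entropy fraction of augmented predictions.
    ent = entropy(probs)
    n_keep = max(1, int(keep_frac * len(ent)))
    kept = probs[np.argsort(ent)[:n_keep]]
    # 3) Zero-temperature softmax aggregation = hard majority vote.
    votes = kept.argmax(axis=-1)
    return np.bincount(votes, minlength=probs.shape[-1]).argmax()

# Example: 4 physical views, 2 augmentations each, 3 classes.
view_probs = np.array([
    [[0.05, 0.90, 0.05], [0.10, 0.80, 0.10]],  # high-affinity, confident
    [[0.60, 0.20, 0.20], [0.50, 0.30, 0.20]],  # low-affinity, noisy
    [[0.10, 0.85, 0.05], [0.20, 0.70, 0.10]],  # high-affinity, confident
    [[0.70, 0.20, 0.10], [0.40, 0.40, 0.20]],  # low-affinity, noisy
])
affinity = np.array([0.9, 0.1, 0.8, 0.2])
assert mvp_predict(view_probs, affinity, k=2, keep_frac=0.5) == 1
```

With `k=2`, only the two well-exposed views survive the affinity filter, so the noisy captures never reach the vote; this is the gradient-free analogue of discarding out-of-distribution inputs before aggregation.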
Problem

Research questions and friction points this paper is trying to address.

Adapting vision-language models to physical sensor environments
Optimizing camera settings as physical prompts for test-time adaptation
Improving robustness by selecting and combining real physical views
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physical prompting via camera exposure triangle
Selection-then-vote with source-affinity and entropy filtering
Gradient-free test-time adaptation using multiple sensor settings
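The page does not define the source-affinity score itself. One common gradient-free proxy, shown here purely as an assumption and not as the paper's definition, is each physical view's mean maximum-softmax confidence: views the model is already confident on are presumed closer to its source (web-image) domain.

```python
import numpy as np

def confidence_affinity(view_probs):
    # Hypothetical source-affinity proxy: mean max-softmax confidence
    # across augmentations for each physical view.
    # view_probs: (V, A, C) probabilities per view, augmentation, class.
    return view_probs.max(axis=-1).mean(axis=-1)

probs = np.array([
    [[0.90, 0.05, 0.05], [0.80, 0.10, 0.10]],  # confidently classified view
    [[0.40, 0.30, 0.30], [0.35, 0.35, 0.30]],  # uncertain view
])
aff = confidence_affinity(probs)
assert aff[0] > aff[1]  # the confident view ranks higher
```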
Boyeong Im
Seoul National University
Wooseok Lee
Seoul National University
Yoojin Kwon
Seoul National University
Hyung-Sin Kim
Seoul National University, Data Science
On-device AI · Machine learning · Computer vision · Internet of Things