🤖 AI Summary
Domain shifts, such as variations in image statistics, viewpoint and occlusion differences, and inter-class visual ambiguity, severely degrade the robustness of object recognition models in real-world scenarios. To address this, we propose RT-VLM: (1) a 4-Clues synthetic data paradigm that pairs each image with bounding boxes, category names, and object-level and scene-level textual descriptions; (2) parameter-efficient multi-task fine-tuning of Llama-3.2-11B-Vision-Instruct; and (3) a two-stage self-reflection reasoning mechanism that explicitly models and resolves multimodal evidence conflicts during inference. RT-VLM is the first framework to jointly leverage structured multimodal cue generation and explicit self-correction, balancing interpretability and cross-domain generalization. Evaluated on multiple benchmarks that each isolate a single domain shift, RT-VLM consistently outperforms strong baselines, demonstrating superior stability and transferability for object recognition under complex real-world conditions.
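To make the "4-Clues" structure concrete, the sketch below shows one plausible shape for a single training record. The class and field names are assumptions for illustration only; the paper's released schema is not specified here.

```python
# Illustrative shape of one "4-Clues" training record.
# Field names are hypothetical, not the paper's actual schema.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class FourCluesRecord:
    image_path: str
    boxes: List[Tuple[int, int, int, int]]  # clue 1: per-object bounding boxes (x1, y1, x2, y2)
    names: List[str]                        # clue 2: category name per box
    object_captions: List[str]              # clue 3: object-level description per box
    scene_caption: str                      # clue 4: one context-level caption for the whole scene

    def __post_init__(self):
        # The three per-object clues must stay aligned one-to-one.
        assert len(self.boxes) == len(self.names) == len(self.object_captions)


record = FourCluesRecord(
    image_path="example.jpg",
    boxes=[(10, 10, 50, 50)],
    names=["dog"],
    object_captions=["a small brown dog sitting on grass"],
    scene_caption="a sunny park with a dog near a bench",
)
```

Keeping the per-object clues as parallel lists (with the alignment check above) makes it easy to serialize each record into a supervision prompt during fine-tuning.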
📝 Abstract
Real-world deployments often expose modern object recognition models to domain shifts that precipitate a severe drop in accuracy. Such shifts encompass (i) variations in low-level image statistics, (ii) changes in object pose and viewpoint, (iii) partial occlusion, and (iv) visual confusion across adjacent classes. To mitigate this degradation, we introduce the Re-Thinking Vision Language Model (RT-VLM) framework. The foundation of this framework is a unique synthetic dataset generation pipeline that produces images annotated with "4-Clues": precise bounding boxes, class names, detailed object-level captions, and a comprehensive context-level caption for the entire scene. We then perform parameter-efficient supervised tuning of Llama-3.2-11B-Vision-Instruct on this resource. At inference time, a two-stage Re-Thinking scheme is executed: the model first emits its own four clues, then re-examines these responses as evidence and iteratively corrects them. Across robustness benchmarks that isolate individual domain shifts, RT-VLM consistently surpasses strong baselines. These findings indicate that integrating structured multimodal evidence with an explicit self-critique loop constitutes a promising route toward reliable and transferable visual understanding.
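The two-stage Re-Thinking scheme described above can be sketched as a small inference loop. All interface names here (`generate_clues`, `critique_clues`, `revise_clues`) are hypothetical stand-ins for the model's actual prompting interface, and the toy model exists only to make the control flow runnable; this is a sketch of the idea, not the paper's implementation.

```python
# Sketch of the two-stage Re-Thinking inference loop.
# Stage 1: the model emits its own four clues.
# Stage 2: it re-examines those clues as evidence and iteratively corrects them.

def rethink(model, image, max_rounds=2):
    clues = model.generate_clues(image)               # stage 1: boxes, names, captions
    for _ in range(max_rounds):
        conflicts = model.critique_clues(image, clues)  # stage 2: find evidence conflicts
        if not conflicts:                             # clues are self-consistent: stop early
            break
        clues = model.revise_clues(image, clues, conflicts)
    return clues


class ToyModel:
    """Minimal stand-in: its caption and class name disagree until revised."""

    def generate_clues(self, image):
        return {"boxes": [(10, 10, 50, 50)], "names": ["cat"],
                "object_captions": ["a small dog sitting"], "scene_caption": "a park"}

    def critique_clues(self, image, clues):
        # Conflict: the object caption mentions a dog but the class name says cat.
        return ["name/caption mismatch"] if clues["names"] == ["cat"] else []

    def revise_clues(self, image, clues, conflicts):
        return {**clues, "names": ["dog"]}


result = rethink(ToyModel(), image=None)  # the class name is corrected to "dog"
```

The early exit when no conflicts remain keeps the loop cheap in the common case where the first-pass clues are already mutually consistent.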