RT-VLM: Re-Thinking Vision Language Model with 4-Clues for Real-World Object Recognition Robustness

📅 2025-08-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Domain shifts—such as image statistics variations, viewpoint/occlusion differences, and inter-class visual ambiguity—severely degrade the robustness of object recognition models in real-world scenarios. To address this, we propose RT-VLM: (1) a 4-Clues synthetic data paradigm integrating bounding boxes, category names, object-level, and scene-level textual descriptions; (2) parameter-efficient multi-task fine-tuning of Llama-3.2-11B-Vision-Instruct; and (3) a two-stage self-reflection reasoning mechanism that explicitly models and resolves multimodal evidence conflicts during inference. RT-VLM is the first framework to jointly leverage structured multimodal cue generation and explicit self-correction, balancing interpretability and cross-domain generalization. Evaluated on multiple isolated domain-shift robustness benchmarks, RT-VLM consistently outperforms strong baselines, demonstrating superior stability and transferability for object recognition under complex real-world conditions.

📝 Abstract
Real-world deployments often expose modern object recognition models to domain shifts that precipitate a severe drop in accuracy. Such shifts encompass (i) variations in low-level image statistics, (ii) changes in object pose and viewpoint, (iii) partial occlusion, and (iv) visual confusion across adjacent classes. To mitigate this degradation, we introduce the Re-Thinking Vision Language Model (RT-VLM) framework. The foundation of this framework is a unique synthetic dataset generation pipeline that produces images annotated with "4-Clues": precise bounding boxes, class names, detailed object-level captions, and a comprehensive context-level caption for the entire scene. We then perform parameter-efficient supervised tuning of Llama 3.2 11B Vision Instruct on this resource. At inference time, a two-stage Re-Thinking scheme is executed: the model first emits its own four clues, then re-examines these responses as evidence and iteratively corrects them. Across robustness benchmarks that isolate individual domain shifts, RT-VLM consistently surpasses strong baselines. These findings indicate that the integration of structured multimodal evidence with an explicit self-critique loop constitutes a promising route toward reliable and transferable visual understanding.
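The 4-Clues record and the two-stage Re-Thinking loop described in the abstract could be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the `FourClues` container, the `re_think` driver, and the equality-based convergence check are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class FourClues:
    bbox: tuple          # (x1, y1, x2, y2) bounding box
    class_name: str      # predicted category name
    object_caption: str  # object-level textual description
    scene_caption: str   # scene-level (context) description

def re_think(model, image, max_rounds: int = 2) -> FourClues:
    """Two-stage Re-Thinking: the model first emits its four clues,
    then re-examines its own answer as evidence and may revise it."""
    clues = model(image, evidence=None)          # stage 1: initial emission
    for _ in range(max_rounds - 1):
        revised = model(image, evidence=clues)   # stage 2: self-critique pass
        if revised == clues:                     # no further correction: stop
            break
        clues = revised
    return clues
```

In this sketch, `model` stands in for the fine-tuned VLM: a callable that returns a `FourClues` prediction, optionally conditioned on its own previous answer.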
Problem

Research questions and friction points this paper is trying to address.

Addressing domain shifts in object recognition accuracy
Mitigating performance drop from occlusion and viewpoint changes
Improving robustness against visual confusion and image variations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic dataset generation with 4-clue annotations
Parameter-efficient tuning of Llama 3.2 Vision model
Two-stage self-critique inference with iterative correction
Junghyun Park
Department of Artificial Intelligence, Graduate School, Konkuk University, Seoul, South Korea
Tuan Anh Nguyen
Postdoctoral researcher, University of Groningen
Wireless sensor networks · Activity Recognition · Context-aware Systems · Energy-efficient buildings
Dugki Min
Department of Artificial Intelligence, Graduate School, Konkuk University, Seoul, South Korea