DetPO: In-Context Learning with Multi-Modal LLMs for Few-Shot Object Detection

📅 2026-03-24
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing multimodal large language models (MLLMs) struggle to generalize in few-shot object detection to out-of-distribution categories, tasks, or imaging modalities, and fail to effectively integrate visual exemplars with textual descriptions for in-context learning. To address this, the authors propose DetPO, a gradient-free, test-time, black-box prompt optimization method that, for the first time, adapts black-box prompt tuning to few-shot object detection. DetPO improves MLLM performance solely by optimizing plain-text prompts and calibrating prediction confidence, without model fine-tuning or access to internal parameters. Experiments show that DetPO outperforms existing black-box approaches on the Roboflow20-VL and LVIS benchmarks by up to 9.7%.

πŸ“ Abstract
Multi-Modal LLMs (MLLMs) demonstrate strong visual grounding capabilities on popular object detection benchmarks like OdinW-13 and RefCOCO. However, state-of-the-art models still struggle to generalize to out-of-distribution classes, tasks and imaging modalities not typically found in their pre-training. While in-context prompting is a common strategy to improve performance across diverse tasks, we find that it often yields lower detection accuracy than prompting with class names alone. This suggests that current MLLMs cannot yet effectively leverage few-shot visual examples and rich textual descriptions for object detection. Since frontier MLLMs are typically only accessible via APIs, and state-of-the-art open-weights models are prohibitively expensive to fine-tune on consumer-grade hardware, we instead explore black-box prompt optimization for few-shot object detection. To this end, we propose Detection Prompt Optimization (DetPO), a gradient-free test-time optimization approach that refines text-only prompts by maximizing detection accuracy on few-shot visual training examples while calibrating prediction confidence. Our proposed approach yields consistent improvements across generalist MLLMs on Roboflow20-VL and LVIS, outperforming prior black-box approaches by up to 9.7%. Our code is available at https://github.com/ggare-cmu/DetPO
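The gradient-free, test-time prompt optimization the abstract describes can be sketched as a simple hill-climbing loop over plain-text prompts. The scorer below is a toy keyword-overlap function standing in for the black-box objective (in DetPO, that objective is detection accuracy on few-shot visual examples, queried through the MLLM); the phrase pool, keyword set, and function names are illustrative assumptions, not the paper's actual mechanism.

```python
import random

# Hypothetical stand-in for the black-box objective: score a prompt by
# keyword coverage. In DetPO this would instead be an MLLM API call that
# returns detection accuracy on the few-shot training examples.
TARGET_KEYWORDS = {"detect", "bounding box", "ultrasound", "confidence"}

def score_prompt(prompt: str) -> float:
    return sum(kw in prompt for kw in TARGET_KEYWORDS) / len(TARGET_KEYWORDS)

# Candidate phrases a text-rewriting step might append (illustrative;
# a real system could have an LLM propose rewrites instead).
PHRASES = [
    "detect every instance",
    "return a bounding box per object",
    "the images are ultrasound scans",
    "report a calibrated confidence score",
    "ignore background clutter",
]

def optimize_prompt(seed: str, iters: int = 50, rng=None) -> str:
    """Gradient-free greedy hill climbing over plain-text prompts."""
    rng = rng or random.Random(0)
    best, best_score = seed, score_prompt(seed)
    for _ in range(iters):
        candidate = best + " " + rng.choice(PHRASES)  # propose a mutation
        s = score_prompt(candidate)
        if s > best_score:  # keep the mutation only if the score improves
            best, best_score = candidate, s
    return best

print(optimize_prompt("Find the objects."))
```

Because no gradients flow through the scorer, this loop works with API-only models: only prompt strings go in and scalar scores come out, which is the setting the paper targets.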
Problem

Research questions and friction points this paper is trying to address.

few-shot object detection
multi-modal LLMs
out-of-distribution generalization
in-context learning
visual grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detection Prompt Optimization
few-shot object detection
multi-modal LLMs
black-box prompt optimization
in-context learning
Gautam Rajendrakumar Gare
Carnegie Mellon University
Neehar Peri
Carnegie Mellon University
Computer Vision, Machine Learning, Robotics
Matvei Popov
Roboflow
Shruti Jain
Carnegie Mellon University
John Galeotti
Carnegie Mellon University
Deva Ramanan
Professor, Robotics Institute, Carnegie Mellon University
Computer Vision, Machine Learning