🤖 AI Summary
Existing multi-prompt visual in-context learning methods are constrained by patch-wise fusion frameworks and model-agnostic supervision, limiting their ability to fully exploit complementary information across prompts. This work proposes the first locality-aware multi-prompt fusion paradigm, which introduces a spatial prior-guided local fusion mechanism to jointly optimize focus, alignment, and prediction objectives. Coupled with tailored data augmentation strategies, the approach moves beyond conventional fusion limitations, enabling richer contextual modeling and stronger training signals. The method substantially outperforms current state-of-the-art approaches across three fundamental vision tasks and demonstrates strong generalization, transferability, and out-of-distribution robustness.
📄 Abstract
Visual In-Context Learning (VICL) aims to complete vision tasks by imitating pixel demonstrations. Recent work pioneered prompt fusion, which combines the advantages of multiple demonstrations and offers a promising way to extend VICL. Unfortunately, the patch-wise fusion framework and model-agnostic supervision hinder the exploitation of informative cues, limiting performance gains. To overcome this deficiency, we introduce PromptHub, a framework that holistically strengthens multi-prompting through locality-aware fusion, concentration, and alignment. PromptHub exploits spatial priors to capture richer contextual information, employs complementary concentration, alignment, and prediction objectives that mutually guide training, and incorporates data augmentation to further reinforce supervision. Extensive experiments on three fundamental vision tasks demonstrate the superiority of PromptHub. Moreover, we validate its universality, transferability, and robustness across out-of-distribution settings and various retrieval scenarios. This work establishes a reliable locality-aware paradigm for prompt fusion, moving beyond prior patch-wise approaches. Code is available at https://github.com/luotc-why/ICLR26-PromptHub.