ADaFuSE: Adaptive Diffusion-generated Image and Text Fusion for Interactive Text-to-Image Retrieval

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation in existing interactive text-to-image retrieval (I-TIR) methods caused by static fusion of multimodal views generated via diffusion models, which are highly susceptible to generation noise. To overcome this limitation, we propose a lightweight adaptive fusion mechanism that integrates a dual-branch architecture with dynamic gating and a semantic-aware mixture-of-experts (MoE) module. This design enables dynamic alignment and calibration of diffusion-enhanced multimodal feedback without modifying the backbone encoders, allowing seamless integration into existing frameworks. Our approach adaptively balances modality reliability and captures fine-grained cross-modal semantics, effectively mitigating noise interference. Extensive experiments demonstrate state-of-the-art performance across four standard I-TIR benchmarks, achieving up to a 3.49% improvement in Hits@10 with only a 5.29% increase in parameters, while exhibiting enhanced robustness to noisy inputs and long queries.
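
To make the dual-branch design above concrete, the adaptive gating branch can be sketched as a small PyTorch module that predicts a per-sample weight for combining the text embedding with the diffusion-generated image embedding. This is a minimal sketch under assumed shapes and names (the paper's exact gate architecture is not given on this page), not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveGatedFusion(nn.Module):
    """Hypothetical gating branch: learns a per-sample weight so that
    noisy diffusion-generated views contribute less to the fused query."""

    def __init__(self, dim: int):
        super().__init__()
        # The gate sees both views and predicts a scalar weight in (0, 1).
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, text_emb: torch.Tensor, gen_img_emb: torch.Tensor) -> torch.Tensor:
        # text_emb, gen_img_emb: (batch, dim), produced by frozen backbone encoders.
        g = self.gate(torch.cat([text_emb, gen_img_emb], dim=-1))  # (batch, 1)
        # Convex combination instead of the static addition used by prior frameworks.
        return g * text_emb + (1.0 - g) * gen_img_emb

# Example: fuse 512-d embeddings without touching the backbone.
fuser = AdaptiveGatedFusion(dim=512)
fused = fuser(torch.randn(8, 512), torch.randn(8, 512))  # (8, 512)
```

Because only this small head is trained, the backbone encoders stay frozen, which is what allows the mechanism to be plugged into existing I-TIR frameworks.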

📝 Abstract
Recent advances in interactive text-to-image retrieval (I-TIR) use diffusion models to bridge the modality gap between the textual information need and the images to be searched, resulting in increased effectiveness. However, existing frameworks fuse multi-modal views of user feedback by simple embedding addition. In this work, we show that this static and undifferentiated fusion indiscriminately incorporates generative noise produced by the diffusion model, leading to performance degradation on up to 55.62% of samples. We further propose ADaFuSE (Adaptive Diffusion-Text Fusion with Semantic-aware Experts), a lightweight fusion model designed to align and calibrate multi-modal views for diffusion-augmented I-TIR, which can be plugged into existing frameworks without modifying the backbone encoder. Specifically, we introduce a dual-branch fusion mechanism that employs an adaptive gating branch to dynamically balance modality reliability, alongside a semantic-aware mixture-of-experts branch to capture fine-grained cross-modal nuances. In thorough evaluations over four standard I-TIR benchmarks, ADaFuSE achieves state-of-the-art performance, surpassing DAR by up to 3.49% in Hits@10 with only a 5.29% parameter increase, while exhibiting stronger robustness to noisy and longer interactive queries. These results show that generative augmentation coupled with principled fusion provides a simple, generalizable alternative to fine-tuning for interactive retrieval.
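
The semantic-aware mixture-of-experts branch can be pictured in the same spirit: a router scores a small pool of expert MLPs over the concatenated views and mixes the top-k experts per sample. Again, this is a hedged sketch; the expert count, routing rule, and all names below are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticMoEFusion(nn.Module):
    """Hypothetical MoE branch: routes each text/generated-image pair to a
    few small experts so they can specialize in different cross-modal semantics."""

    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(2 * dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, text_emb: torch.Tensor, gen_img_emb: torch.Tensor) -> torch.Tensor:
        x = torch.cat([text_emb, gen_img_emb], dim=-1)           # (batch, 2*dim)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)   # pick top-k experts per sample
        weights = F.softmax(weights, dim=-1)                     # renormalize selected experts
        out = torch.zeros_like(text_emb)
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                mask = idx[:, k] == e                            # samples routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out
```

Routing different query-image pairs to different experts is what lets the branch capture distinct cross-modal nuances rather than forcing one shared projection onto every sample.
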
Problem

Research questions and friction points this paper is trying to address.

interactive text-to-image retrieval
diffusion models
multi-modal fusion
generative noise
modality gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive fusion
diffusion models
interactive retrieval
mixture-of-experts
cross-modal alignment
👥 Authors
Zhuocheng Zhang, Institute of Computing Technology, Chinese Academy of Sciences (Natural Language Processing)
Xingwu Zhang, Hunan University
Kangheng Liang, University of Glasgow
Guanxuan Li, Hunan University
Richard McCreadie, University of Glasgow
Zijun Long, Hunan University