Selective Aggregation of Attention Maps Improves Diffusion-Based Visual Interpretation

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the semantic information embedded in cross-attention maps of text-to-image diffusion models, which varies across attention heads and remains underutilized for visual interpretation. To enhance model transparency, the authors propose a concept-aware attention head selection mechanism that scores each attention head's relevance to the target prompt tokens and selectively aggregates the most pertinent attention maps. This approach improves the mean Intersection-over-Union (IoU) score for semantic segmentation, outperforming existing methods such as DAAM, and also identifies and diagnoses instances where the model misinterprets textual prompts. In doing so, it offers a novel pathway toward understanding the internal mechanisms of diffusion models and their alignment with linguistic inputs.
📝 Abstract
Numerous studies on text-to-image (T2I) generative models have utilized cross-attention maps to boost application performance and interpret model behavior. However, the distinct characteristics of attention maps from different attention heads remain relatively underexplored. In this study, we show that selectively aggregating cross-attention maps from heads most relevant to a target concept can improve visual interpretability. Compared to the diffusion-based segmentation method DAAM, our approach achieves higher mean IoU scores. We also find that the most relevant heads capture concept-specific features more accurately than the least relevant ones, and that selective aggregation helps diagnose prompt misinterpretations. These findings suggest that attention head selection offers a promising direction for improving the interpretability and controllability of T2I generation.
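The abstract's core idea (score each attention head's relevance to a concept, keep only the most relevant heads, and average their maps) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the relevance score here is cosine similarity of each head's map to the mean map, an assumed proxy for the paper's concept-aware selection criterion, and `top_k` is a hypothetical parameter.

```python
import numpy as np

def select_and_aggregate(head_maps, top_k=4):
    """Selectively aggregate per-head cross-attention maps for one concept token.

    head_maps: array of shape (num_heads, H, W), the cross-attention map each
    head assigns to the concept token. Each head is scored by cosine similarity
    to the mean map (an illustrative relevance proxy, not the paper's exact
    criterion); the top_k heads are kept and averaged.
    """
    flat = head_maps.reshape(head_maps.shape[0], -1)
    mean_map = flat.mean(axis=0)
    # cosine similarity of each head's flattened map to the mean map
    scores = (flat @ mean_map) / (
        np.linalg.norm(flat, axis=1) * np.linalg.norm(mean_map) + 1e-8
    )
    chosen = np.argsort(scores)[-top_k:]          # indices of most relevant heads
    return head_maps[chosen].mean(axis=0), chosen

# toy usage: 8 heads over a 16x16 latent grid
rng = np.random.default_rng(0)
maps = rng.random((8, 16, 16))
aggregated, chosen = select_and_aggregate(maps, top_k=4)
```

Thresholding `aggregated` would then yield a binary mask for the concept, which is how such maps are typically compared against segmentation ground truth via IoU.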
Problem

Research questions and friction points this paper is trying to address.

attention maps
text-to-image generation
visual interpretability
diffusion models
attention heads
Innovation

Methods, ideas, or system contributions that make the work stand out.

selective aggregation
cross-attention maps
diffusion models
visual interpretability
attention head selection
Jungwon Park
Seoul National University
Physical Chemistry, Nanomaterials, Microscopy, Materials Engineering
Jungmin Ko
Interdisciplinary Program in Artificial Intelligence, Seoul National University, Seoul, Republic of Korea
Dongnam Byun
Department of Intelligence and Information, Seoul National University, Seoul, Republic of Korea
Wonjong Rhee
Seoul National University
Deep Learning Theory, Artificial Intelligence, Information Theory