🤖 AI Summary
This work targets the semantic information embedded in the cross-attention maps of individual attention heads in current text-to-image diffusion models, which prior interpretability work has largely left unexploited. To enhance model transparency, the authors propose a concept-aware attention head selection mechanism that scores each attention head's relevance to the target prompt tokens and selectively aggregates the maps of the most relevant heads. This approach significantly improves the mean Intersection-over-Union (IoU) score for semantic segmentation, outperforming existing methods such as DAAM, and also helps identify and diagnose cases where the model misinterprets the textual prompt. In doing so, it offers a novel pathway toward understanding the internal mechanisms of diffusion models and their alignment with linguistic inputs.
📝 Abstract
Numerous studies on text-to-image (T2I) generative models have utilized cross-attention maps to boost application performance and interpret model behavior. However, the distinct characteristics of attention maps from different attention heads remain relatively underexplored. In this study, we show that selectively aggregating cross-attention maps from heads most relevant to a target concept can improve visual interpretability. Compared to the diffusion-based segmentation method DAAM, our approach achieves higher mean IoU scores. We also find that the most relevant heads capture concept-specific features more accurately than the least relevant ones, and that selective aggregation helps diagnose prompt misinterpretations. These findings suggest that attention head selection offers a promising direction for improving the interpretability and controllability of T2I generation.
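The core mechanism described above (score each head's relevance to a concept, aggregate the top-scoring heads' cross-attention maps, then evaluate the result against a ground-truth mask with IoU) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the input format (per-head attention maps for one token) and the `relevance_scores` vector are assumptions, and how relevance is actually computed in the paper is not shown here.

```python
import numpy as np

def select_and_aggregate(head_maps, relevance_scores, top_k=3):
    """Aggregate the cross-attention maps of the heads most relevant
    to a target concept token.

    head_maps: array (num_heads, H, W), per-head cross-attention maps
        for one prompt token (hypothetical input format).
    relevance_scores: array (num_heads,), per-head relevance to the
        concept (how these are computed is an assumption here).
    """
    top = np.argsort(relevance_scores)[-top_k:]   # indices of the top-k heads
    agg = head_maps[top].mean(axis=0)             # average the selected maps
    # Normalize to [0, 1] so a fixed threshold can produce a mask.
    agg = (agg - agg.min()) / (agg.max() - agg.min() + 1e-8)
    return agg

def iou(pred_mask, gt_mask):
    """Intersection-over-Union between two boolean masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else 0.0
```

Thresholding the aggregated map (e.g. `agg > 0.5`) yields a binary segmentation mask whose mean IoU against ground truth is the metric the abstract compares with DAAM.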