🤖 AI Summary
Existing text-to-image diffusion models struggle to precisely control both scene semantics and fine-grained emotional tone, largely because no unified framework models affective dimensions and perceptual attributes together. To address this limitation, this work introduces EmoScene, a large-scale dataset of 1.2 million images spanning more than 300 real-world scene categories. EmoScene establishes dual-space alignment between emotion, represented by continuous Valence-Arousal-Dominance (VAD) dimensions alongside discrete emotion labels, and scene-perception attributes. Building on this dataset, the authors propose a lightweight shallow cross-attention modulation mechanism that injects dual-space control signals into a frozen diffusion backbone, enabling efficient, emotionally controllable image generation. Experimental results demonstrate that dual-space supervision significantly enhances both the emotional consistency and the semantic controllability of generated images.
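The paper's reference code is not given here, but the described mechanism, a shallow cross-attention layer that modulates a frozen backbone with dual-space control tokens, can be sketched in PyTorch. This is a minimal illustration under assumed shapes: the module name `DualSpaceCrossAttention`, the zero-initialized gate, and all dimensions are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualSpaceCrossAttention(nn.Module):
    """Hypothetical shallow cross-attention block (not the authors' code).

    Injects dual-space control tokens (continuous VAD values, a discrete
    emotion label, perceptual attributes) into features from an early layer
    of a frozen diffusion U-Net via a gated residual update.
    """

    def __init__(self, feat_dim: int, ctrl_dim: int, n_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(feat_dim)
        self.attn = nn.MultiheadAttention(
            feat_dim, n_heads, kdim=ctrl_dim, vdim=ctrl_dim, batch_first=True
        )
        # Zero-initialized gate: at step 0 the block is an identity map,
        # so training starts from the frozen backbone's original behavior.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, feats: torch.Tensor, ctrl: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, feat_dim) flattened spatial features from a shallow layer
        # ctrl:  (B, T, ctrl_dim) embedded dual-space control tokens
        attended, _ = self.attn(self.norm(feats), ctrl, ctrl)
        return feats + self.gate * attended  # backbone weights remain frozen

# Toy usage: modulate 32x32 spatial features with six control tokens.
feats = torch.randn(2, 32 * 32, 320)   # activations from the frozen U-Net
ctrl = torch.randn(2, 6, 256)          # VAD + emotion + perceptual embeddings
block = DualSpaceCrossAttention(feat_dim=320, ctrl_dim=256)
out = block(feats, ctrl)               # same shape as feats
```

Because only the shallow modulation layers are trainable, the number of updated parameters stays small relative to the backbone, which is presumably what makes the baseline lightweight.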
📝 Abstract
Text-to-image diffusion models have achieved high visual fidelity, yet precise control over scene semantics and fine-grained affective tone remains challenging. Human visual affect arises from the rapid integration of contextual meaning, including valence, arousal, and dominance (VAD), with perceptual cues such as color harmony, luminance contrast, texture variation, curvature, and spatial layout. However, current text-to-image models rarely represent affective and perceptual factors within a unified representation, which limits their ability to synthesize scenes with coherent and nuanced emotional intent. To address this gap, we construct EmoScene, a large-scale dual-space emotion dataset that jointly encodes affective dimensions and perceptual attributes, with contextual semantics provided as supporting annotations. EmoScene contains 1.2M images across more than 300 real-world scene categories, each annotated with discrete emotion labels, continuous VAD values, perceptual descriptors, and textual captions. Multi-space analyses reveal how discrete emotions occupy the VAD space and how affect systematically correlates with scene-level perceptual factors. To benchmark EmoScene, we provide a lightweight reference baseline that injects dual-space controls into a frozen diffusion backbone via shallow cross-attention modulation, serving as a reproducible probe of the affect controllability enabled by dual-space supervision.
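To make the dual-space annotation concrete, one plausible per-image record is sketched below as a Python dataclass. The field names and example values are purely illustrative assumptions; the abstract does not specify EmoScene's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class EmoSceneRecord:
    """Hypothetical annotation schema for a single EmoScene image."""
    image_path: str
    scene_category: str              # one of 300+ real-world scene classes
    emotion_label: str               # discrete emotion label
    vad: Tuple[float, float, float]  # continuous (valence, arousal, dominance)
    perceptual: Dict[str, float] = field(default_factory=dict)  # e.g. color harmony
    caption: str = ""                # contextual textual description

# Illustrative example (all values invented):
record = EmoSceneRecord(
    image_path="images/coast_00001.jpg",
    scene_category="coast",
    emotion_label="awe",
    vad=(0.71, 0.62, 0.48),
    perceptual={"color_harmony": 0.8, "luminance_contrast": 0.55,
                "texture_variation": 0.3, "curvature": 0.4},
    caption="Waves break against sunlit cliffs at dusk.",
)
```

A record like this pairs each image's discrete emotion and continuous VAD coordinates with its perceptual descriptors, which is the alignment the multi-space analyses would operate over.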