EmoScene: A Dual-space Dataset for Controllable Affective Image Generation

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-image diffusion models struggle to precisely control both scene semantics and fine-grained emotional tone, largely due to the absence of a unified framework modeling affective dimensions and perceptual attributes. To address this limitation, this work introduces EmoScene, a large-scale dataset that establishes dual-space alignment between emotion—represented by continuous Valence-Arousal-Dominance (VAD) dimensions and discrete emotion labels—and scene-perception attributes, encompassing over 300 real-world scene categories and 1.2 million images. Building upon this dataset, the authors propose a lightweight, shallow cross-attention modulation mechanism that injects dual-space control signals into a frozen diffusion backbone, enabling efficient and emotionally controllable image generation. Experimental results demonstrate that dual-space supervision significantly enhances both emotional consistency and semantic controllability in generated images.
📝 Abstract
Text-to-image diffusion models have achieved high visual fidelity, yet precise control over scene semantics and fine-grained affective tone remains challenging. Human visual affect arises from the rapid integration of contextual meaning, including valence, arousal, and dominance, with perceptual cues such as color harmony, luminance contrast, texture variation, curvature, and spatial layout. However, current text-to-image models rarely represent affective and perceptual factors within a unified representation, which limits their ability to synthesize scenes with coherent and nuanced emotional intent. To address this gap, we construct EmoScene, a large-scale dual-space emotion dataset that jointly encodes affective dimensions and perceptual attributes, with contextual semantics provided as supporting annotations. EmoScene contains 1.2M images across more than 300 real-world scene categories, each annotated with discrete emotion labels, continuous VAD values, perceptual descriptors, and textual captions. Multi-space analyses reveal how discrete emotions occupy the VAD space and how affect systematically correlates with scene-level perceptual factors. To benchmark EmoScene, we provide a lightweight reference baseline that injects dual-space controls into a frozen diffusion backbone via shallow cross-attention modulation, serving as a reproducible probe of the affect controllability enabled by dual-space supervision.
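The baseline described in the abstract injects dual-space control signals (a discrete emotion embedding plus continuous VAD values) into a frozen backbone through shallow cross-attention. A minimal, purely illustrative sketch of that idea follows; all shapes, projection matrices, and token choices here are assumptions for clarity, not the paper's actual implementation:

```python
import numpy as np


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def cross_attention_modulation(img_feats, control_tokens, W_q, W_k, W_v):
    """Inject control tokens into frozen image features via one cross-attention layer.

    img_feats:      (N, d) features from the frozen diffusion backbone (queries)
    control_tokens: (M, d) dual-space control embeddings (keys/values)
    """
    Q = img_feats @ W_q
    K = control_tokens @ W_k
    V = control_tokens @ W_v
    scale = 1.0 / np.sqrt(Q.shape[-1])
    attn = softmax(Q @ K.T * scale, axis=-1)
    # Residual add: the frozen backbone's features are modulated, not replaced.
    return img_feats + attn @ V


rng = np.random.default_rng(0)
d = 16
emotion_emb = rng.normal(size=(1, d))        # discrete emotion label embedding
vad = np.array([[0.8, 0.3, 0.5]])            # continuous valence, arousal, dominance
W_vad = rng.normal(size=(3, d)) * 0.1        # hypothetical projection of VAD into token space
control = np.concatenate([emotion_emb, vad @ W_vad], axis=0)  # (2, d) dual-space tokens

img_feats = rng.normal(size=(64, d))         # e.g. an 8x8 grid of latent patches
W_q, W_k, W_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = cross_attention_modulation(img_feats, control, W_q, W_k, W_v)
print(out.shape)  # (64, 16)
```

Because only the small attention projections are trainable while the backbone stays frozen, such a mechanism keeps the added parameter count low, which matches the paper's framing of the baseline as lightweight.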
Problem

Research questions and friction points this paper is trying to address.

affective image generation
text-to-image diffusion models
emotion control
perceptual attributes
VAD space
Innovation

Methods, ideas, or system contributions that make the work stand out.

dual-space emotion representation
affective image generation
perceptual-affective correlation
text-to-image diffusion control
VAD annotation