EmoLat: Text-driven Image Sentiment Transfer via Emotion Latent Space

📅 2026-01-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of aligning textual semantics with visual affect in text-driven, fine-grained image emotion transfer by proposing a new paradigm. The approach constructs an emotion semantic graph to model the relationships among emotions, objects, and visual attributes, and introduces EmoLat, an emotion latent space, together with EmoSpace Set, a large-scale annotated dataset. High-fidelity, text-controllable emotional editing is achieved through cross-modal joint embedding, adversarial regularization, and multi-objective optimization. Experimental results show that the proposed method significantly outperforms existing techniques on EmoSpace Set, achieving state-of-the-art performance in both quantitative metrics and visual emotion transfer quality.
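The emotion semantic graph described above links three node types: emotions, objects, and visual attributes. A minimal sketch of such a heterogeneous graph is below; the node names, edge structure, and API are illustrative assumptions, not the paper's actual construction (which is learned from the annotated dataset):

```python
from collections import defaultdict

class EmotionSemanticGraph:
    """Toy heterogeneous graph over (type, name) nodes, where type is
    one of "emotion", "object", or "attribute". Purely illustrative."""

    def __init__(self):
        # node -> set of connected nodes
        self.edges = defaultdict(set)

    def add_relation(self, a, b):
        # Undirected edge between two typed nodes
        self.edges[a].add(b)
        self.edges[b].add(a)

    def neighbors(self, node):
        # Deterministic ordering for inspection
        return sorted(self.edges[node])

# Hypothetical example relations
g = EmotionSemanticGraph()
g.add_relation(("emotion", "joy"), ("object", "sunflower"))
g.add_relation(("object", "sunflower"), ("attribute", "bright yellow"))
g.add_relation(("emotion", "gloom"), ("attribute", "desaturated"))

print(g.neighbors(("object", "sunflower")))
# → [('attribute', 'bright yellow'), ('emotion', 'joy')]
```

Traversing such a graph lets a transfer model connect a requested emotion (e.g. "joy") to the objects and visual attributes that should change in the image.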

๐Ÿ“ Abstract
We propose EmoLat, a novel emotion latent space that enables fine-grained, text-driven image sentiment transfer by modeling cross-modal correlations between textual semantics and visual emotion features. Within EmoLat, an emotion semantic graph is constructed to capture the relational structure among emotions, objects, and visual attributes. To enhance the discriminability and transferability of emotion representations, we employ adversarial regularization, aligning the latent emotion distributions across modalities. Building upon EmoLat, a cross-modal sentiment transfer framework is proposed to manipulate image sentiment via joint embedding of text and EmoLat features. The network is optimized using a multi-objective loss incorporating semantic consistency, emotion alignment, and adversarial regularization. To support effective modeling, we construct EmoSpace Set, a large-scale benchmark dataset comprising images with dense annotations on emotions, object semantics, and visual attributes. Extensive experiments on EmoSpace Set demonstrate that our approach significantly outperforms existing state-of-the-art methods in both quantitative metrics and qualitative transfer fidelity, establishing a new paradigm for controllable image sentiment editing guided by textual input. The EmoSpace Set and all the code are available at http://github.com/JingVIPLab/EmoLat.
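The abstract names a multi-objective loss with three terms: semantic consistency, emotion alignment, and adversarial regularization. A minimal sketch of such a weighted combination is below; the distance choices, the non-saturating adversarial term, and the weights `w_sem`, `w_emo`, `w_adv` are placeholder assumptions, not the paper's actual formulation:

```python
import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity; used here for both alignment terms
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def total_loss(img_sem, txt_sem, img_emo, target_emo, disc_real_prob,
               w_sem=1.0, w_emo=1.0, w_adv=0.1):
    """Weighted sum of the three objectives named in the abstract.

    img_sem / txt_sem: semantic embeddings of edited image and text prompt
    img_emo / target_emo: emotion embeddings of edited image and target emotion
    disc_real_prob: discriminator's probability that the edit looks real
    """
    l_sem = cosine_distance(img_sem, txt_sem)      # semantic consistency
    l_emo = cosine_distance(img_emo, target_emo)   # emotion alignment
    l_adv = -np.log(disc_real_prob + 1e-8)         # adversarial (non-saturating)
    return w_sem * l_sem + w_emo * l_emo + w_adv * l_adv
```

With perfectly aligned embeddings and a fully fooled discriminator, all three terms vanish; in training, the relative weights would trade off semantic faithfulness against emotional strength and realism.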
Problem

Research questions and friction points this paper is trying to address.

image sentiment transfer
text-driven editing
emotion representation
cross-modal correlation
controllable image editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

emotion latent space
text-driven image sentiment transfer
cross-modal alignment
adversarial regularization
emotion semantic graph
Jing Zhang
East China University of Science and Technology
computer vision, image understanding
Bingjie Fan
Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, P. R. China
Jixiang Zhu
Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, P. R. China
Zhe Wang
Professor of Computer Science & Engineering, East China University of Science & Technology
Machine Learning, Pattern Recognition, Medical Data Processing, Image Analysis, Artificial Intelligence