🤖 AI Summary
This work addresses the challenge of aligning textual semantics with visual affect in text-driven, fine-grained image emotion transfer by proposing a novel paradigm. The approach constructs an emotion semantic graph to model the complex relationships among emotions, objects, and visual attributes, and introduces EmoLat, a latent emotional space, together with EmoSpace Set, a large-scale annotated dataset. High-fidelity, text-controllable emotional editing is achieved through cross-modal joint embedding, adversarial regularization, and multi-objective optimization. Experimental results demonstrate that the proposed method significantly outperforms existing techniques on EmoSpace Set, achieving state-of-the-art performance in both quantitative metrics and visual emotion transfer quality.
📄 Abstract
We propose EmoLat, a novel emotion latent space that enables fine-grained, text-driven image sentiment transfer by modeling cross-modal correlations between textual semantics and visual emotion features. Within EmoLat, an emotion semantic graph is constructed to capture the relational structure among emotions, objects, and visual attributes. To enhance the discriminability and transferability of emotion representations, we employ adversarial regularization, aligning the latent emotion distributions across modalities. Building upon EmoLat, we propose a cross-modal sentiment transfer framework that manipulates image sentiment via joint embedding of text and EmoLat features. The network is optimized with a multi-objective loss incorporating semantic consistency, emotion alignment, and adversarial regularization. To support effective modeling, we construct EmoSpace Set, a large-scale benchmark dataset comprising images with dense annotations of emotions, object semantics, and visual attributes. Extensive experiments on EmoSpace Set demonstrate that our approach significantly outperforms existing state-of-the-art methods in both quantitative metrics and qualitative transfer fidelity, establishing a new paradigm for controllable image sentiment editing guided by textual input. EmoSpace Set and all code are available at http://github.com/JingVIPLab/EmoLat.
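To make the multi-objective optimization concrete, the sketch below shows one plausible way the three loss terms named in the abstract (semantic consistency, emotion alignment, adversarial regularization) could be combined as a weighted sum. This is a minimal illustration under our own assumptions: the function names, the cosine-distance stand-in for semantic consistency, and the weights are all hypothetical, not the paper's actual implementation.

```python
import numpy as np

def semantic_consistency(text_emb, img_emb):
    """Illustrative semantic-consistency term: cosine distance between
    a text embedding and an image embedding (smaller when they agree).
    This is an assumed stand-in, not the paper's actual loss."""
    cos = np.dot(text_emb, img_emb) / (
        np.linalg.norm(text_emb) * np.linalg.norm(img_emb)
    )
    return 1.0 - cos

def total_loss(l_sem, l_emo, l_adv, w_sem=1.0, w_emo=1.0, w_adv=0.1):
    """Weighted sum of the three objectives; the weights are
    hypothetical hyperparameters chosen only for illustration."""
    return w_sem * l_sem + w_emo * l_emo + w_adv * l_adv

# Toy example: nearly aligned text/image embeddings plus placeholder
# values for the emotion-alignment and adversarial terms.
text_emb = np.array([0.2, 0.9, 0.1])
img_emb = np.array([0.25, 0.85, 0.05])
loss = total_loss(semantic_consistency(text_emb, img_emb),
                  l_emo=0.3, l_adv=0.5)
```

In a real training loop each term would be computed from network outputs and backpropagated jointly; the point here is only that the framework trades off the three objectives through scalar weights rather than optimizing any one of them in isolation.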