🤖 AI Summary
Evaluating perceptual consistency in music-induced painting remains challenging: existing methods rely on emotion recognition models, which suffer from high noise and neglect multidimensional perceptual cues. This paper introduces the first assessment framework that directly models perceptual coherence between music and painting. Its core contributions are threefold: (1) construction of MPD, a large-scale, expert-annotated music-painting pair dataset; (2) design of a modulation fusion mechanism that dynamically injects music features into a visual encoder; and (3) adoption of a preference-based training strategy coupled with Direct Preference Optimization (DPO) to robustly handle annotation ambiguity. Experiments demonstrate that our method significantly outperforms baselines, achieving superior accuracy and robustness both in localizing music-correlated visual regions and in holistic consistency evaluation.
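The summary does not specify the exact form of the modulation fusion, but a common realization of "dynamically injecting" one modality's features into another encoder is FiLM-style feature-wise modulation, where the music embedding predicts per-channel scale and shift parameters for the visual tokens. The sketch below is a minimal illustration under that assumption; the class name `ModulationFusion` and all dimensions are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class ModulationFusion(nn.Module):
    """FiLM-style fusion sketch (assumed, not the paper's exact design):
    a music embedding predicts per-channel scale (gamma) and shift (beta)
    that modulate the visual encoder's patch tokens."""

    def __init__(self, music_dim: int, visual_dim: int):
        super().__init__()
        # Single linear head producing both gamma and beta.
        self.to_gamma_beta = nn.Linear(music_dim, 2 * visual_dim)

    def forward(self, visual: torch.Tensor, music: torch.Tensor) -> torch.Tensor:
        # visual: (B, N, visual_dim) patch tokens; music: (B, music_dim)
        gamma, beta = self.to_gamma_beta(music).chunk(2, dim=-1)
        # (1 + gamma) keeps the identity mapping at initialization-scale gamma ~ 0.
        return visual * (1 + gamma.unsqueeze(1)) + beta.unsqueeze(1)

fusion = ModulationFusion(music_dim=128, visual_dim=256)
out = fusion(torch.randn(2, 49, 256), torch.randn(2, 128))
```

The modulated tokens keep the visual tensor's shape, so the mechanism can be dropped between encoder blocks without changing the downstream head.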
📝 Abstract
Music-induced painting is a unique artistic practice in which visual artworks are created under the influence of music. Evaluating whether a painting faithfully reflects the music that inspired it poses a challenging perceptual assessment task. Existing methods primarily rely on emotion recognition models to assess the similarity between music and painting, but such models introduce considerable noise and overlook broader perceptual cues beyond emotion. To address these limitations, we propose a novel framework for music-induced painting assessment that directly models perceptual coherence between music and visual art. We introduce MPD, the first large-scale dataset of music–painting pairs annotated by domain experts based on perceptual coherence. To better handle ambiguous cases, we further collect pairwise preference annotations. Building on this dataset, we present MPJudge, a model that integrates music features into a visual encoder via a modulation-based fusion mechanism. To learn effectively from ambiguous cases, we adopt Direct Preference Optimization (DPO) for training. Extensive experiments demonstrate that our method outperforms existing approaches. Qualitative results further show that our model more accurately identifies music-relevant regions in paintings.
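The abstract names Direct Preference Optimization as the training objective for the pairwise preference annotations but gives no formula; for reference, the standard DPO loss compares policy and frozen-reference log-probabilities of the preferred (w) and rejected (l) items. The function below is a generic sketch of that objective, not the paper's implementation; the argument names and the `beta` value are illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w: torch.Tensor, policy_logp_l: torch.Tensor,
             ref_logp_w: torch.Tensor, ref_logp_l: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective: push up the log-prob ratio (policy vs.
    frozen reference) of the preferred item relative to the rejected one."""
    margin = (policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()

# Toy batch: the policy already prefers w slightly more than the reference does.
loss = dpo_loss(torch.tensor([-1.0]), torch.tensor([-2.0]),
                torch.tensor([-1.5]), torch.tensor([-1.5]))
```

When policy and reference agree exactly, the margin is zero and the loss sits at log 2; training reduces it by widening the margin on annotated preference pairs, which is how ambiguous coherence judgments can be learned without absolute scores.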