🤖 AI Summary
Vision-to-music generation remains in its infancy, facing core challenges that include the difficulty of cross-modal dynamic modeling, weak alignment mechanisms, and the absence of standardized evaluation frameworks. This paper presents the first systematic survey of the emerging subfield, covering the full technical stack from image and video inputs (including human motion) to symbolic (MIDI) and audio music outputs, and establishes a unified analytical framework that integrates architecture design, benchmark datasets, and evaluation metrics. The authors highlight three promising research directions: (1) dynamic temporal modeling, (2) fine-grained cross-modal alignment, and (3) interpretable evaluation systems, building on advances in multimodal representation learning, hybrid Transformer/CNN-RNN temporal architectures, and diffusion-based audio synthesis. They also release *Awesome-Vision-to-Music-Generation*, the field's first structured knowledge repository, as a reference and technical roadmap for both academic research and applications such as film scoring and short-video content creation.
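To make the architecture perspective concrete, here is a minimal PyTorch sketch, not the survey's or any specific paper's method, of a generic video-to-symbolic-music pipeline: a per-frame visual encoder, a Transformer temporal model over the frame sequence, and an autoregressive music-token decoder that cross-attends to the video representation for alignment. All module names, shapes, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch of a generic video-to-symbolic-music pipeline:
# per-frame visual encoding, temporal modeling, music-token decoding.
import torch
import torch.nn as nn

class VideoToMusicSketch(nn.Module):
    def __init__(self, d_model=256, vocab_size=512, n_heads=4, n_layers=2):
        super().__init__()
        # Per-frame visual encoder: a small CNN mapping each RGB frame to d_model.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, d_model),
        )
        # Temporal model over the frame sequence (a Transformer encoder here;
        # an RNN would fill the same role in CNN-RNN hybrids).
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(enc_layer, n_layers)
        # Autoregressive decoder over symbolic music tokens (e.g., MIDI events),
        # cross-attending to the video representation for cross-modal alignment.
        self.token_emb = nn.Embedding(vocab_size, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, frames, music_tokens):
        # frames: (batch, time, 3, H, W); music_tokens: (batch, seq)
        b, t = frames.shape[:2]
        feats = self.frame_encoder(frames.flatten(0, 1)).view(b, t, -1)
        video_ctx = self.temporal(feats)                    # (b, t, d_model)
        tgt = self.token_emb(music_tokens)                  # (b, seq, d_model)
        L = tgt.size(1)                                     # causal mask
        mask = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        out = self.decoder(tgt, video_ctx, tgt_mask=mask)   # cross-modal attention
        return self.head(out)                               # next-token logits

# Smoke test with random data.
model = VideoToMusicSketch()
logits = model(torch.randn(2, 8, 3, 64, 64), torch.randint(0, 512, (2, 16)))
print(logits.shape)  # torch.Size([2, 16, 512])
```

For audio-output systems, the discrete-token decoder would typically be replaced by a generator over waveforms or spectrograms conditioned on the same video context, for example the diffusion-based audio synthesis the summary mentions.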
📝 Abstract
Vision-to-music generation, encompassing video-to-music and image-to-music tasks, is a significant branch of multimodal artificial intelligence with broad application prospects in fields such as film scoring, short-video creation, and dance music synthesis. However, compared to the rapid development of modalities such as text and images, research on vision-to-music is still at a preliminary stage, owing to the complex internal structure of music and the difficulty of modeling its dynamic relationship with video. Existing surveys focus on general music generation without a comprehensive discussion of vision-to-music. In this paper, we systematically review research progress in vision-to-music generation. We first analyze the technical characteristics and core challenges of three input types (general videos, human-movement videos, and images) and two output types (symbolic music and audio music). We then summarize existing vision-to-music methods from an architectural perspective. A detailed review of common datasets and evaluation metrics follows. Finally, we discuss current challenges and promising directions for future research. We hope this survey inspires further innovation in vision-to-music generation and the broader field of multimodal generation, in both academic research and industrial applications. To track the latest work and foster further innovation in this field, we continuously maintain a GitHub repository at https://github.com/wzk1015/Awesome-Vision-to-Music-Generation.
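To illustrate the two output types named above, the sketch below writes the same four-note phrase once as symbolic music (a MIDI file, using the pretty_midi library) and once as audio music (a sine-wave WAV rendered with NumPy and SciPy). The toy phrase and file names are illustrative only, not from the survey.

```python
# Contrast between the two output types: symbolic music (discrete note
# events, here MIDI) vs. audio music (a raw waveform). Toy example only.
import numpy as np
import pretty_midi
from scipy.io import wavfile

PHRASE = [60, 64, 67, 72]      # C major arpeggio, MIDI pitch numbers
DUR = 0.5                      # seconds per note

# --- Symbolic output: note events with pitch, velocity, and timing ---
pm = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)  # acoustic grand piano
for i, pitch in enumerate(PHRASE):
    piano.notes.append(pretty_midi.Note(velocity=100, pitch=pitch,
                                        start=i * DUR, end=(i + 1) * DUR))
pm.instruments.append(piano)
pm.write("phrase.mid")         # editable, instrument-agnostic score data

# --- Audio output: the same phrase rendered directly as a waveform ---
sr = 22050
t = np.arange(int(DUR * sr)) / sr
waveform = np.concatenate([
    0.3 * np.sin(2 * np.pi * 440.0 * 2 ** ((p - 69) / 12) * t)  # pitch -> Hz
    for p in PHRASE
])
wavfile.write("phrase.wav", sr, (waveform * 32767).astype(np.int16))
```

The symbolic file stores editable note events that still need a synthesizer to become sound, while the audio file fixes timbre and performance in the waveform itself; this difference is what drives the distinct architectures and evaluation metrics surveyed for each output type.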