LAV: Audio-Driven Dynamic Visual Generation with Neural Compression and StyleGAN2

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of generating high-fidelity, semantically consistent dynamic visual sequences driven by pre-recorded audio. We propose an end-to-end cross-modal generative framework: input audio is first encoded into a semantically rich latent representation using the pretrained neural audio codec EnCodec; this latent code is then directly projected—via a lightweight, randomly initialized linear mapping—into the style space of StyleGAN2 to synthesize temporally coherent, style-controllable, high-quality image sequences. Our key contribution is the first direct, alignment-free coupling of the EnCodec latent space with the StyleGAN2 style space, eliminating conventional explicit feature-alignment modules. This design significantly improves audio–visual semantic consistency while enhancing generation flexibility. Extensive experiments demonstrate the effectiveness and computational efficiency of neural audio compression latents for cross-modal artistic generation.

📝 Abstract
This paper introduces LAV (Latent Audio-Visual), a system that integrates EnCodec's neural audio compression with StyleGAN2's generative capabilities to produce visually dynamic outputs driven by pre-recorded audio. Unlike previous works that rely on explicit feature mappings, LAV uses EnCodec embeddings as latent representations, directly transformed into StyleGAN2's style latent space via randomly initialized linear mapping. This approach preserves semantic richness in the transformation, enabling nuanced and semantically coherent audio-visual translations. The framework demonstrates the potential of using pretrained audio compression models for artistic and computational applications.
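The core mechanism described above — projecting per-frame EnCodec latents into StyleGAN2's style space through a single randomly initialized, frozen linear map, with no learned alignment — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the dimensions (128-dim EnCodec latents, 512-dim StyleGAN2 W space), the frame rate, and the initialization scale are assumptions chosen for plausibility.

```python
import numpy as np

# Assumed dimensions: EnCodec's continuous latents are taken as 128-dim
# per frame; StyleGAN2's style space W is 512-dim.
ENCODEC_DIM = 128
STYLE_DIM = 512

rng = np.random.default_rng(0)

# Randomly initialized, frozen linear mapping: no training and no
# explicit feature-alignment module, as described in the abstract.
W = rng.standard_normal((ENCODEC_DIM, STYLE_DIM)) / np.sqrt(ENCODEC_DIM)

def audio_latents_to_styles(latents: np.ndarray) -> np.ndarray:
    """Project per-frame audio latents of shape (T, ENCODEC_DIM) into
    StyleGAN2 style vectors of shape (T, STYLE_DIM) with one matmul."""
    return latents @ W

# Stand-in for real EnCodec output: 75 latent frames (~1 s of audio
# at an assumed 75 Hz latent frame rate).
frames = rng.standard_normal((75, ENCODEC_DIM))
styles = audio_latents_to_styles(frames)
print(styles.shape)  # (75, 512)
```

Each style vector would then be fed to a pretrained StyleGAN2 synthesis network, one image per audio frame, so temporal coherence in the audio latents carries over to the generated sequence.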
Problem

Research questions and friction points this paper is trying to address.

Generating dynamic visuals from audio using neural compression latents
Transforming audio embeddings into StyleGAN2's style space without explicit alignment
Enabling nuanced, semantically coherent audio-visual translations for creative applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates EnCodec and StyleGAN2 for audio-driven visual generation
Uses EnCodec embeddings directly as latent representations
Maps audio latents into StyleGAN2's style space via a randomly initialized linear layer
Jongmin Jung
Dept. of Artificial Intelligence, Sogang University, Seoul, South Korea
Dasaem Jeong
Sogang University
Music Information Retrieval · Expressive Performance Modeling · Machine Learning