PAVAS: Physics-Aware Video-to-Audio Synthesis

📅 2025-12-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video-to-audio (V2A) models predominantly rely on appearance-based statistical correlations, neglecting the underlying physical mechanisms of sound generation, so the synthesized audio often lacks realism and physical plausibility. To address this, we propose Physics-Aware Video-to-Audio Synthesis (PAVAS), the first diffusion-based framework for physics-aware audiovisual joint modeling, built around a Physics-Driven Audio Adapter (Phy-Adapter). Our method leverages a vision-language model to estimate object mass and employs segmentation-guided dynamic 3D reconstruction to infer motion velocity; these physically grounded parameters (mass, velocity, and their interactions) are explicitly injected into the latent diffusion process for audio generation. We also introduce VGG-Impact, a physics-aware benchmark designed to evaluate physical consistency in V2A synthesis. On this benchmark, our approach substantially outperforms prior state-of-the-art methods, achieving a 12.6% improvement in the Audio-Physics Correlation Coefficient (APCC) and demonstrating strong alignment between generated audio and underlying physical properties.

📝 Abstract
Recent advances in Video-to-Audio (V2A) generation have achieved impressive perceptual quality and temporal synchronization, yet most models remain appearance-driven, capturing visual-acoustic correlations without considering the physical factors that shape real-world sounds. We present Physics-Aware Video-to-Audio Synthesis (PAVAS), a method that incorporates physical reasoning into latent diffusion-based V2A generation through the Physics-Driven Audio Adapter (Phy-Adapter). The adapter receives object-level physical parameters estimated by the Physical Parameter Estimator (PPE), which uses a Vision-Language Model (VLM) to infer the moving object's mass and a segmentation-based dynamic 3D reconstruction module to recover its motion trajectory for velocity computation. These physical cues enable the model to synthesize sounds that reflect the underlying physical factors. To assess physical realism, we curate VGG-Impact, a benchmark focusing on object-object interactions, and introduce the Audio-Physics Correlation Coefficient (APCC), an evaluation metric that measures consistency between physical and auditory attributes. Comprehensive experiments show that PAVAS produces physically plausible and perceptually coherent audio, outperforming existing V2A models in both quantitative and qualitative evaluations. Visit https://physics-aware-video-to-audio-synthesis.github.io for demo videos.
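The paper's summary does not spell out how the Phy-Adapter injects mass and velocity into the diffusion process. A minimal sketch of one plausible mechanism (FiLM-style scale-and-shift conditioning of an audio latent) is below; all names (`embed_physics`, `film_condition`), the embedding design, and the random projections are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_physics(mass_kg, velocity_ms, dim=8):
    """Map scalar physical parameters to a feature vector.
    Log-scaling keeps heavy/fast and light/slow objects on comparable
    ranges; kinetic energy 0.5*m*v^2 is added as an interaction cue.
    (Hypothetical design; the Phy-Adapter internals are not public here.)"""
    feats = np.array([np.log1p(mass_kg),
                      np.log1p(velocity_ms),
                      0.5 * mass_kg * velocity_ms ** 2])
    W = rng.standard_normal((dim, feats.size)) / np.sqrt(feats.size)
    return W @ feats

def film_condition(latent, phys_emb):
    """FiLM-style injection: scale and shift the audio latent by
    linear projections of the physics embedding."""
    d = latent.shape[-1]
    W_gamma = rng.standard_normal((d, phys_emb.size)) / np.sqrt(phys_emb.size)
    W_beta = rng.standard_normal((d, phys_emb.size)) / np.sqrt(phys_emb.size)
    gamma = 1.0 + W_gamma @ phys_emb   # multiplicative conditioning
    beta = W_beta @ phys_emb           # additive conditioning
    return gamma * latent + beta

latent = rng.standard_normal(16)  # stand-in for one diffusion latent vector
emb = embed_physics(mass_kg=2.0, velocity_ms=3.5)
conditioned = film_condition(latent, emb)
print(conditioned.shape)  # (16,)
```

In a real adapter the projections would be learned and applied inside the denoising network at each diffusion step; the sketch only shows the data flow from (mass, velocity) to a conditioned latent.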
Problem

Research questions and friction points this paper is trying to address.

Synthesize audio from video using physical principles
Incorporate object mass and motion for sound generation
Evaluate physical realism in video-to-audio synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates physical reasoning into latent diffusion-based video-to-audio synthesis
Uses a Physics-Driven Audio Adapter with object-level physical parameters
Estimates mass and motion trajectory via Vision-Language Model and 3D reconstruction
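The exact definition of APCC is not given in this summary. One plausible reading, sketched below under that assumption, is a correlation across clips between a physical attribute (e.g. impact kinetic energy from the estimated mass and velocity) and a matching auditory attribute (e.g. an RMS loudness proxy of the generated audio). The function names and the toy numbers are illustrative only.

```python
import numpy as np

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def apcc_sketch(masses, velocities, audio_rms):
    """APCC-style score (assumed form): correlate per-clip impact
    kinetic energy 0.5*m*v^2 with a loudness proxy of the audio.
    Log-scaling tames the wide dynamic range of both quantities."""
    energy = 0.5 * np.asarray(masses, float) * np.asarray(velocities, float) ** 2
    return pearson(np.log1p(energy), np.log1p(audio_rms))

# Toy check: heavier/faster impacts paired with louder audio should
# yield a correlation near 1 (values below are fabricated).
masses = [0.1, 0.5, 1.0, 2.0, 5.0]       # kg
velocities = [1.0, 2.0, 2.0, 3.0, 4.0]   # m/s
audio_rms = [0.02, 0.10, 0.15, 0.60, 1.90]
score = apcc_sketch(masses, velocities, audio_rms)
print(round(score, 2))
```

A model whose generated loudness tracks impact energy would score near 1 under this reading, while an appearance-only model that ignores mass and velocity would drift toward 0.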