How Would It Sound? Material-Controlled Multimodal Acoustic Profile Generation for Indoor Scenes

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work introduces a novel task—material-controllable room impulse response (RIR) generation—which aims to synthesize high-fidelity acoustic responses conditioned on user-specified material configurations (e.g., floor and wall finishes) together with multimodal audio-visual observations of indoor scenes. To support the task, the authors present the Acoustic Wonderland Dataset, an acoustic benchmark covering fine-grained material combinations with synchronized audio-visual recordings. They further propose an audio-visual-material fusion encoder-decoder architecture that explicitly models material properties and their mapping to acoustics. Experiments demonstrate substantial improvements over baselines and state-of-the-art methods in RIR prediction accuracy, material sensitivity, and generation diversity. Notably, the approach supports interactive editing of material parameters at inference time, a capability not offered by prior acoustic simulation frameworks.

📝 Abstract
How would the sound in a studio change with a carpeted floor and acoustic tiles on the walls? We introduce the task of material-controlled acoustic profile generation, where, given an indoor scene with specific audio-visual characteristics, the goal is to generate a target acoustic profile based on a user-defined material configuration at inference time. We address this task with a novel encoder-decoder approach that encodes the scene's key properties from an audio-visual observation and generates the target Room Impulse Response (RIR) conditioned on the material specifications provided by the user. Our model enables the generation of diverse RIRs based on various material configurations defined dynamically at inference time. To support this task, we create a new benchmark, the Acoustic Wonderland Dataset, designed for developing and evaluating material-aware RIR prediction methods under diverse and challenging settings. Our results demonstrate that the proposed model effectively encodes material information and generates high-fidelity RIRs, outperforming several baselines and state-of-the-art methods.
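The conditioning scheme the abstract describes can be sketched in a few lines: encode the audio-visual observation into a scene code, embed a user-chosen material configuration, and decode the fused vector into an RIR. This is a toy illustration of the general idea only, not the paper's architecture; all dimensions, surface names, material categories, and the random-weight "layers" are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative vocabularies and sizes (assumptions, not from the paper).
MATERIALS = ["carpet", "concrete", "acoustic_tile", "wood"]
SURFACES = ("floor", "wall")
SCENE_DIM, RIR_LEN = 64, 256

# Fixed random weights stand in for trained encoder/decoder parameters.
W_enc = rng.standard_normal((SCENE_DIM, 64)) * 0.1
W_dec = rng.standard_normal((RIR_LEN, SCENE_DIM + len(SURFACES) * len(MATERIALS))) * 0.05

def encode_scene(audio_feat, visual_feat):
    """Fuse the audio-visual observation into a single scene code."""
    return np.tanh(W_enc @ np.concatenate([audio_feat, visual_feat]))

def material_embedding(config):
    """One-hot material choice per surface, concatenated into one vector."""
    vecs = []
    for surface in SURFACES:
        v = np.zeros(len(MATERIALS))
        v[MATERIALS.index(config[surface])] = 1.0
        vecs.append(v)
    return np.concatenate(vecs)

def decode_rir(scene_code, material_code):
    """Decode the fused code into an RIR with a room-like decay envelope."""
    z = np.concatenate([scene_code, material_code])
    decay = np.exp(-np.linspace(0.0, 5.0, RIR_LEN))
    return (W_dec @ z) * decay

# One scene observation; materials are swapped freely at "inference time".
audio, visual = rng.standard_normal(32), rng.standard_normal(32)
scene = encode_scene(audio, visual)
rir_soft = decode_rir(scene, material_embedding({"floor": "carpet", "wall": "acoustic_tile"}))
rir_hard = decode_rir(scene, material_embedding({"floor": "concrete", "wall": "wood"}))
```

Because the scene code is computed once and only the material embedding changes between calls, the sketch mirrors the paper's key property: diverse RIRs generated from one observation under user-defined material configurations.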
Problem

Research questions and friction points this paper is trying to address.

Generate acoustic profiles for indoor scenes based on material configurations
Predict Room Impulse Responses (RIRs) using audio-visual scene properties
Develop a benchmark dataset for material-aware RIR prediction methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encoder-decoder for audio-visual scene encoding
Dynamic RIR generation with user materials
New Acoustic Wonderland Dataset benchmark