Enhancing XR Auditory Realism via Multimodal Scene-Aware Acoustic Rendering

📅 2025-11-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing XR spatial audio rendering struggles to adapt in real time to diverse physical environments, resulting in audio-visual misalignment and degraded immersion. To address this, we propose SAMOSA, a novel on-device system built around a multimodal scene representation that integrates room geometry, surface materials, and semantic acoustic context, enabling efficient, scene-aware acoustic adaptation directly on the device. Methodologically, SAMOSA unifies multimodal sensor fusion, real-time room impulse response (RIR) synthesis, and semantics-driven acoustic calibration, moving beyond conventional single-modality acoustic modeling. A technical evaluation using acoustic metrics shows accurate RIR synthesis across varied room configurations and sound source types, and an expert evaluation (N=12) supports gains in auditory realism and immersion, pointing toward lightweight, semantics-aware spatial audio rendering for XR.
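
To make the idea of a multimodal scene representation concrete, the sketch below shows one plausible way to structure fused geometry, material, and semantic estimates in Python. The class and field names (Surface, SceneRepresentation, absorption, semantic_context) are hypothetical illustrations under our own assumptions, not SAMOSA's actual data model.

```python
from dataclasses import dataclass, field


@dataclass
class Surface:
    area_m2: float       # estimated surface area in square meters
    material: str        # e.g. "carpet", "glass", "drywall"
    absorption: float    # frequency-averaged absorption coefficient in [0, 1]


@dataclass
class SceneRepresentation:
    room_dimensions_m: tuple    # (length, width, height) in meters
    semantic_context: str       # e.g. "living room", "open office"
    surfaces: list = field(default_factory=list)

    def total_absorption_m2(self) -> float:
        # Total absorption area A = sum(S_i * alpha_i), a common input
        # to simple reverberation estimates.
        return sum(s.area_m2 * s.absorption for s in self.surfaces)


# Example: a small carpeted room tagged semantically as a living room.
scene = SceneRepresentation(
    room_dimensions_m=(5.0, 4.0, 2.7),
    semantic_context="living room",
    surfaces=[
        Surface(area_m2=20.0, material="carpet", absorption=0.30),
        Surface(area_m2=48.6, material="drywall", absorption=0.05),
    ],
)
print(scene.total_absorption_m2())
```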

📝 Abstract
In Extended Reality (XR), rendering sound that accurately simulates real-world acoustics is pivotal in creating lifelike and believable virtual experiences. However, existing XR spatial audio rendering methods often struggle with real-time adaptation to diverse physical scenes, causing a sensory mismatch between visual and auditory cues that disrupts user immersion. To address this, we introduce SAMOSA, a novel on-device system that renders spatially accurate sound by dynamically adapting to its physical environment. SAMOSA leverages a synergistic multimodal scene representation by fusing real-time estimations of room geometry, surface materials, and semantic-driven acoustic context. This rich representation then enables efficient acoustic calibration via scene priors, allowing the system to synthesize a highly realistic Room Impulse Response (RIR). We validate our system through technical evaluation using acoustic metrics for RIR synthesis across various room configurations and sound types, alongside an expert evaluation (N=12). Evaluation results demonstrate SAMOSA's feasibility and efficacy in enhancing XR auditory realism.
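
For readers unfamiliar with RIR-based rendering: once a room impulse response has been synthesized, reverberant audio is conventionally produced by convolving the dry (anechoic) source signal with that RIR. The snippet below illustrates this standard rendering step with SciPy; it is a generic sketch, not SAMOSA's pipeline, and the exponentially decaying noise RIR is a toy stand-in for a synthesized one.

```python
import numpy as np
from scipy.signal import fftconvolve


def render_with_rir(dry: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve a dry (anechoic) signal with a room impulse response."""
    wet = fftconvolve(dry, rir, mode="full")
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet  # normalize to avoid clipping


# Toy example at 48 kHz: a 440 Hz tone rendered through a synthetic,
# exponentially decaying noise RIR (illustrative only, not a real room).
fs = 48_000
t_rir = np.arange(int(0.4 * fs)) / fs
toy_rir = np.random.default_rng(0).standard_normal(t_rir.size) * np.exp(-t_rir / 0.12)
dry = np.sin(2 * np.pi * 440.0 * np.arange(fs) / fs)  # 1 second tone
wet = render_with_rir(dry, toy_rir)
```
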
Problem

Research questions and friction points this paper is trying to address.

XR spatial audio struggles with real-time adaptation to physical scenes
Sensory mismatch between visual and auditory cues disrupts user immersion
Existing methods lack dynamic adaptation to diverse room environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic adaptation to physical environment for spatial sound
Multimodal scene representation fusing geometry and materials
Efficient acoustic calibration using scene priors for RIR synthesis (see the sketch after this list)
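
As a rough illustration of how semantic scene priors can feed acoustic calibration, the sketch below uses Sabine's classic reverberation formula, RT60 ≈ 0.161·V/A, with invented per-room-type priors. The SCENE_PRIORS table, its values, and the function name are hypothetical; this is a generic example of turning scene priors into an acoustic parameter, not SAMOSA's actual calibration procedure.

```python
def rt60_sabine(volume_m3: float, absorption_m2: float) -> float:
    """Sabine's formula: RT60 ≈ 0.161 * V / A (V in m^3, A in m^2 of absorption)."""
    return 0.161 * volume_m3 / max(absorption_m2, 1e-6)


# Hypothetical priors keyed by semantic room type: (volume in m^3, absorption
# area in m^2). The numbers are invented for illustration only.
SCENE_PRIORS = {
    "bathroom":    (15.0,  2.0),
    "living room": (55.0, 16.0),
    "open office": (400.0, 90.0),
}

for room_type, (volume, absorption) in SCENE_PRIORS.items():
    print(f"{room_type:>12}: RT60 ≈ {rt60_sabine(volume, absorption):.2f} s")
```
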
Tianyu Xu
Google, Mountain View, CA, USA
Jihan Li
Google, Mountain View, CA, USA
Penghe Zu
Google, Mountain View, CA, USA
Pranav Sahay
Google, Mountain View, CA, USA
Maruchi Kim
PhD Student, University of Washington
Jack Obeng-Marnu
Google, San Francisco, CA, USA
Farley Miller
Google, Mountain View, CA, USA
Xun Qian
Google
Human-Computer Interaction · Augmented Reality · Extended Reality · Human-AI Interaction
Katrina Passarella
Google, San Francisco, CA, USA
Mahitha Rachumalla
Google, Mountain View, CA, USA
Rajeev Nongpiur
Google, Mountain View, CA, USA
D. Shin
Google DeepMind
Artificial Intelligence