🤖 AI Summary
Existing neural field (NF) methods for few-shot room impulse response (RIR) modeling are largely restricted to omnidirectional single- or dual-channel responses, failing to capture the directional characteristics of real acoustic fields. To address this, we propose Direction-Aware Neural Fields (DANF), the first NF framework explicitly incorporating directional priors: it constructs a continuous, direction-sensitive implicit acoustic field representation using Ambisonic-format RIRs; introduces a direction-aware loss function to explicitly enforce spherical response consistency; and employs low-rank adaptation (LoRA) for rapid cross-room generalization. Experiments demonstrate that DANF achieves high-fidelity interpolation of both omnidirectional and directional RIRs from minimal measurements. In spatial audio synthesis, it significantly improves sound source localization accuracy and perceptual realism. DANF establishes a novel paradigm for few-shot 3D acoustic field modeling.
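The summary above names three ingredients: a neural field queried at source/listener coordinates that outputs Ambisonic-format RIR samples, and LoRA factors for cross-room adaptation. A minimal sketch of how these could fit together is below; the layer sizes, the sinusoidal encoding, and all names (`AmbisonicNF`, `LoRALinear`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(p, n_freqs=4):
    """Sinusoidal coordinate features, as is common in neural fields."""
    feats = [p]
    for k in range(n_freqs):
        feats += [np.sin(2**k * np.pi * p), np.cos(2**k * np.pi * p)]
    return np.concatenate(feats)

class LoRALinear:
    """Dense layer with a frozen base weight plus a low-rank update A @ B.

    For a new room, only A and B would be tuned; A starts at zero so the
    adapted model initially matches the pretrained one (standard LoRA init).
    """
    def __init__(self, d_in, d_out, rank=4):
        self.W = rng.normal(0, 1 / np.sqrt(d_in), (d_out, d_in))  # frozen
        self.b = np.zeros(d_out)
        self.A = np.zeros((d_out, rank))
        self.B = rng.normal(0, 0.01, (rank, d_in))

    def __call__(self, x):
        return (self.W + self.A @ self.B) @ x + self.b

class AmbisonicNF:
    """Query (source xyz, listener xyz, time) -> 4 first-order Ambisonic
    (B-format) channels (W, X, Y, Z) of the RIR at that instant."""
    def __init__(self, hidden=64):
        d_in = positional_encoding(np.zeros(7)).size
        self.l1 = LoRALinear(d_in, hidden)
        self.l2 = LoRALinear(hidden, hidden)
        self.out = LoRALinear(hidden, 4)

    def __call__(self, src, lis, t):
        q = positional_encoding(np.concatenate([src, lis, [t]]))
        h = np.tanh(self.l1(q))
        h = np.tanh(self.l2(h))
        return self.out(h)

nf = AmbisonicNF()
rir_sample = nf(np.array([1.0, 2.0, 1.5]), np.array([3.0, 1.0, 1.2]), 0.01)
print(rir_sample.shape)  # one 4-channel B-format sample per time query
```

Evaluating the field on a dense grid of time queries yields a full directional RIR; adapting to an unseen room means fitting only the low-rank `A`/`B` factors against the few measurements available there.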
📝 Abstract
The characteristics of a sound field are intrinsically linked to the geometric and spatial properties of the environment surrounding a sound source and a listener. The physics of sound propagation is captured in a time-domain signal known as a room impulse response (RIR). Prior work using neural fields (NFs) has enabled learning spatially continuous representations of RIRs from a finite set of RIR measurements. However, previous NF-based methods have focused on monaural omnidirectional or at most binaural listeners, which do not precisely capture the directional characteristics of a real sound field at a single point. We propose a direction-aware neural field (DANF) that more explicitly incorporates directional information through Ambisonic-format RIRs. While DANF inherently captures spatial relations between sources and listeners, we further propose a direction-aware loss. In addition, we investigate the ability of DANF to adapt to new rooms in various ways, including low-rank adaptation (LoRA).
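The abstract mentions a direction-aware loss that enforces consistency of the spherical response. One plausible form, sketched below under stated assumptions (this is not the paper's exact formulation), decodes the predicted and ground-truth first-order Ambisonic RIRs toward a grid of look directions and penalizes the error of the decoded directional responses alongside the raw channel error; the direction grid, the basic decoder, and the weight `lam` are all illustrative.

```python
import numpy as np

def fibonacci_directions(n=16):
    """Roughly uniform unit vectors on the sphere (Fibonacci lattice)."""
    i = np.arange(n)
    phi = np.pi * (3 - np.sqrt(5)) * i
    z = 1 - 2 * (i + 0.5) / n
    r = np.sqrt(1 - z ** 2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def decode_foa(rir_bformat, dirs):
    """Basic first-order decode: 0.5 * (W + d . [X, Y, Z]) per direction.

    rir_bformat: array of shape (T, 4) holding (W, X, Y, Z) channels;
    dirs: (n, 3) unit look directions; returns (T, n) directional signals.
    """
    w, xyz = rir_bformat[..., 0], rir_bformat[..., 1:]
    return 0.5 * (w[..., None] + xyz @ dirs.T)

def direction_aware_loss(pred, target, dirs, lam=1.0):
    """Channel-domain MSE plus an MSE over spherically decoded responses."""
    channel_mse = np.mean((pred - target) ** 2)
    dir_mse = np.mean((decode_foa(pred, dirs) - decode_foa(target, dirs)) ** 2)
    return channel_mse + lam * dir_mse
```

The decoded term couples the four B-format channels through every look direction, so an error pattern that cancels in one channel but skews the spatial response still incurs a penalty, which is one way to "explicitly enforce spherical response consistency."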