🤖 AI Summary
This survey addresses the challenge of achieving high-fidelity, personalized binaural audio rendering via deep learning, with emphasis on improving spatial localization accuracy, externalization, and immersion. To overcome the limitations of conventional HRTF personalization (namely, its reliance on dense acoustic measurements or restrictive geometric assumptions), it reviews two complementary paradigms: (1) multimodal deep HRTF prediction leveraging sparse HRTF samples, head morphology, and visual, textual, or parametric cues; and (2) end-to-end binaural waveform generation. It systematically surveys the prevalent datasets and evaluation metrics, establishing a reproducible basis for comparative analysis. Key bottlenecks are identified, including cross-modal misalignment, computational inefficiency for real-time deployment, and a lack of physiological plausibility in learned representations; accordingly, cross-modal alignment, lightweight model design, and biologically grounded HRTF modeling are highlighted as critical future directions. This line of work advances spatial audio systems toward higher fidelity, lower acquisition cost, and better generalization across users and scenarios.
📝 Abstract
Personalized binaural audio reproduction underpins realistic spatial localization, sound externalization, and immersive listening, directly shaping user experience and listening effort. This survey reviews recent advances in deep learning for this task and organizes them by generation mechanism into two paradigms: explicit personalized filtering and end-to-end rendering. Explicit methods predict personalized head-related transfer functions (HRTFs) from sparse measurements, morphological features, or environmental cues, and then apply them in the conventional rendering pipeline. End-to-end methods map source signals directly to binaural signals, guided by auxiliary inputs such as visual, textual, or parametric cues, and learn personalization within the model itself. We also summarize the field's main datasets and evaluation metrics to support fair and repeatable comparison. Finally, we discuss key applications enabled by these technologies, current technical limitations, and promising research directions for deep learning-based spatial audio systems.
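To make the explicit-filtering paradigm concrete, the sketch below shows the conventional rendering step that a predicted personalized HRTF feeds into: convolving a mono source with the left- and right-ear head-related impulse responses (HRIRs, the time-domain form of the HRTF). The HRIRs here are synthetic toy filters (a unit impulse for the left ear, a delayed and attenuated impulse for the right, mimicking interaural time and level differences); a real system would substitute measured or model-predicted HRIRs.

```python
import numpy as np

def render_binaural(source, hrir_left, hrir_right):
    """Explicit-filtering render step: convolve a mono source with
    per-ear HRIRs to obtain a two-channel binaural signal."""
    left = np.convolve(source, hrir_left)
    right = np.convolve(source, hrir_right)
    return np.stack([left, right])

fs = 48000  # sample rate in Hz

# Toy HRIRs (illustrative only): the right ear receives the sound
# ~0.6 ms later (interaural time difference) and attenuated
# (interaural level difference), as for a source on the left.
hrir_l = np.zeros(256)
hrir_l[0] = 1.0
hrir_r = np.zeros(256)
hrir_r[30] = 0.6

source = np.random.randn(fs)  # 1 s of white noise as the mono source
binaural = render_binaural(source, hrir_l, hrir_r)
print(binaural.shape)  # (2, fs + 255): two ears, source length + filter tail
```

A deep HRTF-prediction model replaces the toy `hrir_l`/`hrir_r` with listener-specific filters (typically one pair per source direction), while the convolution-based rendering pipeline itself is unchanged; end-to-end methods instead learn this whole mapping inside the network.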