🤖 AI Summary
Personalized HRTF modeling commonly relies on latent spaces optimized for spectral reconstruction, yet these spaces are not aligned with human auditory perception, which can distort the perceived spatial audio. Method: This work first systematically evaluates the correlation between existing HRTF latent representations and objective psychoacoustic metrics, including interaural level/time difference (ILD/ITD) sensitivity and spectral-difference perception models. We then propose a perception-guided HRTF embedding framework that integrates auditory perceptual metrics into the learning objective: (i) a perception-aware metric loss function based on perceptual similarity, and (ii) metric multidimensional scaling (MMDS) to align the geometry of the latent space with empirically grounded perceptual relationships. Contribution/Results: Experiments demonstrate that our method significantly outperforms conventional approaches in both HRTF reconstruction accuracy and perceptual consistency, yielding substantial improvements in the realism and naturalness of personalized spatial audio.
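The ILD/ITD metrics mentioned above can be illustrated with a minimal sketch: broadband ILD as the energy ratio between the two ears and ITD as the lag of the interaural cross-correlation peak. The function name `ild_itd` and the simple estimators are assumptions for illustration; the paper's actual perceptual metric models are not reproduced here.

```python
import numpy as np

def ild_itd(hrir_left, hrir_right, fs=44100):
    """Rough ILD (dB) and ITD (s) estimates from a pair of head-related
    impulse responses. Illustrative only, not the paper's metric models."""
    # ILD: broadband energy ratio between the ears, in dB
    e_l = np.sum(hrir_left ** 2)
    e_r = np.sum(hrir_right ** 2)
    ild_db = 10.0 * np.log10(e_l / e_r)

    # ITD: lag of the interaural cross-correlation peak.
    # Sign convention: negative when the left-ear signal leads.
    xcorr = np.correlate(hrir_left, hrir_right, mode="full")
    lag = int(np.argmax(np.abs(xcorr))) - (len(hrir_right) - 1)
    itd_s = lag / fs
    return ild_db, itd_s

# Hypothetical example: the left-ear impulse arrives 4 samples earlier
# and with twice the amplitude of the right-ear impulse.
left = np.zeros(64)
left[10] = 1.0
right = np.zeros(64)
right[14] = 0.5
ild_db, itd_s = ild_itd(left, right)
```

For this toy pair, the ILD is positive (left ear louder) and the ITD is negative (left ear leads), matching a source on the listener's left.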
📝 Abstract
Personalized head-related transfer functions (HRTFs) are essential for a realistic auditory experience over headphones, because they account for the individual anatomical differences that shape how we hear. Most machine learning approaches to HRTF personalization rely on a learned low-dimensional latent space to generate or select custom HRTFs for a listener. However, these latent representations are typically optimized for spectral reconstruction rather than perceptual compatibility, and so may not align with perceptual distances between HRTFs. In this work, we first study whether traditionally learned HRTF representations correlate well with perceptual relations, using auditory-based objective perceptual metrics; we then propose a method for explicitly embedding HRTFs into a perception-informed latent space, leveraging a metric-based loss function and supervision via Metric Multidimensional Scaling (MMDS). Finally, we demonstrate the applicability of these learned representations to the task of HRTF personalization. We suggest that our method can improve the rendering of personalized spatial audio, leading to a better listening experience.
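The MMDS supervision described above can be sketched with scikit-learn's metric MDS: given pairwise perceptual dissimilarities between HRTFs, it finds low-dimensional coordinates whose Euclidean distances approximate those dissimilarities. The 4×4 dissimilarity matrix below uses made-up values standing in for the paper's auditory perceptual metrics; this is a minimal illustration of the MMDS step, not the full embedding framework.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy symmetric perceptual dissimilarity matrix for 4 HRTFs
# (hypothetical values; in practice these would come from
# auditory-based objective perceptual metrics).
D = np.array([
    [0.0, 1.0, 2.0, 2.2],
    [1.0, 0.0, 1.5, 2.0],
    [2.0, 1.5, 0.0, 0.8],
    [2.2, 2.0, 0.8, 0.0],
])

# Metric MDS embeds the items so that distances in the latent space
# approximate the given perceptual dissimilarities; the resulting
# coordinates can then supervise a learned HRTF encoder.
mds = MDS(n_components=2, metric=True, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(D)
```

The `stress_` attribute of the fitted model quantifies how faithfully the 2-D layout reproduces the target dissimilarities.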