🤖 AI Summary
Personalized HRTFs are difficult to deploy at scale because of the complexity of the measurement process, and existing upsampling methods lack long-range spatial consistency and auditory realism at high upsampling factors. This paper proposes a Transformer-based HRTF upsampling framework operating in the spherical harmonic (SH) domain. It introduces a spatially aware attention mechanism and a neighbor dissimilarity loss to explicitly model long-range correlations on the spherical manifold, jointly optimizing the SH-domain representation against perceptual localization constraints and spectral distortion metrics. Experiments show that the proposed approach significantly outperforms state-of-the-art methods in both objective spectral fidelity (e.g., SNR, PESQ) and subjective sound-localization accuracy, particularly at high upsampling factors (≥8×), where it better preserves spatial continuity and reconstruction fidelity.
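To make the SH-domain idea concrete, here is a minimal sketch of representing sparse directional data with spherical harmonics via a least-squares fit. This is illustrative only; the paper's actual pipeline (a Transformer operating on SH coefficients) is not shown, and the synthetic target below is an assumption for the demo.

```python
# Minimal sketch: fit spherical-harmonic (SH) coefficients to sparse
# directional samples by least squares, then reconstruct. Illustrative
# only -- not the paper's Transformer-based upsampling pipeline.
import numpy as np
from scipy.special import sph_harm

def sh_matrix(order, azi, col):
    # Rows: directions; columns: complex SH basis functions up to `order`.
    cols = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            cols.append(sph_harm(m, n, azi, col))
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
P = 32                                  # sparse measurement directions
azi = rng.uniform(0, 2 * np.pi, P)      # azimuth
col = rng.uniform(0, np.pi, P)          # colatitude

# Synthetic smooth "magnitude" field (order-1 content, for the demo)
target = 1.0 + 0.5 * np.cos(col) + 0.2 * np.sin(azi) * np.sin(col)

Y = sh_matrix(3, azi, col)              # order-3 basis: 16 coefficients
coef, *_ = np.linalg.lstsq(Y, target.astype(complex), rcond=None)

# Low-order content is captured exactly up to numerical precision
recon = (Y @ coef).real
print(np.max(np.abs(recon - target)))
```

Because the synthetic target lies in the span of the order-1 harmonics, the order-3 fit reconstructs it to numerical precision; real HRTF magnitudes need higher orders, which is exactly why dense measurement grids are costly.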
📝 Abstract
Personalized Head-Related Transfer Functions (HRTFs) are beginning to appear in commercial immersive audio applications and are crucial for realistic spatial audio rendering. However, a major obstacle to their adoption is that creating personalized HRTFs is impractical at scale because of the complexity of the HRTF measurement process. To mitigate this drawback, HRTF spatial upsampling has been proposed with the aim of reducing the number of measurements required. While prior work has had success with various machine learning (ML) approaches, these models often struggle with long-range spatial consistency and generalization at high upsampling factors. In this paper, we propose a novel Transformer-based architecture for HRTF upsampling that leverages the attention mechanism to better capture spatial correlations across the HRTF sphere. Working in the spherical harmonic (SH) domain, our model learns to reconstruct high-resolution HRTFs from sparse input measurements with significantly improved accuracy. To enhance spatial coherence, we introduce a neighbor dissimilarity loss that promotes magnitude smoothness, yielding more realistic upsampling. We evaluate our method using both perceptual localization models and objective spectral distortion metrics, and experiments show that it surpasses leading methods by a substantial margin in generating realistic, high-fidelity HRTFs.
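A neighbor-based smoothness penalty of the kind the abstract describes can be sketched as follows. The function name, the neighbor count `k`, and the squared-difference form are illustrative assumptions; the paper's exact loss formulation is not reproduced here.

```python
# Sketch of a "neighbor dissimilarity" smoothness penalty: penalize
# magnitude differences between each HRTF direction and its nearest
# neighbors on the sphere. Assumed form (squared differences, k=4
# neighbors), not the paper's exact loss.
import numpy as np

def neighbor_dissimilarity(mags, dirs, k=4):
    # mags: (P, F) magnitude spectra; dirs: (P, 3) unit direction vectors.
    # Neighbors are found by angular proximity (largest dot product).
    cos = dirs @ dirs.T                      # pairwise cosine similarity
    np.fill_diagonal(cos, -np.inf)           # exclude self-matches
    nbrs = np.argsort(-cos, axis=1)[:, :k]   # k nearest neighbors per dir
    diffs = mags[:, None, :] - mags[nbrs]    # (P, k, F) neighbor deltas
    return np.mean(diffs ** 2)

# Toy check: a spatially constant field incurs zero penalty
rng = np.random.default_rng(1)
dirs = rng.normal(size=(16, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
smooth_field = np.ones((16, 8))
rough_field = rng.normal(size=(16, 8))
print(neighbor_dissimilarity(smooth_field, dirs))  # 0.0
print(neighbor_dissimilarity(rough_field, dirs) > 0)
```

Adding such a term to the training objective pushes the model toward magnitude responses that vary smoothly across adjacent directions, which is the spatial-coherence behavior the abstract attributes to the loss.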