HRTFformer: A Spatially-Aware Transformer for Personalized HRTF Upsampling in Immersive Audio Rendering

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Personalized HRTFs are difficult to deploy at scale because of the complexity of the measurement process, and existing upsampling methods lose long-range spatial consistency and auditory realism at high upsampling factors. This paper proposes a Transformer-based HRTF upsampling framework operating in the spherical harmonic (SH) domain. It introduces a spatially aware attention mechanism and a neighbor dissimilarity loss to explicitly model long-range correlations on the spherical manifold, jointly optimizing the SH-domain representation, perceptual localization constraints, and spectral distortion metrics. Experiments show that the approach significantly outperforms state-of-the-art methods in both objective spectral fidelity and subjective sound localization accuracy, particularly at high upsampling factors (≥8×), where it preserves superior spatial continuity and reconstruction fidelity.
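The entry does not give the loss formula. As a rough illustration of the idea behind a neighbor dissimilarity loss, the sketch below penalizes differences between each direction's log-magnitude spectrum and those of its neighboring directions on the measurement sphere; the function name, the log-magnitude representation, and the precomputed neighbor lists are assumptions, not the paper's actual implementation.

```python
import numpy as np

def neighbor_dissimilarity_loss(mags, neighbors):
    """Hypothetical smoothness penalty: mean squared difference between each
    direction's log-magnitude HRTF spectrum and its spherical neighbors'.

    mags      : (D, F) array, one log-magnitude spectrum per direction.
    neighbors : list of D lists, each holding the indices of that
                direction's neighbors on the sphere.
    """
    total, count = 0.0, 0
    for d, nbrs in enumerate(neighbors):
        for n in nbrs:
            # Penalize magnitude discontinuities between adjacent directions.
            total += np.mean((mags[d] - mags[n]) ** 2)
            count += 1
    return total / max(count, 1)
```

Minimizing such a term alongside a reconstruction loss would push the upsampled HRTF magnitudes to vary smoothly across the sphere, which is the spatial-coherence effect the abstract describes.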

📝 Abstract
Personalized Head-Related Transfer Functions (HRTFs) are starting to be introduced in many commercial immersive audio applications and are crucial for realistic spatial audio rendering. However, one of the main hesitations regarding their introduction is that creating personalized HRTFs is impractical at scale due to the complexity of the HRTF measurement process. To mitigate this drawback, HRTF spatial upsampling has been proposed with the aim of reducing the number of measurements required. While prior work has seen success with different machine learning (ML) approaches, these models often struggle with long-range spatial consistency and generalization at high upsampling factors. In this paper, we propose a novel transformer-based architecture for HRTF upsampling, leveraging the attention mechanism to better capture spatial correlations across the HRTF sphere. Working in the spherical harmonic (SH) domain, our model learns to reconstruct high-resolution HRTFs from sparse input measurements with significantly improved accuracy. To enhance spatial coherence, we introduce a neighbor dissimilarity loss that promotes magnitude smoothness, yielding more realistic upsampling. We evaluate our method using both perceptual localization models and objective spectral distortion metrics. Experiments show that our model surpasses leading methods by a substantial margin in generating realistic, high-fidelity HRTFs.
Problem

Research questions and friction points this paper is trying to address.

Reducing personalized HRTF measurement complexity for scalable immersive audio
Improving spatial consistency in HRTF upsampling with transformer architecture
Enhancing high-fidelity HRTF reconstruction from sparse input measurements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer architecture for HRTF upsampling
Attention mechanism captures spatial correlations
Neighbor dissimilarity loss enhances spatial coherence
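The entry does not describe how the attention mechanism is made spatially aware. One common way to inject geometry into attention, sketched below as an assumption rather than the paper's method, is to bias the attention logits by the great-circle angular distance between measurement directions, so nearby directions attend to each other more strongly:

```python
import numpy as np

def angular_distance(dirs):
    """Pairwise great-circle distance (radians) between unit direction
    vectors; dirs has shape (D, 3)."""
    cos = np.clip(dirs @ dirs.T, -1.0, 1.0)
    return np.arccos(cos)

def spatially_biased_attention(q, k, v, dirs, tau=1.0):
    """Scaled dot-product attention with a hypothetical distance bias that
    down-weights attention between far-apart directions on the sphere."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d) - angular_distance(dirs) / tau
    # Numerically stable softmax over each row of logits.
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v
```

The temperature `tau` is an invented knob: large values recover plain attention, small values localize it on the sphere. The paper's actual mechanism may differ.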