🤖 AI Summary
To address degraded microphone array-based sound source localization accuracy in outdoor low-SNR environments (down to 1 dB), this paper proposes a directional localization method synergizing a far-field microphone array with an asynchronous close-talking microphone. The core innovation lies in introducing, for the first time, a near-talk signal-guided coarse temporal alignment strategy, integrated with time-domain acoustic echo cancellation and ideal ratio mask estimation, enabling selective source separation and high-precision DOA estimation under strong interference. Experiments demonstrate an average angular error of only 4°, with 95% of localization errors ≤5°—significantly outperforming existing state-of-the-art methods. This approach establishes a robust, real-time acoustic sensing foundation for human–machine voice interaction on autonomous platforms operating in complex野外 scenarios.
📝 Abstract
This paper presents a sound source localization strategy that relies on a microphone array embedded in an unmanned ground vehicle and an asynchronous close-talking microphone near the operator. A signal coarse alignment strategy is combined with a time-domain acoustic echo cancellation algorithm to estimate a time-frequency ideal ratio mask to isolate the target speech from interferences and environmental noise. This allows selective sound source localization, and provides the robot with the direction of arrival of sound from the active operator, which enables rich interaction in noisy scenarios. Results demonstrate an average angle error of 4 degrees and an accuracy within 5 degrees of 95% at a signal-to-noise ratio of 1dB, which is significantly superior to the state-of-the-art localization methods.