State Space and Self-Attention Collaborative Network with Feature Aggregation for DOA Estimation

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low direction-of-arrival (DOA) estimation accuracy caused by dynamic time-frequency feature variations, and the difficulty existing time-series models have in balancing modeling capability with computational efficiency, this paper proposes FA-Stateformer, a state-space and self-attention collaborative network with feature aggregation. Methodologically, it integrates bidirectional Mamba modules with a lightweight Conformer architecture, incorporating a temporal shift mechanism and squeeze-and-excitation-based feature aggregation to expand the receptive field while suppressing computational redundancy. The framework jointly models long-range temporal dependencies and salient frequency-band responses, significantly enhancing feature discriminability. Extensive experiments on multiple standard benchmarks show that the proposed method achieves superior DOA estimation accuracy, outperforming mainstream baselines by an average of 2.1°, while accelerating inference by 37%, thereby achieving a synergistic optimization of accuracy and efficiency.
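
The paper's code is not included here; the squeeze-and-excitation idea it builds on can be sketched in a few lines of NumPy. This is a generic SE gate over a (channels, time) feature map, not the authors' implementation; the weights `w1`/`w2` and the reduction ratio are hypothetical stand-ins.

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Generic squeeze-and-excitation channel reweighting (illustration only).

    x:  (channels, time) feature map
    w1: (channels // r, channels) reduction weights
    w2: (channels, channels // r) expansion weights
    """
    s = x.mean(axis=1)                       # squeeze: global average over time
    z = np.maximum(w1 @ s, 0.0)              # excitation: FC -> ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))   # FC -> sigmoid, one gate per channel
    return x * gate[:, None]                 # rescale each channel by its gate

# toy example: 4 channels, 8 time steps, reduction ratio r = 2
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((2, 4)) * 0.1
w2 = rng.standard_normal((4, 2)) * 0.1
y = squeeze_excite(x, w1, w2)
```

Because each gate lies in (0, 1), the module can only attenuate channels, which is how SE-style aggregation emphasizes salient frequency-band responses at negligible parameter cost.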

📝 Abstract
Accurate direction-of-arrival (DOA) estimation for sound sources is challenging due to the continuous changes in acoustic characteristics across time and frequency. In such scenarios, accurate localization relies on the ability to aggregate relevant features and model temporal dependencies effectively. In time series modeling, achieving a balance between model performance and computational efficiency remains a significant challenge. To address this, we propose FA-Stateformer, a state space and self-attention collaborative network with feature aggregation. The proposed network first employs a feature aggregation module to enhance informative features across both temporal and spectral dimensions. This is followed by a lightweight Conformer architecture inspired by the squeeze-and-excitation mechanism, where the feedforward layers are compressed to reduce redundancy and parameter overhead. Additionally, a temporal shift mechanism is incorporated to expand the receptive field of convolutional layers while maintaining a compact kernel size. To further enhance sequence modeling capabilities, a bidirectional Mamba module is introduced, enabling efficient state-space-based representation of temporal dependencies in both forward and backward directions. The remaining self-attention layers are combined with the Mamba blocks, forming a collaborative modeling framework that achieves a balance between representation capacity and computational efficiency. Extensive experiments demonstrate that FA-Stateformer achieves superior performance and efficiency compared to conventional architectures.
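
The temporal shift mechanism mentioned in the abstract is a standard trick (as in TSM-style networks) that can be illustrated without the authors' code. The sketch below is a minimal NumPy version under assumed conventions: a (channels, time) map where one quarter of the channels looks one step ahead, one quarter one step back, and the rest stay put.

```python
import numpy as np

def temporal_shift(x, fold=4):
    """Shift a fraction of channels by one step along time (illustrative sketch).

    x: (channels, time). The first C//fold channels are shifted one step
    toward the past (so they see the future), the next C//fold one step
    toward the future (so they see the past); remaining channels are
    untouched. Vacated positions are zero-padded.
    """
    c = x.shape[0] // fold
    out = x.copy()
    out[:c, :-1] = x[:c, 1:]        # group 1: look one step ahead
    out[:c, -1] = 0.0
    out[c:2*c, 1:] = x[c:2*c, :-1]  # group 2: look one step back
    out[c:2*c, 0] = 0.0
    return out

x = np.arange(32, dtype=float).reshape(8, 4)  # 8 channels, 4 time steps
y = temporal_shift(x)
```

After the shift, even a small convolutional kernel at time t mixes information from t-1, t, and t+1 across channel groups, which is how the receptive field grows while the kernel stays compact and no parameters are added.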
Problem

Research questions and friction points this paper is trying to address.

Addresses accurate sound source localization under changing acoustic conditions
Balances model performance with computational efficiency in time series
Enhances feature aggregation and temporal dependency modeling capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature aggregation module enhances temporal-spectral features
Lightweight Conformer with compressed feedforward reduces parameters
Bidirectional Mamba enables efficient state-space sequence modeling
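
The bidirectional state-space idea can be conveyed with a drastically simplified scan. The sketch below is not Mamba (which uses input-dependent, selective parameters); it is a first-order linear recurrence run forward and backward over time with fixed scalar coefficients `a` and `b`, chosen here only for illustration.

```python
import numpy as np

def ssm_scan(x, a, b):
    """First-order linear state-space recurrence h_t = a*h_{t-1} + b*x_t."""
    h = np.zeros_like(x[0])
    out = []
    for x_t in x:              # sequential scan over time steps
        h = a * h + b * x_t
        out.append(h)
    return np.stack(out)

def bidirectional_scan(x, a=0.9, b=0.1):
    """Run the scan forward and backward, then concatenate the features,
    so every time step carries context from both directions."""
    fwd = ssm_scan(x, a, b)
    bwd = ssm_scan(x[::-1], a, b)[::-1]
    return np.concatenate([fwd, bwd], axis=-1)

x = np.random.default_rng(1).standard_normal((16, 8))  # (time, features)
y = bidirectional_scan(x)  # (16, 16): forward + backward states
```

The recurrence is linear in sequence length, which is the efficiency argument for pairing state-space blocks with (quadratic-cost) self-attention layers in a collaborative stack.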