- Exploring Resolution-Wise Shared Attention in Hybrid Mamba-U-Nets for Improved Cross-Corpus Speech Enhancement
- MambAttention: Mamba with Multi-Head Attention for Generalizable Single-Channel Speech Enhancement
- xLSTM-SENet: xLSTM for Single-Channel Speech Enhancement
- Detecting and Defending Against Adversarial Attacks on Automatic Speech Recognition via Diffusion Models
Awards:
- 2025 May: Extension of DeiC compute grant with additional resources for the LUMI Supercomputer
- 2025 January: Awarded DeiC compute grant for access to the LUMI Supercomputer
- 2025 IEEE ICASSP Conference Oral Presentation
- 2023 AAU SEMCON (7th-semester conference) winner among 22 participating groups
Research Experience
Position: Ph.D. Fellow; Work Experience: Conducting research at the Department of Electronic Systems, Aalborg University, and the Centre for Acoustic Signal Processing Research (CASPR).
Education
Degree: Ph.D. Fellow (4+4 programme combining a long master's thesis with the Ph.D.); University: Department of Electronic Systems, Aalborg University, and Centre for Acoustic Signal Processing Research (CASPR); Advisors: Prof. Zheng-Hua Tan, Prof. Jan Østergaard, Prof. Jesper Jensen; Time: Started as Ph.D. Fellow in September 2024.
Background
Research Interests: Developing and evaluating recent sequence-modelling neural networks, such as Mamba and xLSTM, for single-channel speech enhancement. Professional Field: Electronic Systems, Acoustic Signal Processing.