Representing Sounds as Neural Amplitude Fields: A Benchmark of Coordinate-MLPs and a Fourier Kolmogorov-Arnold Framework

📅 2025-04-11
🏛️ AAAI Conference on Artificial Intelligence
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing Coordinate-MLPs for audio implicit representation lack systematic investigation and suffer from sensitivity to hyperparameters, reliance on complex positional encodings, and fragile initialization schemes. This work establishes the first comprehensive benchmark for audio signals using Coordinate-MLPs, evaluating combinations of three positional encoding strategies and sixteen activation functions. Furthermore, it introduces Fourier-ASR, a novel framework grounded in Fourier series and the Kolmogorov–Arnold representation theorem, which incorporates a Fourier-KAN network and a frequency-adaptive learning strategy (FaLS) to achieve robust audio representation without any additional positional encoding. Experiments demonstrate that the proposed method significantly outperforms conventional Coordinate-MLPs on both speech and music datasets, effectively modeling high-frequency components and mitigating low-frequency overfitting—all without requiring meticulous hyperparameter tuning.
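The benchmark combines positional encoding strategies with activation functions for coordinate-MLPs. As an illustration of what such an encoding looks like, here is a minimal sketch of a standard Fourier-feature positional encoding for a 1-D time coordinate; the function name, octave-spaced frequency schedule, and `num_freqs` parameter are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def fourier_encode(t, num_freqs=8):
    """Map scalar time coordinates t in [0, 1] to Fourier features.

    Returns an array of shape (len(t), 2 * num_freqs) of sin/cos pairs
    at octave-spaced frequencies -- a common positional encoding for
    coordinate-MLPs (illustrative; not the paper's exact formulation).
    """
    t = np.asarray(t, dtype=np.float64).reshape(-1, 1)  # (N, 1)
    freqs = 2.0 ** np.arange(num_freqs) * np.pi         # (F,) octave spacing
    angles = t * freqs                                  # (N, F) broadcast
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

# Encode 4 sample positions of a normalized 1-second clip
features = fourier_encode(np.linspace(0.0, 1.0, 4), num_freqs=8)
print(features.shape)  # (4, 16)
```

The lifted features are then fed to an MLP in place of the raw coordinate, which is what makes the choice of encoding and activation function interact in the benchmark.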

📝 Abstract
Although Coordinate-MLP-based implicit neural representations have excelled in representing radiance fields, 3D shapes, and images, their application to audio signals remains underexplored. To fill this gap, we investigate existing implicit neural representations, from which we extract three types of positional encoding and sixteen commonly used activation functions. Through combinatorial design, we establish the first benchmark for Coordinate-MLPs in audio signal representation. Our benchmark reveals that Coordinate-MLPs require complex hyperparameter tuning and frequency-dependent initialization, limiting their robustness. To address these issues, we propose Fourier-ASR, a novel framework based on Fourier series and the Kolmogorov-Arnold representation theorem. Fourier-ASR introduces Fourier Kolmogorov-Arnold Networks (Fourier-KAN), which leverage periodicity and strong nonlinearity to represent audio signals, eliminating the need for additional positional encoding. Furthermore, a Frequency-adaptive Learning Strategy (FaLS) is proposed to enhance the convergence of Fourier-KAN by capturing high-frequency components and preventing overfitting to low-frequency signals. Extensive experiments conducted on natural speech and music datasets reveal that: (1) well-designed positional encodings and activation functions in Coordinate-MLPs can effectively improve audio representation quality; and (2) Fourier-ASR can robustly represent complex audio signals without extensive hyperparameter tuning. Looking ahead, the continuity and infinite resolution of implicit audio representations make our research highly promising for tasks such as audio compression, synthesis, and generation.
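To make the Fourier-KAN idea concrete: in the Kolmogorov-Arnold structure, each edge from input j to output i carries its own learnable 1-D function, and Fourier-KAN expresses that function as a truncated Fourier series. The sketch below is a minimal NumPy forward pass under that reading; the class name, coefficient shapes, initialization scale, and `grid_size` frequency count are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

class FourierKANLayer:
    """Minimal Fourier-KAN layer sketch (forward pass only).

    Each edge (i, j) carries a learnable 1-D function as a truncated
    Fourier series:
        phi_ij(x) = sum_k a[i,j,k] * cos(k*x) + b[i,j,k] * sin(k*x)
    and output i sums phi_ij(x_j) over inputs j, following the
    Kolmogorov-Arnold structure. Illustrative, not the paper's exact form.
    """

    def __init__(self, in_dim, out_dim, grid_size=5, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / (np.sqrt(in_dim) * grid_size)  # keep outputs O(1)
        self.a = rng.normal(0.0, scale, (out_dim, in_dim, grid_size))
        self.b = rng.normal(0.0, scale, (out_dim, in_dim, grid_size))
        self.k = np.arange(1, grid_size + 1)  # integer frequencies 1..G

    def forward(self, x):
        # x: (batch, in_dim) -> (batch, out_dim)
        angles = x[:, :, None] * self.k            # (batch, in, grid)
        c, s = np.cos(angles), np.sin(angles)
        # contract over the input dimension and the frequency index
        return (np.einsum('oig,big->bo', self.a, c)
                + np.einsum('oig,big->bo', self.b, s))

layer = FourierKANLayer(in_dim=1, out_dim=4)
y = layer.forward(np.linspace(-1.0, 1.0, 8).reshape(-1, 1))
print(y.shape)  # (8, 4)
```

Because every basis function has an integer frequency, the layer is periodic in its inputs by construction, which is the property the abstract credits with removing the need for a separate positional encoding.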
Problem

Research questions and friction points this paper is trying to address.

implicit neural representation
audio signal representation
Coordinate-MLP
robustness
hyperparameter tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fourier-KAN
Implicit Neural Representation
Audio Signal Modeling
Frequency-adaptive Learning Strategy
Coordinate-MLP