Anti-aliasing of neural distortion effects via model fine tuning

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address aliasing artifacts in neural models of guitar distortion, which arise because nonlinear activations expand the signal's bandwidth beyond the Nyquist frequency, this paper proposes a teacher–student fine-tuning framework. The teacher is a pre-trained model with frozen weights; the student, a trainable copy, is fine-tuned against an aliasing-free dataset built by driving the teacher with sinusoids and removing all non-harmonic components from the output spectra. Combining this spectral purification with distillation-style fine-tuning suppresses aliasing without the cost of real-time oversampling. Experiments show the method outperforms a 2× oversampling baseline in most test cases, with an LSTM-based student giving the best trade-off between aliasing reduction and perceptual fidelity to the analog reference hardware. Overall, the study points toward high-fidelity, computationally efficient neural modeling of audio nonlinearities.

📝 Abstract
Neural networks have become ubiquitous in guitar distortion effects modelling in recent years. Despite their ability to yield perceptually convincing models, they are susceptible to frequency aliasing when driven by high-frequency, high-gain inputs. Nonlinear activation functions create both the desired harmonic distortion and unwanted aliasing distortion as the bandwidth of the signal is expanded beyond the Nyquist frequency. Here, we present a method for reducing aliasing in neural models via a teacher-student fine-tuning approach, where the teacher is a pre-trained model with its weights frozen, and the student is a copy of it with learnable parameters. The student is fine-tuned against an aliasing-free dataset generated by passing sinusoids through the original model and removing non-harmonic components from the output spectra. Our results show that this method significantly suppresses aliasing for both long short-term memory (LSTM) networks and temporal convolutional networks (TCNs). In the majority of our case studies, the reduction in aliasing was greater than that achieved by two-times oversampling. One side effect of the proposed method is that harmonic distortion components are also affected. This adverse effect was found to be model-dependent, with the LSTM models giving the best balance between anti-aliasing and preserving the perceived similarity to an analog reference device.
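The aliasing-free targets described above can be illustrated with a small numpy sketch: drive the model with a sinusoid at a known fundamental, then zero every FFT bin that is not near an integer multiple of that fundamental. This is only a minimal illustration of the spectral-purification idea, not the paper's implementation; the function name and the `tol_bins` parameter are assumptions.

```python
import numpy as np

def purify_harmonics(y, f0, sr, tol_bins=1):
    """Keep only spectral bins near integer multiples of f0; zero the rest.

    y  : model output for a sinusoidal probe at f0 (1-D array)
    f0 : fundamental frequency of the probe sinusoid in Hz
    sr : sample rate in Hz
    """
    n = len(y)
    Y = np.fft.rfft(y)
    bin_width = sr / n
    mask = np.zeros(len(Y), dtype=bool)
    # mark bins within tol_bins of each harmonic k*f0 up to Nyquist
    for k in range(int((sr / 2) // f0) + 1):
        idx = int(round(k * f0 / bin_width))
        lo, hi = max(idx - tol_bins, 0), min(idx + tol_bins, len(Y) - 1)
        mask[lo : hi + 1] = True
    Y[~mask] = 0.0  # everything off the harmonic grid is treated as aliasing
    return np.fft.irfft(Y, n=n)
```

In practice the probe frequency should fall exactly on an FFT bin (or the signal should be windowed) so that harmonic energy does not leak into neighbouring bins and get discarded.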
Problem

Research questions and friction points this paper is trying to address.

Reducing frequency aliasing in neural guitar distortion models
Using teacher-student fine-tuning to suppress unwanted aliasing artifacts
Balancing anti-aliasing and harmonic preservation in LSTM/TCN models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Teacher-student fine-tuning reduces neural aliasing
Aliasing-free dataset trains the student model
Method outperforms two-times oversampling in most cases
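The frozen-teacher / trainable-copy structure can be sketched with a toy stand-in model: the "teacher" parameters stay fixed, the student starts as an exact copy, and only the student is updated against the purified targets. Everything here (the `tanh` gain model, the scaled target, the learning rate) is a hypothetical placeholder, not the paper's LSTM/TCN setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, gain):
    # toy nonlinearity standing in for a neural distortion model
    return np.tanh(gain * x)

teacher_gain = 4.0           # frozen teacher parameter
student_gain = teacher_gain  # student initialised as an exact copy

x = rng.uniform(-1.0, 1.0, size=2048)   # training input
# in the paper the target is the teacher's output with non-harmonic
# (aliased) components removed; here a scaled output is a placeholder
target = model(x, teacher_gain) * 0.95

lr = 0.05
for _ in range(200):
    pred = model(x, student_gain)
    err = pred - target
    # dL/dgain for L = mean(err^2) with model = tanh(gain * x)
    grad = np.mean(2.0 * err * (1.0 - pred**2) * x)
    student_gain -= lr * grad           # only the student is updated

loss = np.mean((model(x, student_gain) - target) ** 2)
```

The key point the sketch preserves is that the teacher's parameters are never touched during fine-tuning; the student drifts away from its initial copy only as far as the purified targets pull it.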