🤖 AI Summary
Audio Chord Estimation (ACE) faces two key challenges: subjective labeling and severe class imbalance. To address them, we propose a consonance-aware ACE framework. First, we construct a chord distance metric based on acoustic consonance to quantify labeling inconsistency. Second, we design a decomposed output architecture that separately models the root note, bass note, and note activations. Third, we introduce a consonance-driven label smoothing strategy that incorporates harmonic priors during training. The model adopts a Conformer backbone and jointly optimizes the decomposed tasks. Experiments on standard benchmarks show significant gains in accuracy and robustness over prior state-of-the-art methods, breaking through the performance ceiling of existing systems while offering interpretability and strong generalization across diverse musical contexts.
📝 Abstract
Audio Chord Estimation (ACE) holds a pivotal role in music information retrieval, having garnered attention for over two decades due to its relevance to music transcription and analysis. Despite notable advances, challenges persist, particularly those arising from the unique characteristics of harmonic content, which have caused the performance of existing systems to hit a glass ceiling. These challenges include annotator subjectivity, where differing interpretations among annotators lead to inconsistent labels, and class imbalance within chord datasets, where certain chord classes are heavily over-represented, complicating both model training and evaluation. As a first contribution, this paper evaluates inter-annotator agreement in chord annotations using metrics that extend beyond traditional binary measures. In addition, we propose a consonance-informed distance metric that reflects the perceptual similarity between harmonic annotations. Our analysis suggests that consonance-based distance metrics capture musically meaningful agreement between annotations more effectively. Building on these findings, we introduce a novel Conformer-based ACE model that integrates consonance concepts through consonance-based label smoothing. The proposed model also addresses class imbalance by separately estimating root, bass, and full note activations, enabling chord labels to be reconstructed from the decomposed outputs.
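The consonance-based label smoothing described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tiny chord vocabulary, the pitch-class representation, and the Jaccard-style `chord_distance` are simplified stand-ins for the paper's acoustic consonance metric, and `eps`/`tau` are hypothetical hyperparameters. The idea is that off-target probability mass is distributed over harmonically close chords rather than uniformly.

```python
import numpy as np

# Toy chord vocabulary: each chord as a set of pitch classes (0 = C).
# A stand-in for the paper's full vocabulary and consonance metric.
VOCAB = {
    "C:maj": {0, 4, 7},
    "C:min": {0, 3, 7},
    "A:min": {9, 0, 4},
    "G:maj": {7, 11, 2},
    "F:maj": {5, 9, 0},
}
NAMES = list(VOCAB)

def chord_distance(a, b):
    """Toy harmonic distance: 1 - Jaccard overlap of pitch-class sets
    (the paper uses an acoustic-consonance-based metric instead)."""
    pa, pb = VOCAB[a], VOCAB[b]
    return 1.0 - len(pa & pb) / len(pa | pb)

def consonance_smoothed_target(true_chord, eps=0.1, tau=0.25):
    """Keep 1 - eps mass on the true label and spread eps over the
    remaining chords, weighted by harmonic closeness to the label."""
    sims = np.array([np.exp(-chord_distance(true_chord, c) / tau)
                     for c in NAMES])
    sims[NAMES.index(true_chord)] = 0.0          # off-target mass only
    target = np.zeros(len(NAMES))
    target[NAMES.index(true_chord)] = 1.0 - eps  # most mass on the label
    if sims.sum() > 0:
        target += eps * sims / sims.sum()
    return target

t = consonance_smoothed_target("C:maj")
print(dict(zip(NAMES, t.round(3))))
```

With this toy distance, C:min and A:min (which each share two pitch classes with C:maj) receive more of the smoothed mass than G:maj or F:maj, so the training signal penalizes harmonically plausible confusions less than distant ones. Standard uniform label smoothing is recovered as `tau` grows large.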