🤖 AI Summary
Existing open-source audio annotation tools struggle to capture the nuanced subjective differences in human perception of musical semantics, thereby hindering intent alignment between humans and machines in music information retrieval. To address this limitation, this work proposes LabelBuddy—an open-source, collaborative, AI-assisted audio annotation platform. Its key innovation lies in integrating a containerized backend architecture, a multi-user consensus mechanism, and a pluggable model interface that enables flexible integration of custom models—including large audio language models—for pre-annotation. Furthermore, the platform supports dynamic human-AI collaborative labeling through extensible AI agents. LabelBuddy provides a scalable infrastructure for community-driven semantic audio representation learning and iterative model development.
📝 Abstract
The advancement of machine learning (ML), Large Audio Language Models (LALMs), and autonomous AI agents in Music Information Retrieval (MIR) necessitates a shift from static tagging to rich, human-aligned representation learning. However, the scarcity of open-source infrastructure capable of capturing the subjective nuances of audio annotation remains a critical bottleneck. This paper introduces **LabelBuddy**, an open-source collaborative auto-tagging audio annotation tool designed to bridge the gap between human intent and machine understanding. Unlike static tools, it decouples the interface from inference via containerized backends, allowing users to plug in custom models for AI-assisted pre-annotation. We describe the system architecture, which supports multi-user consensus and containerized model isolation, and outline a roadmap for integrating AI agents and LALMs. Code available at https://github.com/GiannisProkopiou/gsoc2022-Label-buddy.
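The decoupling of interface from inference described above suggests a narrow contract between the annotation UI and any pluggable model backend. The following is a minimal, hypothetical sketch of such a contract; the names (`PreAnnotationBackend`, `Segment`, `pre_annotate`) are illustrative assumptions, not LabelBuddy's actual API.

```python
# Hypothetical sketch of a pluggable pre-annotation backend, in the spirit of
# LabelBuddy's decoupled architecture. All names here are illustrative.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class Segment:
    """One suggested annotation over a time span of the audio clip."""
    start: float        # seconds
    end: float          # seconds
    label: str          # semantic tag, e.g. "music", "speech"
    confidence: float   # model confidence in [0, 1]


class PreAnnotationBackend(ABC):
    """Contract a containerized model service (e.g. a custom model or an
    LALM wrapper) would implement so the UI stays model-agnostic."""

    @abstractmethod
    def annotate(self, audio_path: str) -> List[Segment]:
        ...


class DummyTagger(PreAnnotationBackend):
    """Stand-in model: tags the whole clip with a single fixed label."""

    def annotate(self, audio_path: str) -> List[Segment]:
        return [Segment(start=0.0, end=10.0, label="music", confidence=0.9)]


def pre_annotate(backend: PreAnnotationBackend, audio_path: str,
                 min_confidence: float = 0.5) -> List[Segment]:
    # Filter low-confidence suggestions before surfacing them to human
    # annotators, who then accept, reject, or refine each segment.
    return [s for s in backend.annotate(audio_path)
            if s.confidence >= min_confidence]
```

In practice each backend would run in its own container and be swapped without touching the interface, which is the isolation property the abstract emphasizes.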