🤖 AI Summary
Traditional sonification methods typically map data directly to basic acoustic parameters (e.g., pitch, loudness), neglecting higher-level musical rhetoric—such as melody, harmony, and rhythm—resulting in ambiguous semantic interpretation and diminished aesthetic quality. To address this, we propose the first systematic framework integrating classical music theory into data sonification, enabling musically grounded data melodization. Our approach introduces a music-theoretically informed mapping mechanism grounded in tonality, formal structure, and counterpoint principles, and unifies melody generation, harmonic accompaniment, and rhythmic modeling to achieve organic alignment between data semantics and musical affect. Experimental evaluation demonstrates significant improvements in auditory comfort and perceptual discriminability, effectively mitigating harshness and ambiguity while preserving data fidelity. Crucially, the resulting sonifications exhibit enhanced aesthetic value and expressive power, advancing sonification from mere auditory display toward musically meaningful representation.
📝 Abstract
We propose a design space for data melodification, where standard visualization idioms and fundamental data characteristics map to rhetorical devices of music for a more affective experience of data. Traditional data sonification transforms data into sound by mapping it to parameters such as pitch, volume, and duration. Often and regrettably, this mapping leaves behind melody, harmony, rhythm, and the other musical devices that constitute the centuries-long persuasive and expressive power of music. What results is the occasional, unintentional sense of tinnitus and horror-film-like impending doom, caused by a disconnect between the semantics of data and sound. Through this work we ask: can the aestheticization of sonification through (classical) music theory make data simultaneously accessible, meaningful, and pleasing to one's ears?
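To make the critique concrete, here is a minimal Python sketch of the kind of direct parameter mapping the abstract describes: data values are scaled linearly onto pitch with no regard for key, melody, or harmony. The function names and MIDI range are illustrative assumptions, not part of the paper's framework; the point is that nothing in this mapping prevents dissonant or "impending doom" note sequences.

```python
def sonify_naive(values, low_midi=48, high_midi=84):
    """Linearly map each data value onto a MIDI note number (illustrative only).

    Note that nothing constrains the output to a scale or key: adjacent data
    points can land a tritone or semitone apart, which is exactly the kind of
    unintended dissonance the paper's music-theoretic mapping aims to avoid.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for constant data
    return [round(low_midi + (v - lo) / span * (high_midi - low_midi))
            for v in values]

def midi_to_hz(note):
    """Standard equal-temperament conversion (A4 = MIDI 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

data = [3, 7, 2, 9, 9, 1]
pitches = sonify_naive(data)        # e.g. [57, 75, 52, 84, 84, 48]
frequencies = [midi_to_hz(p) for p in pitches]
```

A music-theoretically informed mapping, by contrast, would quantize these pitches to a chosen scale and shape them with melodic and harmonic constraints rather than emitting raw chromatic values.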