🤖 AI Summary
Current AI systems rely heavily on text-based representations, excluding approximately 700 million people in rural and remote regions who primarily use spoken, often unwritten or low-resource, languages. To address this, we propose a fully textless, end-to-end audio-to-audio machine intelligence framework that bypasses textual intermediaries entirely, directly modeling semantic and expressive content from raw speech. Our method introduces (1) the Multiscale Audio-Semantic Transform (MAST), a representation enabling deep cross-lingual semantic disentanglement; and (2) a mean-field fractional diffusion generative paradigm grounded in fractional Brownian motion, supporting high-fidelity, semantically consistent speech synthesis and translation without text supervision. The framework is agnostic to the underlying audio representation, including spectrograms, wavelets, scalograms, and discrete units, which enhances its generalizability, robustness, and scalability for under-digitized languages.
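As a rough illustration of multi-scale audio analysis (the MAST representation itself is not specified in this summary), the minimal Python sketch below computes log-magnitude spectrograms at several window lengths; the function name and parameters are placeholders chosen for this example, not the paper's implementation.

```python
import numpy as np

def multiscale_spectrograms(wave: np.ndarray, win_sizes=(256, 1024, 4096)) -> list[np.ndarray]:
    """Illustrative multi-scale front end: log-magnitude STFTs at several window sizes.

    Short windows resolve fast tonal/prosodic detail; long windows resolve slower
    spectral structure. Stacking the scales gives a simple multi-resolution view.
    """
    scales = []
    for win in win_sizes:
        hop = win // 4
        window = np.hanning(win)
        n_frames = 1 + max(0, (len(wave) - win) // hop)
        frames = np.stack([wave[i * hop : i * hop + win] * window
                           for i in range(n_frames)])
        spec = np.abs(np.fft.rfft(frames, axis=-1))   # (frames, freq_bins)
        scales.append(np.log1p(spec).T)               # (freq_bins, frames)
    return scales

# Example on a synthetic two-tone signal at 16 kHz.
sr = 16_000
t = np.arange(sr) / sr
sig = 0.5 * np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
feats = multiscale_spectrograms(sig)
```

An actual MAST-style representation would go further, disentangling semantic content from speaker, tonal, and expressive factors across these scales; the sketch only shows the multi-resolution analysis idea.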
📝 Abstract
While global linguistic diversity spans more than 7,164 recognized languages, the current dominant architecture of machine intelligence remains fundamentally biased toward written text. This bias excludes over 700 million people, particularly in rural and remote regions, who are audio-literate. In this work, we introduce a fully textless, audio-to-audio machine intelligence framework designed to serve this underserved population, as well as users who prefer the efficiency of audio interaction. Our contributions include novel audio-to-audio translation architectures that bypass text entirely, spanning spectrogram-, scalogram-, wavelet-, and unit-based models. Central to our approach is the Multiscale Audio-Semantic Transform (MAST), a representation that encodes tonal, prosodic, speaker, and expressive features. We further integrate MAST into a fractional diffusion framework of mean-field type driven by fractional Brownian motion, which enables the generation of high-fidelity, semantically consistent speech without reliance on textual supervision. The result is a robust and scalable system capable of learning directly from raw audio, even for languages that are unwritten or rarely digitized. This work represents a fundamental shift toward audio-native machine intelligence systems, expanding access to language technologies for communities historically excluded from the current machine intelligence ecosystem.
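To make the generative component more concrete, the sketch below samples a fractional Brownian motion path, the driving noise behind a fractional diffusion, using its exact covariance and a Cholesky factorization. This illustrates only the noise process, not the paper's mean-field diffusion model, and all names are hypothetical.

```python
import numpy as np

def fbm_sample(n_steps: int, hurst: float, t_max: float = 1.0, seed: int = 0) -> np.ndarray:
    """Sample one fractional Brownian motion path B_H(t) on a uniform grid.

    Uses the exact covariance E[B_H(s) B_H(t)] = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})
    with a Cholesky factorization (O(n^3); fine for illustration, not for production).
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(t_max / n_steps, t_max, n_steps)        # exclude t=0 (zero variance)
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst) - np.abs(s - u) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))   # jitter for numerical stability
    path = L @ rng.standard_normal(n_steps)
    return np.concatenate([[0.0], path])                    # prepend B_H(0) = 0

# Example: a rough (H = 0.3) and a persistent, smoother (H = 0.7) path.
rough = fbm_sample(n_steps=512, hurst=0.3)
smooth = fbm_sample(n_steps=512, hurst=0.7)
```

The Hurst parameter H controls temporal correlation: H = 0.5 recovers standard Brownian motion, H > 0.5 yields persistent paths, and H < 0.5 yields anti-persistent ones, which is what distinguishes a fractional diffusion from the standard diffusion models used in speech generation.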