🤖 AI Summary
This work addresses core challenges in automatic speech recognition (ASR) for Vedic Sanskrit poetry, including sandhi (phonological transformations at word boundaries), archaic pronunciation variants, and complex metrical structures. To this end, we introduce the first domain-specific benchmark corpus: a meticulously annotated dataset comprising 54 hours of recitation audio (30,779 utterances) drawn from the *Ṛgveda* and *Atharvaveda*, uniquely integrating metrical annotations with acoustic prosodic features. Building on this resource, we conduct end-to-end ASR benchmarking of multilingual speech models (e.g., IndicWhisper), systematically evaluating their performance on Vedic Sanskrit poetry recognition. Our experiments establish the first reproducible performance baseline, with IndicWhisper achieving a word error rate (WER) of 28.4%, thereby filling critical gaps in foundational data and evaluation infrastructure for classical Indian languages. This work provides essential support for the digital preservation of Sanskrit texts and for computational humanities research.
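The baseline above is reported as word error rate (WER), the standard ASR metric: the word-level edit distance between the model transcript and the reference, divided by the number of reference words. As a rough illustration of how such a score is computed (this is a generic sketch, not the paper's evaluation code, and the sample transcript pair is invented):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = min edits to turn the first i reference words into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of three reference words -> WER = 1/3
print(wer("agnim ile purohitam", "agnim ide purohitam"))
```

In practice, libraries such as `jiwer` are typically used for this, with transcript normalization (case, punctuation, and, for Sanskrit, a consistent transliteration scheme) applied before scoring.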
📝 Abstract
Sanskrit, an ancient language with a rich linguistic heritage, presents unique challenges for automatic speech recognition (ASR) due to its phonemic complexity and the phonetic transformations that occur at word junctures, similar to the connected speech found in natural conversations. Because of these complexities, ASR for Sanskrit has seen limited exploration, particularly for its poetic verses, which are characterized by intricate prosodic and rhythmic patterns. This gap raises the question: how can we develop an effective ASR system for Sanskrit, particularly one that captures the nuanced features of its poetic form? In this study, we introduce Vedavani, the first comprehensive ASR study focused on Sanskrit Vedic poetry. We present a 54-hour Sanskrit ASR dataset consisting of 30,779 labelled audio samples from the Rig Veda and Atharva Veda, capturing the precise prosodic and rhythmic features that define the language. We also benchmark the dataset on various state-of-the-art multilingual speech models. Our experiments show that IndicWhisper performs best among the SOTA models.