Lost in Transcription: How Speech-to-Text Errors Derail Code Understanding

📅 2026-01-20
🤖 AI Summary
This work addresses the significant degradation in code comprehension performance caused by transcription errors in multilingual voice input, particularly in non-standard English, code-switching, and custom identifier scenarios. We propose the first speech-driven code understanding framework tailored for multilingual users, which leverages large language models (LLMs) to perform code-aware post-processing of automatic speech recognition (ASR) outputs and subsequently integrates with code language models to support question answering and retrieval tasks. We systematically evaluate our approach on four major Indian languages—Hindi, Bengali, Tamil, and Telugu—alongside English, demonstrating across benchmarks such as CodeSearchNet, CoRNStack, and CodeQA that LLM-guided transcription refinement substantially improves both transcription accuracy and downstream task performance. Our findings establish, for the first time, the necessity and feasibility of building code-sensitive speech interfaces.

📝 Abstract
Code understanding is a foundational capability in software engineering tools and developer workflows. However, most existing systems are designed for English-speaking users interacting via keyboards, which limits accessibility in multilingual and voice-first settings, particularly in regions like India. Voice-based interfaces offer a more inclusive modality, but spoken queries involving code present unique challenges: non-standard English usage, domain-specific vocabulary, and custom identifiers such as variable and function names, often combined with code-mixed expressions. In this work, we develop a multilingual speech-driven framework for code understanding that accepts spoken queries in the user's native language, transcribes them using Automatic Speech Recognition (ASR), applies code-aware refinement of the ASR output using Large Language Models (LLMs), and interfaces with code models to perform tasks such as code question answering and code retrieval, evaluated on benchmarks such as CodeSearchNet, CoRNStack, and CodeQA. Focusing on four widely spoken Indic languages and English, we systematically characterize how transcription errors impact downstream task performance. We also identify key failure modes in ASR for code and demonstrate that LLM-guided refinement significantly improves performance across both the transcription and code understanding stages. Our findings underscore the need for code-sensitive adaptations in speech interfaces and offer a practical solution for building robust, multilingual voice-driven programming tools.
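To make the pipeline concrete, here is a minimal sketch of the stages the abstract describes: an ASR transcript is repaired against the project's identifier vocabulary before being handed to a retriever. All names here are hypothetical, and the refinement step is a simple lookup table standing in for the paper's LLM-guided post-processing; the retriever is a toy token-overlap scorer standing in for a code language model.

```python
def refine_transcript(asr_text: str, identifiers: list[str]) -> str:
    """Repair ASR errors in code terms by matching spoken forms against
    known identifiers (a stand-in for LLM code-aware post-processing)."""
    corrections = {}
    for ident in identifiers:
        # Spoken form of an identifier: lowercase, underscores read as spaces.
        corrections[ident.lower().replace("_", " ")] = ident
    text = asr_text.lower()
    for spoken, ident in corrections.items():
        text = text.replace(spoken, ident)
    return text


def retrieve(query: str, corpus: dict[str, str]) -> str:
    """Toy retrieval: return the snippet key whose text shares the most
    tokens with the query (stands in for a code retrieval model)."""
    q = set(query.split())
    return max(corpus, key=lambda key: len(q & set(corpus[key].split())))


# Hypothetical codebase vocabulary and indexed snippets.
identifiers = ["parse_config", "load_data"]
corpus = {
    "parse_config": "def parse_config path read yaml settings",
    "load_data": "def load_data path return dataframe",
}

# ASR has split the identifier "parse_config" into plain words.
raw = "what does parse config do with the settings"
query = refine_transcript(raw, identifiers)
print(retrieve(query, corpus))  # parse_config
```

Without the refinement step, the query never contains the token `parse_config`, so identifier-level matching fails; this is the kind of downstream degradation the paper measures.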
Problem

Research questions and friction points this paper is trying to address.

speech-to-text errors
code understanding
multilingual
Automatic Speech Recognition
code-mixed expressions
Innovation

Methods, ideas, or system contributions that make the work stand out.

speech-driven code understanding
code-aware ASR refinement
multilingual programming interfaces
LLM-guided transcription
code-mixed speech recognition