🤖 AI Summary
Existing Indian LLM evaluation benchmarks focus narrowly on factual question answering and lack systematic assessment of graduate-level, culturally grounded disciplinary understanding. Method: We introduce ParamBench, a multidisciplinary evaluation benchmark tailored to the Indian context, comprising around 11.5K Hindi questions across 16 subjects (e.g., history, law, music, archaeology) and featuring complex item types such as assertion-reason pairs, sequence ordering, and list-based matching, all drawn from nationwide postgraduate entrance examinations. Contribution/Results: ParamBench enables fine-grained, subject-wise performance analysis. Evaluating more than 17 open-source models shows that Llama 3.3 70B achieves the highest overall accuracy (48%), yet even the best models remain weak in culture-intensive subjects, including music, classical instruments, politics, and archaeology, exposing persistent limitations of current LLMs in culturally grounded reasoning and contextualized domain expertise.
📝 Abstract
Large language models (LLMs) have been widely evaluated on tasks such as comprehension, question answering, summarization, and code generation. However, their performance on graduate-level, culturally grounded questions in the Indian context remains largely unexplored. Existing Indian benchmarks emphasise basic, fact-oriented queries that offer only limited assessment of deeper disciplinary understanding tailored to the Indian setting. In this paper, we present ParamBench, a benchmark of around 11.5K Hindi-language questions spanning 16 diverse subjects. These questions are primarily derived from nationwide graduate-level entrance examinations, covering topics such as history, music, instruments, yoga, literature, philosophy, and law, specifically for the Indian context. Additionally, we assess the ability of LLMs to handle diverse question formats, such as list-based matching, assertion-reason pairs, and sequence ordering, alongside conventional multiple-choice questions. We evaluated more than 17 open-source LLMs on this benchmark, observing that Llama 3.3 70B attains the highest overall accuracy of 48%. Furthermore, subject-wise analysis indicates that even for the best-performing LLMs, performance remains weak on topics such as music, classical instruments, politics, and archaeology, underscoring persistent challenges in culturally grounded reasoning.
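To make the item types and the subject-wise accuracy metric concrete, here is a minimal Python sketch of how such benchmark items could be represented and scored. The schema (fields like `type`, `options`, `answer`) and the example questions are hypothetical illustrations, not the released ParamBench format; in these exams, assertion-reason and matching items are typically still answered by choosing one option letter, which is what the scorer below assumes.

```python
# Hypothetical representation of ParamBench-style items and a subject-wise
# accuracy scorer. Field names and examples are illustrative assumptions,
# not the dataset's actual schema.
from collections import defaultdict

items = [
    {   # Conventional multiple-choice question
        "subject": "history",
        "type": "mcq",
        "question": "Which ruler founded the Maurya Empire?",
        "options": {"A": "Ashoka", "B": "Chandragupta Maurya",
                    "C": "Bindusara", "D": "Harsha"},
        "answer": "B",
    },
    {   # Assertion-reason pair: judge both statements and their link
        "subject": "history",
        "type": "assertion_reason",
        "question": ("Assertion (A): Ashoka embraced Buddhism after the "
                     "Kalinga war. Reason (R): The Kalinga war caused "
                     "large-scale suffering."),
        "options": {"A": "Both A and R are true and R explains A",
                    "B": "Both A and R are true but R does not explain A",
                    "C": "A is true but R is false",
                    "D": "A is false but R is true"},
        "answer": "A",
    },
]

def accuracy_by_subject(items, predict):
    """Compare a model's predicted option letter to the gold answer,
    aggregating accuracy per subject."""
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        total[item["subject"]] += 1
        if predict(item) == item["answer"]:
            correct[item["subject"]] += 1
    return {s: correct[s] / total[s] for s in total}

# `predict` would wrap an LLM call that formats the question and options
# into a prompt and parses out one option letter; as a stand-in, this
# trivial baseline always answers "A".
print(accuracy_by_subject(items, lambda item: "A"))
```

Under this framing, every format reduces to selecting one option, so a single exact-match scorer covers MCQ, matching, assertion-reason, and sequence-ordering items alike.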