L3Cube-MahaSTS: A Marathi Sentence Similarity Dataset and Models

📅 2025-08-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of high-quality sentence textual similarity (STS) datasets and dedicated models for Marathi, a low-resource Indian language, this work introduces MahaSTS, the first Marathi STS benchmark. It comprises 16,860 human-annotated sentence pairs with continuous similarity scores ranging from 0 to 5, balanced across score bins to mitigate label bias. Building on the Sentence-BERT architecture, the authors propose MahaSBERT-STS-v2, a fine-tuned regression model optimized with a mean squared error loss. Experimental results demonstrate that MahaSBERT-STS-v2 significantly outperforms multilingual baselines, including MahaBERT, MuRIL, and IndicBERT, on the MahaSTS test set. This work fills a critical gap in STS research for low-resource Indian languages. Both the MahaSTS dataset and the MahaSBERT-STS-v2 model are publicly released to foster further advancement in Marathi NLP.

📝 Abstract
We present MahaSTS, a human-annotated Sentence Textual Similarity (STS) dataset for Marathi, along with MahaSBERT-STS-v2, a fine-tuned Sentence-BERT model optimized for regression-based similarity scoring. The MahaSTS dataset consists of 16,860 Marathi sentence pairs labeled with continuous similarity scores in the range of 0-5. To ensure balanced supervision, the dataset is uniformly distributed across six score-based buckets spanning the full 0-5 range, thus reducing label bias and enhancing model stability. We fine-tune the MahaSBERT model on this dataset and benchmark its performance against alternatives such as MahaBERT, MuRIL, IndicBERT, and IndicSBERT. Our experiments demonstrate that MahaSTS enables effective training for sentence similarity tasks in Marathi, highlighting the impact of human-curated annotations, targeted fine-tuning, and structured supervision in low-resource settings. The dataset and model are publicly shared at https://github.com/l3cube-pune/MarathiNLP
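The regression objective described above, scoring a sentence pair by the cosine similarity of its two embeddings and training against gold scores rescaled from the 0-5 range, can be sketched as follows. This is an illustrative sketch of the loss only, not the authors' released code; the helper names and the [0, 1] normalization are assumptions:

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sts_mse_loss(embs_a, embs_b, gold_scores, max_score=5.0):
    """Mean squared error between predicted cosine similarities and
    gold STS scores rescaled from [0, max_score] to [0, 1]."""
    preds = np.array([cosine_sim(u, v) for u, v in zip(embs_a, embs_b)])
    targets = np.array(gold_scores) / max_score
    return float(np.mean((preds - targets) ** 2))
```

In an actual training loop this loss would be minimized over the encoder's parameters, for example via the `sentence-transformers` `CosineSimilarityLoss`; the sketch only isolates the objective itself.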
Problem

Research questions and friction points this paper is trying to address.

Creating an annotated Marathi sentence similarity dataset
Developing a fine-tuned Sentence-BERT model for Marathi
Addressing NLP challenges for low-resource languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-annotated Marathi STS dataset
Fine-tuned Sentence-BERT regression model
Uniform score distribution reduces bias
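The balanced score-bin supervision noted above can be illustrated with a small sketch. The six integer bin boundaries and the downsample-to-smallest-bucket strategy are assumptions for illustration; the paper's exact binning procedure is not reproduced here:

```python
import random
from collections import defaultdict

def balance_by_bucket(pairs, n_buckets=6, seed=0):
    """Group (sent_a, sent_b, score) triples into score buckets
    [0,1), [1,2), ..., [4,5], then downsample every bucket to the
    size of the smallest one, yielding a uniform label distribution."""
    buckets = defaultdict(list)
    for sent_a, sent_b, score in pairs:
        idx = min(int(score), n_buckets - 1)  # assumed integer bin edges
        buckets[idx].append((sent_a, sent_b, score))
    k = min(len(v) for v in buckets.values())
    rng = random.Random(seed)
    balanced = []
    for idx in sorted(buckets):
        balanced.extend(rng.sample(buckets[idx], k))
    return balanced
```

Downsampling trades dataset size for a flat score distribution, which is the bias-reduction effect the bullet describes.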
Aishwarya Mirashi
Pune Institute of Computer Technology, Pune
Ananya Joshi
MKSSS’ Cummins College of Engineering for Women, Pune
Raviraj Joshi
Indian Institute of Technology Madras
computer science, machine learning, natural language processing