🤖 AI Summary
To address the lack of high-quality Sentence Textual Similarity (STS) datasets and dedicated models for Marathi, a low-resource Indian language, this work introduces MahaSTS, the first Marathi STS benchmark. It comprises 16,860 human-annotated sentence pairs with continuous similarity scores ranging from 0 to 5, distributed uniformly across six score bins to mitigate label bias. Building on the Sentence-BERT architecture, the authors propose MahaSBERT-STS-v2, a fine-tuned regression model optimized with a mean-squared-error loss. Experiments show that MahaSBERT-STS-v2 outperforms baselines including MahaBERT, MuRIL, IndicBERT, and IndicSBERT on the MahaSTS test set. This work fills a critical gap in STS research for low-resource Indian languages. Both the MahaSTS dataset and the MahaSBERT-STS-v2 model are publicly released to foster further progress in Marathi NLP.
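The balanced score-bin scheme above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the summary only states that the 0–5 scores are spread uniformly over six bins, so the assumption here is that each bin is centred on one of the six integer scores (round to nearest), and the helper names (`score_bucket`, `bucket_counts`) are hypothetical.

```python
from collections import Counter

def score_bucket(score: float) -> int:
    """Map a continuous similarity score in [0, 5] to one of six bins.

    Assumption: bins are centred on the integer scores 0..5 (round to
    nearest); the paper does not specify the exact bin edges.
    """
    if not 0.0 <= score <= 5.0:
        raise ValueError(f"score {score} outside [0, 5]")
    return round(score)

def bucket_counts(scores) -> Counter:
    """Count how many pairs fall into each bin, to check label balance."""
    return Counter(score_bucket(s) for s in scores)
```

Under this scheme, a perfectly balanced split of the 16,860 pairs would put 2,810 pairs in each of the six bins.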
📝 Abstract
We present MahaSTS, a human-annotated Sentence Textual Similarity (STS) dataset for Marathi, along with MahaSBERT-STS-v2, a fine-tuned Sentence-BERT model optimized for regression-based similarity scoring. The MahaSTS dataset consists of 16,860 Marathi sentence pairs labeled with continuous similarity scores in the range 0–5. To ensure balanced supervision, the dataset is uniformly distributed across six score-based buckets spanning the full 0–5 range, reducing label bias and enhancing model stability. We fine-tune the MahaSBERT model on this dataset and benchmark its performance against alternatives such as MahaBERT, MuRIL, IndicBERT, and IndicSBERT. Our experiments demonstrate that MahaSTS enables effective training for sentence-similarity tasks in Marathi, highlighting the impact of human-curated annotations, targeted fine-tuning, and structured supervision in low-resource settings. The dataset and model are publicly available at https://github.com/l3cube-pune/MarathiNLP.
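The regression objective described above can be made concrete with a small sketch. This is not the authors' implementation: the assumption, following common Sentence-BERT practice for STS regression, is that each sentence pair is encoded into two vectors, their cosine similarity is compared against the gold score rescaled from 0–5 to 0–1, and the model is trained to minimize the mean squared error between the two. The function names and the divide-by-5 normalization are assumptions for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def mse_loss(pred_sims, gold_scores, max_score=5.0):
    """MSE between predicted cosine similarities (in [-1, 1]) and gold
    scores rescaled to [0, 1] by dividing by max_score.

    The normalization is an assumption: the paper states an MSE
    regression objective on 0-5 scores but not the exact rescaling.
    """
    n = len(pred_sims)
    return sum((p - g / max_score) ** 2 for p, g in zip(pred_sims, gold_scores)) / n
```

For example, a pair with gold score 5.0 whose embeddings are identical (cosine 1.0) contributes zero loss, while a gold-5.0 pair with orthogonal embeddings (cosine 0.0) contributes a loss of 1.0.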