🤖 AI Summary
This study addresses the critical scarcity of high-quality supervised data for low-resource languages like Amharic, which severely hinders the development of neural retrieval and instruction-tuned models. To bridge this gap, the authors present and release two standardized, human-validated Amharic datasets: a neural retrieval dataset comprising 1,091 query–positive–negative triplets and an instruction-tuning dataset containing 6,285 prompt–response pairs. The data were constructed through a hybrid approach combining expert authoring, web mining, and large language model generation, followed by rigorous validation by native speakers. These resources are compatible with mainstream retrieval paradigms such as DPR, ColBERT, and SPLADE. Beyond filling a significant void in Amharic information retrieval and text generation, this work also introduces a scalable methodology for building similar datasets in other low-resource languages.
📝 Abstract
Neural retrieval and GPT-style generative models rely on large, high-quality supervised data, which remains scarce for low-resource languages such as Amharic. We release an Amharic data resource consisting of two datasets that support research on (i) neural retrieval and ranking and (ii) instruction-following text generation. The retrieval-ranking dataset contains 1,091 manually verified query-positive-negative document triplets drawn from diverse Amharic sources and constructed to support contrastive training and benchmarking of neural retrievers (e.g., DPR, ColBERT-style late interaction, and SPLADE-style sparse neural retrieval). Triplets are created through a combination of expert-curated queries, web-derived queries, and LLM-assisted generation, with positive and negative documents selected from the web or synthesized by LLMs and then validated by native speakers. The instruction prompt-response dataset comprises 6,285 Amharic prompt-response pairs spanning multiple domains and instruction types, generated with several LLMs and refined through manual review and correction for grammaticality, relevance, fluency, and factual plausibility. We release both datasets with standardized splits and formats (CSV, JSON, JSONL) to enable reproducible work on Amharic retrieval, ranking, and generative modelling. The accompanying construction methodology can be generalized to other low-resource languages.
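The abstract describes two record layouts, query-positive-negative triplets and prompt-response pairs, distributed in CSV/JSON/JSONL. As a minimal sketch, assuming hypothetical field names that the released files may not use verbatim, the JSONL variants could be parsed like this:

```python
import json

# Hypothetical triplet record for contrastive retriever training.
# Field names ("query", "positive", "negative") are illustrative assumptions.
triplet_line = json.dumps(
    {"query": "...", "positive": "...", "negative": "..."},
    ensure_ascii=False,  # keep Amharic (Ge'ez script) text readable, not \u-escaped
)

# Hypothetical instruction-tuning record.
# Field names ("prompt", "response") are likewise assumptions.
instruction_line = json.dumps(
    {"prompt": "...", "response": "..."},
    ensure_ascii=False,
)

def read_jsonl(lines):
    """Parse an iterable of JSONL lines into a list of dicts."""
    return [json.loads(line) for line in lines if line.strip()]

records = read_jsonl([triplet_line, instruction_line])
```

In practice the same `read_jsonl` helper would be applied to each released split file, so a loaded triplet can be passed directly to a contrastive loss as (anchor, positive, negative).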