PerCoR: Evaluating Commonsense Reasoning in Persian via Multiple-Choice Sentence Completion

📅 2025-10-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of Persian commonsense-reasoning evaluation resources, this paper introduces PerCoR, the first large-scale Persian commonsense reasoning benchmark, comprising 106K domain-diverse multiple-choice sentence-completion questions. Methodologically, the authors propose a conjunction-based sentence-pair segmentation strategy that preserves contextual coherence, and design DRESS-AF, a generation-free adversarial filtering approach that ranks candidate distractors by embedding similarity to select highly confusable options, raising difficulty while remaining transferable across languages. Human performance reaches 89.0%, OpenAI-o3 achieves 92.18%, and the strongest open-source model, DeepSeek-R1, scores 82.51%, indicating that the benchmark is challenging for models yet reliably solvable by humans. PerCoR fills a critical gap in Persian commonsense-reasoning benchmarks and establishes essential infrastructure for commonsense reasoning research in low-resource languages.

📝 Abstract
We introduce PerCoR (Persian Commonsense Reasoning), the first large-scale Persian benchmark for commonsense reasoning. PerCoR contains 106K multiple-choice sentence-completion problems drawn from more than forty news, cultural, and other web sources. We introduce a novel conjunction-based segmentation strategy to generate coherent sentence-completion pairs, enabling broad topical and structural diversity. To create challenging distractors, we propose DRESS-AF (Distractor Ranking via Embedding Similarity Scoring and Adversarial Filtering), a generation-free adversarial filtering method that selects distractors from the pool of gold continuations while maximising model confusion. Human annotators score 89% on PerCoR, while OpenAI-o3 achieves the highest performance at 92.18%, followed closely by Claude-Sonnet-3.7 (91.17%). The strongest open-source model, DeepSeek-R1, reaches 82.51%, underscoring both the dataset's difficulty and the remaining performance gap in Persian commonsense reasoning. We further show that DRESS-AF transfers to the English HellaSwag benchmark, increasing its difficulty without hurting human solvability. The dataset is available at https://huggingface.co/datasets/MCINext/PerCoR.
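The conjunction-based segmentation idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the conjunction list is a small illustrative sample, and the paper's actual inventory and filtering heuristics are not reproduced here.

```python
# A few illustrative Persian conjunctions ("but", "because", "therefore", ...);
# the paper's actual conjunction inventory is assumed, not reproduced.
CONJUNCTIONS = {"اما", "ولی", "زیرا", "چون", "بنابراین", "سپس"}

def split_on_conjunction(sentence: str):
    """Split a sentence into (context, gold continuation) at the first
    conjunction, keeping the conjunction in the context so the resulting
    sentence-completion prompt reads coherently."""
    tokens = sentence.split()
    # Skip the first and last positions so both halves are non-empty.
    for i in range(1, len(tokens) - 1):
        if tokens[i] in CONJUNCTIONS:
            return " ".join(tokens[: i + 1]), " ".join(tokens[i + 1:])
    return None  # no usable split point; such sentences would be discarded
```

The continuation half then serves as the gold answer, while continuations harvested from other sentences form the candidate distractor pool.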
Problem

Research questions and friction points this paper is trying to address.

Creating the first large-scale Persian benchmark for commonsense reasoning
Developing an adversarial filtering method that yields challenging distractors
Measuring the performance gap of current language models on Persian commonsense reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conjunction-based segmentation strategy for coherent sentence-completion pairs
Generation-free adversarial filtering (DRESS-AF) for distractor selection
Embedding similarity scoring to maximize model confusion
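The ranking step behind DRESS-AF can be sketched roughly as below. This is a simplified assumption-laden sketch: the dict-based pool, function names, and the use of plain cosine similarity over precomputed embeddings are illustrative, and the full adversarial-filtering loop against a solver model is not reproduced.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-9)

def select_distractors(gold_text, gold_vec, pool, k=3):
    """Pick the k continuations from `pool` whose embeddings are most
    similar to the gold continuation, i.e. the most confusable options.
    `pool` maps candidate text -> precomputed embedding vector."""
    scored = sorted(
        ((cosine(gold_vec, vec), text)
         for text, vec in pool.items() if text != gold_text),
        reverse=True,
    )
    return [text for _, text in scored[:k]]
```

Because distractors are drawn from gold continuations of other examples rather than generated, every option is fluent text, which is what makes the method generation-free and easy to transfer to other languages such as English (HellaSwag).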