QU-NLP at QIAS 2025 Shared Task: A Two-Phase LLM Fine-Tuning and Retrieval-Augmented Generation Approach for Islamic Inheritance Reasoning

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of Islamic inheritance law reasoning: comprehending inheritance scenarios, identifying eligible heirs, applying fixed-share inheritance rules, and performing precise computations. We propose a two-stage framework integrating fine-tuning and retrieval-augmented generation (RAG). Building on the mid-scale Arabic model Fanar-1-9B, we combine low-rank adaptation (LoRA) fine-tuning with domain-knowledge-anchored RAG to strengthen rule comprehension and multi-step logical reasoning. On the shared-task benchmark, our approach achieves an overall accuracy of 85.8%, outperforming zero-shot large language models including GPT-4.5, LLaMA, and Mistral; on the advanced reasoning subtask it attains 97.6% accuracy, surpassing Gemini 2.5 and OpenAI o3. To our knowledge, this is the first work to deeply integrate structured domain-specific fine-tuning with retrieval-anchored generation, delivering a lightweight, reproducible, and high-accuracy solution for religious legal reasoning.

📝 Abstract
This paper presents our approach and results for SubTask 1: Islamic Inheritance Reasoning at QIAS 2025, a shared task focused on evaluating Large Language Models (LLMs) in understanding and reasoning within Islamic inheritance knowledge. We fine-tuned the Fanar-1-9B causal language model using Low-Rank Adaptation (LoRA) and integrated it into a Retrieval-Augmented Generation (RAG) pipeline. Our system addresses the complexities of Islamic inheritance law, including comprehending inheritance scenarios, identifying eligible heirs, applying fixed-share rules, and performing precise calculations. Our system achieved an accuracy of 0.858 on the final test set, outperforming competitive models such as GPT-4.5, LLaMA, Fanar, Mistral, and ALLaM, all evaluated with zero-shot prompting. Our results demonstrate that QU-NLP achieves near state-of-the-art accuracy (85.8%), excelling especially on advanced reasoning (97.6%), where it outperforms Gemini 2.5 and OpenAI's o3. This highlights that domain-specific fine-tuning combined with retrieval grounding enables mid-scale Arabic LLMs to surpass frontier models in Islamic inheritance reasoning.
Problem

Research questions and friction points this paper is trying to address.

Addresses Islamic inheritance law complexities and reasoning
Identifies eligible heirs and applies fixed-share rules
Performs precise inheritance-share calculations
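The fixed-share rules the system must apply are fractional by nature, and some heir combinations over-subscribe the estate, triggering the classical 'awl (proportional reduction) adjustment. The paper's actual computation pipeline is not detailed in this summary; the sketch below only illustrates the kind of exact fractional arithmetic such reasoning requires, using a well-known 'awl case (husband with no descendants: 1/2; two full sisters: 2/3).

```python
from fractions import Fraction

def apply_awl(shares):
    """If the fixed shares sum to more than the whole estate,
    scale them down proportionally ('awl) so they sum to exactly 1."""
    total = sum(shares.values())
    if total > 1:
        return {heir: s / total for heir, s in shares.items()}
    return dict(shares)

# Classic 'awl case: 1/2 + 2/3 = 7/6 > 1
shares = {"husband": Fraction(1, 2), "two_sisters": Fraction(2, 3)}
adjusted = apply_awl(shares)
# husband -> 3/7, two_sisters -> 4/7
```

Exact rationals (rather than floats) matter here: the benchmark rewards precise share computation, and 3/7 has no finite decimal representation.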
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned Fanar-1-9B model with LoRA
Integrated Retrieval-Augmented Generation pipeline
Applied domain-specific Arabic LLM fine-tuning
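The summary does not specify the retriever or knowledge base used in the RAG pipeline; as a minimal sketch of the retrieval-then-prompt pattern, the example below uses a hypothetical English-language rule corpus and simple token-overlap scoring (both illustrative stand-ins, not the paper's components).

```python
import re

def tokenize(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, corpus, k=2):
    """Rank passages by token overlap with the query (toy retriever)."""
    q = tokenize(query)
    return sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)[:k]

def build_prompt(query, passages):
    """Assemble a retrieval-grounded prompt for the fine-tuned model."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Illustrative rule snippets (not the paper's actual knowledge base)
corpus = [
    "A wife receives one eighth of the estate when the deceased leaves children.",
    "Two or more daughters share two thirds of the estate as a fixed share.",
    "A mother receives one sixth when the deceased has children.",
]

query = "What share does the wife receive if there are children?"
prompt = build_prompt(query, retrieve(query, corpus))
```

The resulting prompt would then be passed to the LoRA-adapted Fanar-1-9B model, so its answers are grounded in retrieved rule text rather than parametric memory alone.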