Explainable Statute Prediction via Attention-based Model and LLM Prompting

📅 2025-12-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses automated legal provision prediction: given a case description, identifying the relevant statutory provisions (at the clause level) and generating human-interpretable explanations. To jointly improve interpretability and predictive accuracy, the authors propose a dual-path framework that pairs a lightweight supervised attention model (AoS) with zero-shot chain-of-thought prompting of large language models (LLM-CoT). The framework yields traceable, attribution-based explanations and supports automated counterfactual evaluation. Across two popular legal benchmark datasets, the method outperforms existing baselines in predictive performance, and both human evaluation and counterfactual validation confirm substantial improvements in explanation quality, showing that high-accuracy prediction can be combined with high-fidelity, verifiable explanations.
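The "automated counterfactual evaluation" mentioned above can be sketched as an ablation test: remove the sentence the model attends to most and re-score the statute; if the score drops sharply, the attention explanation is faithful. The following is a minimal illustration under assumed details (precomputed embeddings, dot-product attention, sigmoid scoring); function names are illustrative, not the paper's.

```python
import numpy as np

def relevance_score(sent_embs, statute_emb):
    """Attention-weighted case representation scored against one statute.

    sent_embs: (num_sentences, dim) sentence embeddings of the case.
    statute_emb: (dim,) embedding of the candidate statute.
    Returns (relevance score in (0, 1), attention weights over sentences).
    """
    logits = sent_embs @ statute_emb          # sentence-statute similarity
    attn = np.exp(logits - logits.max())      # numerically stable softmax
    attn /= attn.sum()
    case_vec = attn @ sent_embs               # attention-pooled case vector
    score = 1.0 / (1.0 + np.exp(-(case_vec @ statute_emb)))
    return score, attn

def counterfactual_drop(sent_embs, statute_emb):
    """Ablate the top-attended sentence and re-score.

    A large positive drop suggests the attention-based explanation
    is faithful: the highlighted sentence really drove the prediction.
    """
    base, attn = relevance_score(sent_embs, statute_emb)
    keep = np.arange(len(sent_embs)) != attn.argmax()
    ablated, _ = relevance_score(sent_embs[keep], statute_emb)
    return base - ablated
```

This mirrors the standard comprehensiveness-style check for attribution explanations; the paper's exact ablation protocol may differ.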

📝 Abstract
In this paper, we explore the problem of automatic statute prediction, where for a given case description, a subset of relevant statutes is to be predicted. Here, the term "statute" refers to a section, a sub-section, or an article of any specific Act. Addressing this problem would be useful in several applications, such as AI assistants for lawyers and legal question-answering systems. For better user acceptance of such Legal AI systems, we believe the predictions should also be accompanied by human-understandable explanations. We propose two techniques for addressing this problem of statute prediction with explanations -- (i) AoS (Attention-over-Sentences), which uses attention over sentences in a case description to predict the statutes relevant to it, and (ii) LLMPrompt, which prompts an LLM to predict as well as explain the relevance of a given statute. AoS uses smaller language models, specifically sentence transformers, and is trained in a supervised manner, whereas LLMPrompt uses larger language models in a zero-shot manner and explores both standard and Chain-of-Thought (CoT) prompting techniques. Both models produce explanations for their predictions in human-understandable form. We compare the statute prediction performance of the two proposed techniques with each other as well as with a set of competitive baselines, across two popular datasets. We also evaluate the quality of the generated explanations through automated counterfactual analysis as well as through human evaluation.
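The AoS idea described in the abstract can be sketched as attention pooling over precomputed sentence embeddings: each candidate statute attends over the case's sentences, and the attention weights themselves serve as the sentence-level explanation. This is an illustrative sketch, not the paper's exact architecture; embeddings would come from a sentence transformer, and names here are assumed.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_statutes(sent_embs, statute_embs, threshold=0.5):
    """Score every candidate statute against one case.

    sent_embs: (num_sentences, dim) embeddings of the case's sentences.
    statute_embs: (num_statutes, dim) embeddings of candidate statutes.
    Returns (scores, attentions); statutes with score > threshold are
    predicted relevant, and each attention row explains which sentences
    supported that statute.
    """
    scores, attentions = [], []
    for s in statute_embs:
        attn = softmax(sent_embs @ s)       # attention over sentences
        case_vec = attn @ sent_embs         # attention-pooled case vector
        score = 1.0 / (1.0 + np.exp(-(case_vec @ s)))
        scores.append(score)
        attentions.append(attn)
    return np.array(scores), np.array(attentions)
```

In the supervised setting, the embedding projections would be trained against gold statute labels (e.g. with a multi-label binary cross-entropy loss); this sketch only shows the scoring path.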
Problem

Research questions and friction points this paper is trying to address.

Predict relevant statutes from case descriptions automatically.
Generate human-understandable explanations for statute predictions.
Compare attention-based and LLM prompting methods for legal AI.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention-over-Sentences model for statute prediction
LLM prompting for zero-shot prediction and explanation
Generating human-understandable explanations for legal AI
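The LLM prompting contribution above can be illustrated with a zero-shot CoT prompt builder: one statute is paired with the case, and the model is asked to reason step by step before giving a yes/no relevance judgment plus explanation. The wording below is an assumption for illustration; the paper's actual prompt templates are not reproduced here.

```python
def build_cot_prompt(case_description: str, statute_text: str) -> str:
    """Zero-shot chain-of-thought prompt asking an LLM to judge and
    explain the relevance of one candidate statute to a case."""
    return (
        "You are assisting with legal statute prediction.\n\n"
        f"Case description:\n{case_description}\n\n"
        f"Statute:\n{statute_text}\n\n"
        "Is this statute relevant to the case? "
        "Let's think step by step, then answer 'Yes' or 'No' "
        "followed by a brief explanation."
    )
```

A standard (non-CoT) variant would simply drop the "Let's think step by step" instruction; comparing the two is the kind of ablation the abstract describes.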