Toward Systematic Counterfactual Fairness Evaluation of Large Language Models: The CAFFE Framework

πŸ“… 2025-12-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the counterfactual fairness challenges faced by large language models (LLMs) in real-world deployment, this paper proposes CAFFE, the first intent-driven, systematic framework for counterfactual fairness evaluation. CAFFE explicitly models prompt intent, contextual constraints, input perturbations, and configurable fairness thresholds, thereby overcoming limitations of conventional metamorphic testing and enabling broader bias coverage and more reliable identification of unfair behaviors. Methodologically, it integrates non-functional testing principles, automated test case generation, semantic similarity measurement (via BERTScore), and multi-dimensional fairness configuration modeling. Experimental evaluation across three mainstream LLM architectural families demonstrates that CAFFE significantly improves bias detection coverage compared to state-of-the-art approaches, while reducing false positives and enhancing robustness under diverse perturbations.
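The pipeline described above can be sketched roughly as follows. This is a minimal illustration, not CAFFE's actual API: the class and function names are hypothetical, and a simple token-overlap Jaccard score stands in for the BERTScore metric the paper uses.

```python
from dataclasses import dataclass

@dataclass
class FairnessTestCase:
    """A CAFFE-style test case: prompt intent, conversational context,
    counterfactual input variants, and a configurable fairness threshold.
    (Illustrative structure only, not the paper's implementation.)"""
    intent: str            # what the prompt is meant to elicit
    context: str           # conversational context the model is given
    variants: list         # counterfactual prompt variants
    threshold: float = 0.85  # minimum pairwise response similarity

def jaccard_similarity(a: str, b: str) -> float:
    """Token-overlap stand-in for BERTScore (illustrative only)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def is_fair(responses: list, threshold: float) -> bool:
    """Pass if every pair of counterfactual responses is at least
    `threshold`-similar; any pair below it flags potentially unfair behavior."""
    return all(
        jaccard_similarity(responses[i], responses[j]) >= threshold
        for i in range(len(responses))
        for j in range(i + 1, len(responses))
    )

case = FairnessTestCase(
    intent="loan eligibility advice",
    context="applicant asks whether they qualify for a mortgage",
    variants=["Can he get a mortgage?", "Can she get a mortgage?"],
)
# Identical responses to the two counterfactual variants trivially pass:
print(is_fair(["Yes, with a stable income.", "Yes, with a stable income."],
              case.threshold))  # True
```

In a real deployment of this idea, `jaccard_similarity` would be replaced by BERTScore so that paraphrases of the same answer still count as consistent, and only semantic divergence between counterfactual responses falls below the threshold.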

πŸ“ Abstract
Nowadays, Large Language Models (LLMs) are foundational components of modern software systems. As their influence grows, concerns about fairness have become increasingly pressing. Prior work has proposed metamorphic testing to detect fairness issues, applying input transformations to uncover inconsistencies in model behavior. This paper introduces an alternative perspective for testing counterfactual fairness in LLMs, proposing a structured and intent-aware framework named CAFFE (Counterfactual Assessment Framework for Fairness Evaluation). Inspired by traditional non-functional testing, CAFFE (1) formalizes LLM-fairness test cases through explicitly defined components, including prompt intent, conversational context, input variants, expected fairness thresholds, and test environment configuration, (2) assists testers by automatically generating targeted test data, and (3) evaluates model responses using semantic similarity metrics. Our experiments, conducted on three different architectural families of LLMs, demonstrate that CAFFE achieves broader bias coverage and more reliable detection of unfair behavior than existing metamorphic approaches.
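The automated test-data generation in step (2) can be approximated by expanding prompt templates over protected-attribute values. The attribute lexicon and function below are assumptions for illustration; the paper does not publish CAFFE's actual generator.

```python
from itertools import product

# Hypothetical protected-attribute lexicon (illustrative, not CAFFE's).
ATTRIBUTES = {
    "GENDER": ["man", "woman"],
    "RELIGION": ["Christian", "Muslim", "atheist"],
}

def generate_variants(template: str) -> list:
    """Expand a prompt template into counterfactual variants by
    substituting every combination of protected-attribute values
    for the {SLOT} placeholders present in the template."""
    slots = [k for k in ATTRIBUTES if "{" + k + "}" in template]
    variants = []
    for combo in product(*(ATTRIBUTES[s] for s in slots)):
        prompt = template
        for slot, value in zip(slots, combo):
            prompt = prompt.replace("{" + slot + "}", value)
        variants.append(prompt)
    return variants

prompts = generate_variants("Write a performance review for a {GENDER} engineer.")
print(prompts)
# ['Write a performance review for a man engineer.',
#  'Write a performance review for a woman engineer.']
```

Feeding each variant set to the model under test, and then comparing the responses with a semantic similarity metric, yields the counterfactual consistency check the abstract describes.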
Problem

Research questions and friction points this paper is trying to address.

Evaluating counterfactual fairness in Large Language Models
Detecting unfair behavior through a structured testing framework
Generating targeted test data for bias coverage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured fairness test case formalization
Automated targeted test data generation
Semantic similarity-based response evaluation