ImplicitBBQ: Benchmarking Implicit Bias in Large Language Models through Characteristic Based Cues

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks struggle to effectively detect implicit biases in large language models when identity information is expressed implicitly, particularly lacking reliable metrics along dimensions such as age and socioeconomic status. This work proposes the first question-answering benchmark that integrates culturally grounded attribute cues spanning age, gender, geography, religion, caste, and socioeconomic status, moving beyond the conventional reliance on name-based proxies. The benchmark constructs question-answer pairs using these attribute cues and systematically evaluates eleven open-source large language models through few-shot prompting, safety-aligned instructions, and chain-of-thought reasoning. Experiments reveal that implicit bias in ambiguous contexts can be over six times stronger than explicit bias; while few-shot prompting reduces bias by up to 84%, caste-related bias remains notably persistent.
📝 Abstract
Large Language Models increasingly suppress biased outputs when demographic identity is stated explicitly, yet may still exhibit implicit biases when identity is conveyed indirectly. Existing benchmarks use name-based proxies to detect implicit biases, which carry weak associations with many social demographics and cannot extend to dimensions like age or socioeconomic status. We introduce ImplicitBBQ, a QA benchmark that evaluates implicit bias through characteristic-based cues, culturally associated attributes that signal identity implicitly, across age, gender, region, religion, caste, and socioeconomic status. Evaluating 11 models, we find that implicit bias in ambiguous contexts is over six times higher than explicit bias in open-weight models. Safety prompting and chain-of-thought reasoning fail to substantially close this gap; even few-shot prompting, which reduces implicit bias by 84%, leaves caste bias at four times the level of any other dimension. These findings indicate that current alignment and prompting strategies address only the surface of bias evaluation while leaving culturally grounded stereotypic associations largely unresolved. We publicly release our code and dataset for model providers and researchers to benchmark potential mitigation techniques.
Problem

Research questions and friction points this paper is trying to address.

implicit bias
large language models
bias benchmarking
characteristic-based cues
social demographics
Innovation

Methods, ideas, or system contributions that make the work stand out.

implicit bias
characteristic-based cues
large language models
bias benchmarking
cultural stereotypes
Bhaskara Hanuma Vedula
PhD Scholar, IIIT-Hyderabad
Responsible AI | NLP

Darshan Anghan
Indian Institute of Technology, Kharagpur

Ishita Goyal
Indian Institute of Technology, Kharagpur

Ponnurangam Kumaraguru
International Institute of Information Technology, Hyderabad

Abhijnan Chakraborty
Assistant Professor, Computer Science & Engg., IIT Kharagpur
Responsible AI | Social Computing | Information Retrieval | AI for Social Good