InterveneBench: Benchmarking LLMs for Intervention Reasoning and Causal Study Design in Real Social Systems

πŸ“… 2026-03-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the absence of evaluation benchmarks for assessing large language models’ capacity to perform intervention reasoning and causal research design in real-world social systems. We introduce InterveneBench, the first end-to-end benchmark constructed from 744 empirical social science papers, which challenges models to infer policy intervention effects and articulate identification assumptions without access to predefined causal graphs. To enhance model performance on this task, we propose STRIDES, a multi-agent collaborative framework that substantially improves causal study design capabilities. Experimental results demonstrate that state-of-the-art large language models exhibit limited proficiency on InterveneBench, whereas STRIDES significantly outperforms existing approaches.

πŸ“ Abstract
Causal inference in social science relies on end-to-end, intervention-centered research-design reasoning grounded in real-world policy interventions, but current benchmarks fail to evaluate this capability of large language models (LLMs). We present InterveneBench, a benchmark designed to assess such reasoning in realistic social settings. Each instance in InterveneBench is derived from an empirical social science study and requires models to reason about policy interventions and identification assumptions without access to predefined causal graphs or structural equations. InterveneBench comprises 744 peer-reviewed studies across diverse policy domains. Experimental results show that state-of-the-art LLMs struggle under this setting. To address this limitation, we further propose a multi-agent framework, STRIDES. It achieves significant performance improvements over state-of-the-art reasoning models. Our code and data are available at https://github.com/Sii-yuning/STRIDES.
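To make the task format concrete, here is a minimal sketch of what a single InterveneBench instance and its graph-free prompt might look like. The schema and field names (`paper_id`, `policy_domain`, `intervention`, `outcome`, `identification_assumptions`) are illustrative assumptions, not the benchmark's actual data format; see the linked repository for the real schema.

```python
from dataclasses import dataclass, field

@dataclass
class InterveneBenchInstance:
    """Hypothetical shape of one benchmark instance, derived from
    a single empirical social science study."""
    paper_id: str                      # source peer-reviewed study
    policy_domain: str                 # e.g. "education", "public health"
    intervention: str                  # natural-language policy intervention
    outcome: str                       # outcome variable of interest
    identification_assumptions: list[str] = field(default_factory=list)

def build_prompt(inst: InterveneBenchInstance) -> str:
    """Render an instance as a reasoning prompt. Note there is no
    causal graph or structural equation in the input: the model must
    infer the effect and articulate identification assumptions itself."""
    return (
        f"Policy domain: {inst.policy_domain}\n"
        f"Intervention: {inst.intervention}\n"
        f"Outcome: {inst.outcome}\n"
        "Question: What is the likely effect of this intervention on the "
        "outcome, and which identification assumptions must hold?"
    )

example = InterveneBenchInstance(
    paper_id="study-0001",
    policy_domain="education",
    intervention="conditional cash transfer tied to school attendance",
    outcome="secondary-school enrollment rate",
)
print(build_prompt(example))
```

The key design point the abstract emphasizes is visible here: unlike graph-based causal benchmarks, the input carries only the real-world study context, so stating valid identification assumptions is part of the model's output, not the task's input.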
Problem

Research questions and friction points this paper is trying to address.

intervention reasoning
causal inference
large language models
social science
benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intervention Reasoning
Causal Inference
Large Language Models
Benchmark
Multi-agent Framework
πŸ‘₯ Authors
Shaojie Shi (Fudan University)
Zhengyu Shi (Fudan University)
Lingran Zheng (Fudan University)
Xinyu Su (The University of Melbourne)
Anna Xie (Fudan University)
Bohao Lv (Fudan University)
Rui Xu (Fudan University & Shanghai Innovation Institute)
Zijian Chen (Shanghai Jiao Tong University | Shanghai AI Laboratory)
Zhichao Chen (Fudan University)
Guolei Liu (Fudan University)
Naifu Zhang (Fudan University)
Mingjian Dong (Fudan University)
Zhuo Quan (Fudan University)
Bohao Chen (Fudan University)
Teqi Hao (Fudan University)
Yuan Qi (Fudan University)
Yinghui Xu (Research Scientist / Senior Director)
Libo Wu (Fudan University)