🤖 AI Summary
Existing evaluations of LLM-based causal discovery suffer from data leakage: benchmark graphs often overlap with pretraining corpora, inflating performance estimates. Method: We propose a memorization-free evaluation paradigm, constructing a causal graph benchmark exclusively from scientific literature published after the LLM's training cutoff date, so that success cannot be explained by recall. We pair this benchmark with an LLM-statistical hybrid framework that uses LLM-generated causal hypotheses as informative priors for classical constraint-based algorithms such as PC (see the sketch below). Results: On this leakage-free benchmark, standalone LLM methods degrade substantially, whereas the hybrid framework achieves significantly higher accuracy, outperforming every individual baseline. Key contributions: (1) the first causal discovery benchmark explicitly designed to prevent data leakage; (2) a verifiable, interpretable LLM-statistical fusion paradigm; and (3) empirical evidence that LLMs are most reliable as causal prior generators, supporting their deployment in trustworthy scientific discovery.
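To make the prior-as-constraint idea concrete, here is a minimal sketch using the open-source causal-learn implementation of PC, which accepts background knowledge during search. The hard-coded LLM verdicts, variable names, and synthetic data are illustrative assumptions, not the paper's actual prompting or prior-encoding pipeline.

```python
import numpy as np
from causallearn.search.ConstraintBased.PC import pc
from causallearn.utils.PCUtils.BackgroundKnowledge import BackgroundKnowledge
from causallearn.graph.GraphNode import GraphNode

# Stand-ins for LLM verdicts on variable pairs (normally parsed from model
# output). PC names data columns X1..Xn by default, so priors use those names.
llm_required = [("X1", "X2")]   # LLM asserts X1 -> X2 exists
llm_forbidden = [("X3", "X1")]  # LLM asserts X3 does not cause X1

def priors_to_background_knowledge(required, forbidden):
    """Encode LLM edge hypotheses as constraints for constraint-based search."""
    bk = BackgroundKnowledge()
    for cause, effect in required:
        bk.add_required_by_node(GraphNode(cause), GraphNode(effect))
    for cause, effect in forbidden:
        bk.add_forbidden_by_node(GraphNode(cause), GraphNode(effect))
    return bk

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))  # placeholder for real observational data

cg = pc(data, alpha=0.05, indep_test="fisherz",
        background_knowledge=priors_to_background_knowledge(llm_required,
                                                            llm_forbidden))
print(cg.G)  # CPDAG estimated from data, constrained by the LLM priors
```

The design point is that the priors only constrain, rather than replace, the statistical search: the data can still overrule or refine orientations the LLM leaves unspecified.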
📝 Abstract
Recent claims of strong performance by Large Language Models (LLMs) on causal discovery are undermined by a key flaw: many evaluations rely on benchmarks likely included in pretraining corpora. This apparent success has fueled the narrative that LLM-only methods, which ignore observational data, can outperform classical statistical approaches. We challenge this narrative by asking: Do LLMs truly reason about causal structure, and how can we measure this ability without memorization concerns? Can they be trusted for real-world scientific discovery? We argue that realizing LLMs' potential for causal analysis requires two shifts: (P.1) developing robust evaluation protocols based on recent scientific studies to guard against dataset leakage, and (P.2) designing hybrid methods that combine LLM-derived knowledge with data-driven statistics. To address P.1, we encourage evaluating discovery methods on novel, real-world scientific studies. We outline a practical recipe for extracting causal graphs from publications released after an LLM's training cutoff, ensuring relevance and preventing memorization while capturing both established and novel relations. Compared with benchmarks such as BNLearn, where LLMs achieve near-perfect accuracy, they perform far worse on our curated graphs, underscoring the need for statistical grounding. Supporting P.2, we show that using LLM predictions as priors for the classical PC algorithm significantly improves accuracy over both LLM-only and purely statistical methods. We call on the community to adopt science-grounded, leakage-resistant benchmarks and to invest in hybrid causal discovery methods suited to real-world inquiry.
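As a rough illustration of the leakage-resistant recipe in P.1, the sketch below filters curated studies by publication date relative to a model's training cutoff. The `Study` schema, the cutoff value, and the toy entries are hypothetical; in practice the causal edges must still be curated from each paper, which is the labor-intensive step the recipe describes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Study:
    """One curated causal graph from a publication (hypothetical schema)."""
    title: str
    published: date
    variables: list[str] = field(default_factory=list)
    edges: list[tuple[str, str]] = field(default_factory=list)  # (cause, effect)

def leakage_free(studies: list[Study], cutoff: date) -> list[Study]:
    """Keep only studies the model cannot have memorized: published after its cutoff."""
    return [s for s in studies if s.published > cutoff]

# Toy corpus entries for illustration only, not real benchmark items.
corpus = [
    Study("Post-cutoff plant-signaling study", date(2024, 5, 2),
          ["ABA", "SnRK2", "stomatal_closure"],
          [("ABA", "SnRK2"), ("SnRK2", "stomatal_closure")]),
    Study("Classic textbook study", date(2001, 3, 1),
          ["smoking", "cancer"], [("smoking", "cancer")]),
]

# Illustrative cutoff; set this per evaluated model.
benchmark = leakage_free(corpus, cutoff=date(2023, 10, 1))
assert all(s.published > date(2023, 10, 1) for s in benchmark)
```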