Causal Order: The Key to Leveraging Imperfect Experts in Causal Inference

📅 2023-10-23
📈 Citations: 43
Influential: 3
📄 PDF
🤖 AI Summary
Large language models (LLMs) and other imperfect experts often conflate direct and indirect causal effects and introduce spurious cyclic dependencies when inferring causal graphs. Method: We propose **causal order** as a more robust knowledge interface than the causal graph, and design a **cycle-avoiding triplet prompting strategy** that combines multiple rounds of queries over auxiliary variables with a voting-based ensemble to improve ordinal consistency. Contribution/Results: We provide theoretical analysis and empirical validation showing that causal order is more stable and noise-robust than causal graphs. Experiments show that lightweight models (e.g., Phi-3), when equipped with our framework, achieve **higher causal-order accuracy than GPT-4 with pairwise prompting** across multiple real-world benchmarks, significantly reducing cyclic errors and improving robustness and precision in downstream causal discovery and effect estimation.
📝 Abstract
Large Language Models (LLMs) have been used as experts to infer causal graphs, often by repeatedly applying a pairwise prompt that asks about the causal relationship of each variable pair. However, such experts, including human domain experts, cannot distinguish between direct and indirect effects given a pairwise prompt. Therefore, instead of the graph, we propose that causal order be used as a more stable output interface for utilizing expert knowledge. Even when querying a perfect expert with a pairwise prompt, we show that the inferred graph can have significant errors whereas the causal order is always correct. In practice, however, LLMs are imperfect experts and we find that pairwise prompts lead to multiple cycles. Hence, we propose the triplet method, a novel querying strategy that introduces an auxiliary variable for every variable pair and instructs the LLM to avoid cycles within this triplet. It then uses a voting-based ensemble method that results in higher accuracy and fewer cycles while ensuring cost efficiency. Across multiple real-world graphs, such a triplet-based method yields a more accurate order than the pairwise prompt, using both LLMs and human annotators. The triplet method enhances robustness by repeatedly querying an expert with different auxiliary variables, enabling smaller models like Phi-3 and Llama-3 8B Instruct to surpass GPT-4 with pairwise prompting. For practical usage, we show how the expert-provided causal order from the triplet method can be used to reduce error in downstream graph discovery and effect inference tasks.
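The abstract's first claim can be illustrated with a toy chain: even a perfect expert answering pairwise "does X cause Y?" questions will say yes along indirect paths, so the recovered graph gains spurious edges while the causal order stays correct. A minimal sketch (the variable names and the reachability-based "perfect expert" are illustrative assumptions, not the paper's exact setup):

```python
# True chain: smoking -> tar -> cancer (hypothetical example).
true_edges = {("smoking", "tar"), ("tar", "cancer")}
nodes = ["smoking", "tar", "cancer"]

def has_path(x, y, edges):
    """Reachability via depth-first search."""
    stack, seen = [x], set()
    while stack:
        u = stack.pop()
        if u == y:
            return True
        if u not in seen:
            seen.add(u)
            stack.extend(v for (s, v) in edges if s == u)
    return False

def perfect_pairwise_expert(x, y):
    """A perfect expert asked 'does x cause y?' answers yes for ANY
    causal path, direct or indirect -- the pairwise prompt cannot
    tell the two apart."""
    return has_path(x, y, true_edges)

# The graph recovered from pairwise answers gains a spurious edge.
recovered = {(x, y) for x in nodes for y in nodes
             if x != y and perfect_pairwise_expert(x, y)}
print(sorted(recovered - true_edges))  # [('smoking', 'cancer')]

# The causal order, however, is still exactly right: rank each node
# by how many other nodes can reach it in the recovered graph.
order = sorted(nodes, key=lambda v: sum(has_path(u, v, recovered)
                                        for u in nodes if u != v))
print(order)  # ['smoking', 'tar', 'cancer']
```

The extra `smoking -> cancer` edge is a graph error, yet any topological order of the recovered graph remains a valid causal order of the true DAG, which is the sense in which causal order is the more stable output interface.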
Problem

Research questions and friction points this paper is trying to address.

Distinguishing direct vs indirect effects in causal inference
Improving accuracy of causal order from imperfect experts
Reducing cycles in causal graphs using triplet queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses causal order as stable expert output interface
Introduces triplet method to avoid cycle errors
Employs a voting-based ensemble for higher accuracy and cost efficiency
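The triplet-plus-voting idea above can be sketched as follows. This is a simplified illustration, not the paper's exact prompting pipeline: the `query_triplet` callback (which would wrap an LLM prompt that orders three variables without cycles) and the noisy-expert stand-in are hypothetical.

```python
from collections import Counter

def triplet_vote(a, b, variables, query_triplet):
    """For a pair (a, b), each remaining variable c serves once as an
    auxiliary context variable. The expert orders the triplet
    (a, b, c) cycle-free, and the implied a-vs-b orientation counts
    as one vote; the majority orientation wins."""
    votes = Counter()
    for c in variables:
        if c not in (a, b):
            votes[query_triplet(a, b, c)] += 1  # "a->b" or "b->a"
    return votes.most_common(1)[0][0]

# Toy stand-in for an imperfect expert: correct for most auxiliary
# variables, but one context flips its answer (a made-up noise model).
def noisy_expert(a, b, c):
    return "b->a" if c == "confounder" else "a->b"

variables = ["a", "b", "mediator", "confounder", "season"]
print(triplet_vote("a", "b", variables, noisy_expert))  # a->b
```

Repeating the query with different auxiliary variables is what gives the method its robustness: a single misleading context contributes only one vote, so the ensemble recovers the majority orientation.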