🤖 AI Summary
This work addresses key limitations of existing large language model prompting methods—such as Chain-of-Thought—including high token overhead, poor cross-task generalization, and susceptibility to reasoning biases. To overcome these challenges, the authors propose the Adaptive Causal Prompting Framework (ACPS), which integrates structural causal models with a lightweight Sketch-of-Thought prompting mechanism. ACPS adaptively selects between standard front-door and conditional front-door intervention strategies, enabling efficient, debiased reasoning without task-specific retraining. Extensive experiments across multiple reasoning benchmarks and mainstream large language models demonstrate that ACPS consistently outperforms current prompting approaches in accuracy, robustness, and computational efficiency.
📝 Abstract
Despite notable advancements in prompting methods for Large Language Models (LLMs), such as Chain-of-Thought (CoT), existing strategies still suffer from excessive token usage and limited generalisability across diverse reasoning tasks. To address these limitations, we propose the Adaptive Causal Prompting with Sketch-of-Thought (ACPS) framework, which leverages structural causal models to infer the causal effect of a query on its answer and adaptively select an appropriate intervention (i.e., a standard or a conditional front-door adjustment). This design enables generalisable causal reasoning across heterogeneous tasks without task-specific retraining. By replacing verbose CoT with concise Sketch-of-Thought prompts, ACPS significantly reduces token usage and inference cost. Extensive experiments on multiple reasoning benchmarks and LLMs demonstrate that ACPS consistently outperforms existing prompting baselines in accuracy, robustness, and computational efficiency.
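For readers unfamiliar with the interventions named in the abstract, the two adjustments are standard results from causal inference (textbook formulas, not equations taken from this paper; the assignment of $X$ to the query, $M$ to the intermediate reasoning sketch, and $Y$ to the answer is our illustrative reading of the setup). The standard front-door adjustment identifies the effect of $X$ on $Y$ through a mediator $M$:

```latex
P(y \mid do(x)) \;=\; \sum_{m} P(m \mid x) \sum_{x'} P(y \mid m, x')\, P(x')
```

The conditional variant additionally conditions on an observed covariate set $Z$:

```latex
P(y \mid do(x)) \;=\; \sum_{z} P(z) \sum_{m} P(m \mid x, z) \sum_{x'} P(y \mid m, x', z)\, P(x' \mid z)
```

Choosing between the two amounts to deciding whether such covariates must be adjusted for in a given task, which is plausibly what the framework's adaptive selection refers to.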