TRUST: A Decentralized Framework for Auditing Large Language Model Reasoning

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current chain-of-thought (CoT) auditing for large language models faces four critical challenges: poor robustness, limited scalability, insufficient transparency, and privacy leakage. To address these, this paper proposes the first decentralized CoT auditing framework. Our method introduces a consensus-driven distributed auditing architecture, employs a hierarchical directed acyclic graph (DAG) to decompose long reasoning chains, leverages blockchain-based immutable evidence logging, integrates privacy-preserving segmented sharing to minimize disclosure of reasoning traces, and designs an incentive-compatible consensus algorithm. Experimental results demonstrate that the framework efficiently detects reasoning errors across multiple models and tasks, supports parallel auditing, and maintains robustness even under 30% adversarial nodes. It thus achieves a balanced trade-off among security, efficiency, and verifiable trustworthiness.
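The summary's robustness claim (correct verdicts with up to 30% adversarial nodes) matches the classic Byzantine bound: a supermajority quorum above 2/3 cannot be flipped by a coalition of fewer than 1/3 of voters. A minimal sketch of such a vote aggregator, with illustrative names that are not the paper's actual API:

```python
from collections import Counter

def audit_consensus(votes, quorum=2 / 3):
    """Accept a verdict only if a strict supermajority of auditors agree.

    With a >2/3 quorum, a coalition of up to 1/3 of auditors (covering
    the 30% adversarial fraction reported in the paper) cannot override
    an honest majority. `votes` maps auditor id -> "pass" / "fail".
    """
    verdict, count = Counter(votes.values()).most_common(1)[0]
    if count > quorum * len(votes):
        return verdict
    return "undecided"  # quorum not reached; escalate or re-audit

# 7 honest auditors say "pass"; 3 adversarial auditors say "fail" (30%).
votes = {f"h{i}": "pass" for i in range(7)}
votes.update({f"m{i}": "fail" for i in range(3)})
print(audit_consensus(votes))  # -> pass
```

The paper's incentive-compatible consensus algorithm is certainly richer than a flat tally (it weighs stakes and rewards), but the quorum threshold is what the 30% robustness figure rests on.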

📝 Abstract
Large Language Models generate complex reasoning chains that reveal their decision-making, yet verifying the faithfulness and harmlessness of these intermediate steps remains a critical unsolved problem. Existing auditing methods are centralized, opaque, and hard to scale, creating significant risks for deploying proprietary models in high-stakes domains. We identify four core challenges: (1) Robustness: Centralized auditors are single points of failure, prone to bias or attacks. (2) Scalability: Reasoning traces are too long for manual verification. (3) Opacity: Closed auditing undermines public trust. (4) Privacy: Exposing full reasoning risks model theft or distillation. We propose TRUST, a transparent, decentralized auditing framework that overcomes these limitations via: (1) A consensus mechanism among diverse auditors, guaranteeing correctness under up to 30% malicious participants. (2) A hierarchical DAG decomposition of reasoning traces, enabling scalable, parallel auditing. (3) A blockchain ledger that records all verification decisions for public accountability. (4) Privacy-preserving segmentation, sharing only partial reasoning steps to protect proprietary logic. We provide theoretical guarantees for the security and economic incentives of the TRUST framework. Experiments across multiple LLMs (GPT-OSS, DeepSeek-r1, Qwen) and reasoning tasks (math, medical, science, humanities) show TRUST effectively detects reasoning flaws and remains robust against adversarial auditors. Our work pioneers decentralized AI auditing, offering a practical path toward safe and trustworthy LLM deployment.
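The hierarchical decomposition the abstract describes can be pictured as recursively splitting a long reasoning trace into segments, so that leaf segments are short enough to audit independently and in parallel. A minimal sketch under that reading; the paper's actual DAG construction and segment boundaries may differ:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A node in the hierarchy: covers a span of reasoning steps."""
    steps: list
    children: list = field(default_factory=list)

def decompose(trace, fanout=2):
    """Recursively split a reasoning trace into a segment hierarchy.
    Internal nodes summarize their children's spans; leaves are small
    enough for a single auditor to verify."""
    if len(trace) <= fanout:
        return Segment(trace)
    chunk = -(-len(trace) // fanout)  # ceiling division
    node = Segment(trace)
    for i in range(0, len(trace), chunk):
        node.children.append(decompose(trace[i:i + chunk], fanout))
    return node

def leaves(node):
    """Collect leaf segments, the independently auditable units."""
    if not node.children:
        return [node]
    return [leaf for child in node.children for leaf in leaves(child)]
```

Because each leaf covers a disjoint span, the leaves can be dispatched to different auditors simultaneously, which is the scalability argument the abstract makes.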
Problem

Research questions and friction points this paper is trying to address.

Auditing the faithfulness and harmlessness of LLM reasoning steps
Overcoming the limitations of centralized, opaque auditing methods
Addressing robustness, scalability, opacity, and privacy challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized consensus mechanism for auditor verification
Hierarchical DAG decomposition for scalable reasoning analysis
Blockchain ledger for transparent audit accountability
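The ledger contribution amounts to an append-only, hash-chained log of audit verdicts: each record commits to the previous record's hash, so tampering with any entry invalidates every later hash. A simplified stand-in for the paper's blockchain evidence layer (class and field names are illustrative):

```python
import hashlib
import json

class AuditLedger:
    """Append-only hash-chained log of audit verdicts."""

    GENESIS = "0" * 64  # placeholder hash before the first record

    def __init__(self):
        self.chain = []

    def append(self, verdict):
        """Record a verdict, linking it to the previous record's hash."""
        prev = self.chain[-1]["hash"] if self.chain else self.GENESIS
        body = {"verdict": verdict, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.chain.append({**body, "hash": digest})

    def verify(self):
        """Re-derive every hash; any edited record breaks the chain."""
        prev = self.GENESIS
        for rec in self.chain:
            body = {"verdict": rec["verdict"], "prev": rec["prev"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != digest:
                return False
            prev = rec["hash"]
        return True
```

A real deployment would replicate this chain across the auditor network and reach agreement on each block via the consensus mechanism; the hash chaining alone is what makes recorded decisions publicly accountable and tamper-evident.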