A Comprehensive Survey on Trustworthiness in Reasoning with Large Language Models

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Chain-of-thought (CoT) reasoning is widely adopted to enhance large language model (LLM) performance, yet its impact on model trustworthiness—encompassing factual consistency, safety, robustness, fairness, and privacy—remains inadequately understood. Method: We conduct a systematic literature review and cross-method comparative analysis of over 100 studies, synthesizing empirical findings to construct the first multidimensional framework for trustworthy CoT reasoning. Contribution/Results: Our analysis reveals that while CoT improves task accuracy, it concurrently exacerbates critical trust deficits—including heightened safety vulnerabilities, degraded robustness against adversarial perturbations, and increased privacy leakage risks—exposing fundamental fragilities in state-of-the-art reasoning models. We establish the first comprehensive conceptual taxonomy of trustworthy reasoning, accompanied by an open-source resource repository. This work provides both theoretical foundations and actionable guidelines for developing trustworthy AI reasoning systems.

📝 Abstract
The development of Long-CoT reasoning has advanced LLM performance across various tasks, including language understanding, complex problem solving, and code generation. This paradigm enables models to generate intermediate reasoning steps, thereby improving both accuracy and interpretability. However, despite these advancements, a comprehensive understanding of how CoT-based reasoning affects the trustworthiness of language models remains underdeveloped. In this paper, we survey recent work on reasoning models and CoT techniques, focusing on five core dimensions of trustworthy reasoning: truthfulness, safety, robustness, fairness, and privacy. For each aspect, we provide a clear and structured overview of recent studies in chronological order, along with detailed analyses of their methodologies, findings, and limitations. Future research directions are also appended at the end for reference and discussion. Overall, while reasoning techniques hold promise for enhancing model trustworthiness through hallucination mitigation, harmful content detection, and robustness improvement, cutting-edge reasoning models themselves often suffer from comparable or even greater vulnerabilities in safety, robustness, and privacy. By synthesizing these insights, we hope this work serves as a valuable and timely resource for the AI safety community to stay informed on the latest progress in reasoning trustworthiness. A full list of related papers can be found at https://github.com/ybwang119/Awesome-reasoning-safety.
Problem

Research questions and friction points this paper is trying to address.

Evaluating trustworthiness in Chain-of-Thought reasoning models
Assessing safety, robustness, and privacy vulnerabilities in LLMs
Analyzing truthfulness and fairness in reasoning techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Surveying CoT reasoning techniques for trustworthiness
Analyzing five trust dimensions: truthfulness, safety, robustness, fairness, privacy
Providing methodologies, findings, limitations, and future research directions
👥 Authors
Yanbo Wang
School of Artificial Intelligence, University of Chinese Academy of Sciences
Yongcan Yu
Master Student, CASIA
Trustworthy AI · Safety in AI
Jian Liang
Kuaishou Inc.
transfer learning · graph learning
Ran He
NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences