Are Smarter LLMs Safer? Exploring Safety-Reasoning Trade-offs in Prompting and Fine-Tuning

📅 2025-02-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the implicit trade-off between reasoning enhancement and safety degradation in large language models (LLMs). While prompt engineering and fine-tuning improve reasoning, they can simultaneously exacerbate risks such as jailbreaking and hallucination amplification, yet they can also strengthen safety defenses. To address this duality, we propose a "safety-reasoning coupling" analytical framework, presented as the first to systematically disentangle risk origins from defense pathways. Methodologically, we integrate controllable prompt design, supervised and reinforcement fine-tuning, adversarial safety evaluation, attribution analysis, and behavioral trajectory modeling. Extensive multi-benchmark experiments show that reasoning enhancement both intensifies certain safety vulnerabilities and, through structured reasoning chains, significantly improves refusal rates for harmful requests while enhancing decision interpretability. Our core contribution lies in uncovering and formalizing the dynamic coupling mechanism between reasoning and safety, thereby providing theoretical foundations and actionable guidelines for developing LLMs that jointly optimize capability and robustness.

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable success across various NLP benchmarks. However, excelling in complex tasks that require nuanced reasoning and precise decision-making demands more than raw language proficiency: LLMs must reason, i.e., think logically, draw from past experiences, and synthesize information to reach conclusions and take action. To enhance reasoning abilities, approaches such as prompting and fine-tuning have been widely explored. While these methods have led to clear improvements in reasoning, their impact on LLM safety remains less understood. In this work, we investigate the interplay between reasoning and safety in LLMs. We highlight the latent safety risks that arise as reasoning capabilities improve, shedding light on previously overlooked vulnerabilities. At the same time, we explore how reasoning itself can be leveraged to enhance safety, uncovering potential mitigation strategies. By examining both the risks and opportunities in reasoning-driven LLM safety, our study provides valuable insights for developing models that are not only more capable but also more trustworthy in real-world deployments.

Problem

Research questions and friction points this paper is trying to address.

Explores safety-reasoning trade-offs introduced by prompting and fine-tuning in LLMs
Investigates the interplay between reasoning capability and safety in language models
Uncovers latent safety risks and corresponding mitigation strategies in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhancing LLM reasoning via prompt engineering
Fine-tuning strategies that navigate safety-reasoning trade-offs
Leveraging structured reasoning to mitigate safety risks