Context Reasoner: Incentivizing Reasoning Capability for Contextualized Privacy and Safety Compliance via Reinforcement Learning

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit weak contextual reasoning in safety and privacy scenarios: they over-rely on superficial sensitive-term matching and struggle to satisfy stringent regulatory requirements such as GDPR, the EU AI Act, and HIPAA. Method: This work is the first to systematically introduce Contextual Integrity (CI) theory into LLM safety alignment, proposing a contextualized compliance modeling framework. It employs a rule-based reward with Proximal Policy Optimization (PPO), combining multi-regulatory reward shaping with context-aware policy fine-tuning to jointly optimize regulatory compliance and reasoning capability. Results: Experiments show a 17.64% accuracy improvement on safety/privacy benchmarks; OpenThinker-7B additionally gains 2.05% on MMLU and 8.98% on LegalBench, indicating that the approach strengthens general reasoning while enforcing regulatory adherence.
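The paper does not spell out its reward rules here, but a rule-based reward for compliance RL typically scores a sampled response on (a) whether it follows the required reasoning format and (b) whether its final compliance verdict matches a gold label. A minimal hypothetical sketch (the `<think>` tag convention and the 0.5/1.0 weights are illustrative assumptions, not the paper's actual rules):

```python
def rule_based_reward(response: str, gold_label: str) -> float:
    """Hypothetical rule-based reward for compliance RL.

    Combines a format rule (reasoning wrapped in <think>...</think>)
    with an outcome rule (final verdict matches the gold label).
    Weights are illustrative, not taken from the paper.
    """
    reward = 0.0
    # Format rule: the response must contain an explicit reasoning trace.
    if "<think>" in response and "</think>" in response:
        reward += 0.5
    # Outcome rule: the text after the reasoning trace must state
    # the correct compliance verdict (e.g. "compliant" / "non-compliant").
    final_answer = response.split("</think>")[-1].strip().lower()
    if gold_label.lower() in final_answer:
        reward += 1.0
    return reward
```

In a PPO loop such a function would replace a learned reward model: each sampled rollout is scored by the rules, and the scalar reward drives the policy update, which is what makes the scheme cheap and hard to reward-hack compared to a neural judge.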

📝 Abstract
While Large Language Models (LLMs) exhibit remarkable capabilities, they also introduce significant safety and privacy risks. Current mitigation strategies often fail to preserve contextual reasoning capabilities in risky scenarios. Instead, they rely heavily on sensitive pattern matching to protect LLMs, which limits their scope. Furthermore, they overlook established safety and privacy standards, leading to systemic risks for legal compliance. To address these gaps, we formulate safety and privacy issues into contextualized compliance problems following the Contextual Integrity (CI) theory. Under the CI framework, we align our model with three critical regulatory standards: GDPR, EU AI Act, and HIPAA. Specifically, we employ reinforcement learning (RL) with a rule-based reward to incentivize contextual reasoning capabilities while enhancing compliance with safety and privacy norms. Through extensive experiments, we demonstrate that our method not only significantly enhances legal compliance (achieving a +17.64% accuracy improvement in safety/privacy benchmarks) but also further improves general reasoning capability. For OpenThinker-7B, a strong reasoning model that significantly outperforms its base model Qwen2.5-7B-Instruct across diverse subjects, our method enhances its general reasoning capabilities, with +2.05% and +8.98% accuracy improvements on the MMLU and LegalBench benchmarks, respectively.
Problem

Research questions and friction points this paper is trying to address.

Addressing safety and privacy risks in LLMs while preserving contextual reasoning
Aligning model compliance with GDPR, EU AI Act, and HIPAA standards
Enhancing legal compliance and reasoning via reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning for contextualized compliance
Aligns with GDPR, EU AI Act, HIPAA
Improves reasoning and legal compliance