AI Summary
This work addresses a critical gap in existing large language models (LLMs): their lack of grounding in real-world regulatory frameworks, which undermines their safety and compliance capabilities. To bridge this gap, we present the first comprehensive, multi-domain safety and compliance dataset, systematically constructed to encompass 74 regulations, 12,985 structured rules, and 106,009 real-world cases spanning key domains including artificial intelligence, finance, healthcare, education, and human rights. Leveraging a web-search-based agent framework, we automatically collect and structure heterogeneous regulatory texts and their corresponding real-life instances from authoritative sources, ensuring strong alignment between rules and cases. Experimental validation confirms the dataset's internal consistency, while large-scale benchmarking reveals fundamental limitations of current LLMs in regulatory reasoning and points toward actionable directions for improvement.
Abstract
Ensuring the safety and compliance of large language models (LLMs) is of paramount importance. However, existing LLM safety datasets often rely on ad-hoc taxonomies for data generation and suffer from a significant shortage of the rule-grounded, real-world cases that are essential for robustly protecting LLMs. In this work, we address this critical gap by constructing a comprehensive safety dataset from a compliance perspective. Using a powerful web-searching agent, we collect OmniCompliance-100K, a rule-grounded, real-world case dataset sourced from multi-domain authoritative references. The dataset spans 74 regulations and policies across a wide range of domains, including security and privacy regulations, content safety and user data privacy policies from leading AI companies and social media platforms, financial security requirements, medical device risk management standards, educational integrity guidelines, and protections of fundamental human rights. In total, our dataset contains 12,985 distinct rules and 106,009 associated real-world compliance cases. Our analysis confirms a strong alignment between the rules and their corresponding cases. We further conduct extensive benchmarking experiments to evaluate the safety and compliance capabilities of advanced LLMs across different model scales. Our experiments reveal several notable findings that offer valuable insights for future LLM safety research.