🤖 AI Summary
This study addresses two critical limitations in AI safety evaluation: incomplete benchmark coverage and ambiguous semantic overlap across safety benchmarks. We propose a "semantic orthogonality" quantification framework, the first systematic analysis of coverage disparities and structural shifts across five open-source safety benchmarks along six core harm dimensions. Leveraging UMAP dimensionality reduction, K-means clustering (silhouette score: 0.470), and multi-benchmark semantic contrast modeling, we identify pronounced domain preferences (e.g., GretelAI favors privacy harms; WildGuardMix emphasizes self-harm) and data biases (e.g., imbalanced prompt-length distributions). Results reveal severe class imbalance across the six harm categories and high inter-benchmark semantic orthogonality, indicating that superficially similar benchmarks exhibit substantial coverage gaps. Our work delivers a reproducible, interpretable diagnostic tool for assessing benchmark coverage in AI safety evaluation, establishing a methodological foundation for more comprehensive, transparent, and targeted safety evaluation datasets.
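The pipeline described above can be sketched in a few lines. The following is a minimal sketch under stated assumptions: the embedding model ("all-MiniLM-L6-v2"), k = 6 clusters (matching the six harm categories), and computing the silhouette score on the UMAP-reduced space are our illustrative choices, not details confirmed by the paper.

```python
# Minimal sketch of an embed -> reduce -> cluster -> score pipeline.
# Assumptions (ours, not the paper's): sentence-transformers embeddings,
# k = 6 clusters, and silhouette computed on the UMAP-reduced space.
import umap
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_prompts(prompts: list[str], n_clusters: int = 6, seed: int = 42):
    # Embed each prompt into a dense semantic vector.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = encoder.encode(prompts, show_progress_bar=False)

    # Reduce dimensionality with UMAP before clustering.
    reduced = umap.UMAP(n_components=2, random_state=seed).fit_transform(embeddings)

    # K-means clustering on the reduced embeddings.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(reduced)

    # Silhouette score quantifies how well-separated the clusters are
    # (the summary above reports 0.470 for the paper's clustering).
    return labels, silhouette_score(reduced, labels)
```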
📝 Abstract
Various AI safety datasets have been developed to measure LLMs against evolving interpretations of harm. Our evaluation of five recently published open-source safety benchmarks reveals distinct semantic clusters using UMAP dimensionality reduction and K-means clustering (silhouette score: 0.470). We identify six primary harm categories with varying benchmark representation: GretelAI, for example, focuses heavily on privacy concerns, while WildGuardMix emphasizes self-harm scenarios. Significant differences in prompt-length distributions suggest confounds in data collection and in interpretations of harm, while also offering useful context. Our analysis quantifies semantic orthogonality among AI safety benchmarks, making coverage gaps transparent despite topical similarity. This quantitative framework enables more targeted development of datasets that comprehensively address the evolving landscape of harms in AI use, however harm comes to be defined in the future.
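For illustration, one simple proxy for this kind of orthogonality measurement (our own sketch, not necessarily the metric used in the paper) is the pairwise cosine similarity between benchmark centroids in embedding space:

```python
# Illustrative centroid-based orthogonality proxy (an assumption, not
# necessarily the paper's exact metric).
import numpy as np

def benchmark_orthogonality(embeddings_by_benchmark: dict[str, np.ndarray]):
    """Pairwise semantic orthogonality between benchmarks.

    embeddings_by_benchmark maps a benchmark name to an (n, d) array of
    prompt embeddings. Returns the benchmark names and a (B, B) matrix of
    1 - cosine similarity between benchmark centroids.
    """
    names = list(embeddings_by_benchmark)
    centroids = np.stack([embeddings_by_benchmark[n].mean(axis=0) for n in names])
    # Normalize centroids so the dot product equals cosine similarity.
    unit = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return names, 1.0 - unit @ unit.T
```

Under this proxy, off-diagonal values near 1 would indicate benchmarks occupying nearly orthogonal semantic regions despite superficially similar scopes, while values near 0 would indicate heavy overlap.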