Surfacing Semantic Orthogonality Across Model Safety Benchmarks: A Multidimensional Analysis

📅 2025-05-20
🏛️ Advanced Natural Language Processing 2025
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses two critical limitations in AI safety evaluation: incomplete benchmark coverage and ambiguous semantic overlap across safety benchmarks. We propose a novel “semantic orthogonality” quantification framework—the first systematic analysis of coverage disparities and structural shifts across five open-source safety benchmarks along six core harm dimensions. Leveraging UMAP dimensionality reduction, K-means clustering (silhouette score: 0.470), and multi-benchmark semantic contrast modeling, we identify pronounced domain preferences (e.g., GretelAI favors privacy harms; WildGuardMix emphasizes self-harm) and data biases (e.g., imbalanced prompt-length distributions). Results reveal severe class imbalance across the six harm categories and high inter-benchmark semantic orthogonality—indicating that superficially similar benchmarks exhibit substantial coverage gaps. Our work delivers a reproducible, interpretable diagnostic tool for assessing benchmark coverage in AI safety evaluation, establishing a methodological foundation for developing more comprehensive, transparent, and targeted safety evaluation datasets.

📝 Abstract
Various AI safety datasets have been developed to measure LLMs against evolving interpretations of harm. Our evaluation of five recently published open-source safety benchmarks reveals distinct semantic clusters using UMAP dimensionality reduction and K-means clustering (silhouette score: 0.470). We identify six primary harm categories with varying benchmark representation. GretelAI, for example, focuses heavily on privacy concerns, while WildGuardMix emphasizes self-harm scenarios. Significant differences in prompt-length distributions suggest confounds in data collection and interpretations of harm, while also offering possible context. Our analysis quantifies orthogonality among AI safety benchmarks, providing transparency into coverage gaps despite topical similarities. This quantitative framework for analyzing semantic orthogonality enables more targeted development of datasets that comprehensively address the evolving landscape of harms in AI use, however harm comes to be defined in the future.
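The pipeline described above (embed prompts, reduce dimensionality, cluster, score separation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: TF-IDF and PCA stand in for the paper's semantic embeddings and UMAP (which needs the external `umap-learn` package), and the prompts below are invented examples of the privacy-harm and self-harm categories, not rows from the actual benchmarks.

```python
# Sketch of a benchmark-clustering pipeline: embed safety prompts,
# reduce dimensionality, cluster with K-means, and score separation
# with the silhouette coefficient. TF-IDF + PCA are stand-ins for the
# paper's sentence embeddings + UMAP; all prompts here are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

prompts = [
    # hypothetical privacy-harm prompts (GretelAI-style)
    "how can I find someone's home address from their name",
    "list ways to access another person's private medical records",
    "extract personal phone numbers from a leaked database",
    # hypothetical self-harm prompts (WildGuardMix-style)
    "describe methods people use to hurt themselves",
    "what household items are dangerous for self injury",
    "explain how someone might hide self harm from family",
]

# 1. Embed prompts (TF-IDF here; the paper uses semantic embeddings).
X = TfidfVectorizer().fit_transform(prompts).toarray()

# 2. Reduce dimensionality (PCA here; the paper uses UMAP).
X_2d = PCA(n_components=2, random_state=0).fit_transform(X)

# 3. Cluster with K-means (the paper finds six harm categories;
#    two suffice for this toy corpus).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_2d)

# 4. Score cluster separation; the paper reports silhouette = 0.470.
score = silhouette_score(X_2d, labels)
print(f"silhouette score: {score:.3f}")
```

On a real benchmark corpus the cluster count would be tuned (the paper settles on six harm categories) and the silhouette score compared across candidate values of k.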
Problem

Research questions and friction points this paper is trying to address.

Analyzing semantic differences among AI safety benchmarks
Identifying gaps in harm coverage across safety datasets
Quantifying benchmark orthogonality to improve dataset development
Innovation

Methods, ideas, or system contributions that make the work stand out.

UMAP and K-means for semantic clustering
Identify six primary harm categories
Quantify benchmark orthogonality for transparency
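The orthogonality quantification in the last bullet can be illustrated as a pairwise distance between benchmark centroids in embedding space. The paper's exact metric is not stated here, so cosine distance between mean embeddings is an assumption in this sketch, as are the toy vectors standing in for real benchmark embeddings.

```python
# Sketch: quantify semantic orthogonality between two benchmarks as the
# cosine distance between their mean embedding vectors (0 = identical
# direction, 1 = orthogonal). The metric choice and the toy data are
# assumptions for illustration, not the paper's published formulation.
import numpy as np

def benchmark_orthogonality(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine distance between the centroids of two embedding sets."""
    ca, cb = emb_a.mean(axis=0), emb_b.mean(axis=0)
    cos = float(ca @ cb / (np.linalg.norm(ca) * np.linalg.norm(cb)))
    return 1.0 - cos

# Toy embeddings: benchmark A concentrates near one axis, B near another,
# mimicking two benchmarks that cover disjoint harm regions.
rng = np.random.default_rng(0)
bench_a = rng.normal(loc=[1.0, 0.0], scale=0.05, size=(50, 2))
bench_b = rng.normal(loc=[0.0, 1.0], scale=0.05, size=(50, 2))

print(f"orthogonality(A, B) = {benchmark_orthogonality(bench_a, bench_b):.3f}")
print(f"orthogonality(A, A) = {benchmark_orthogonality(bench_a, bench_a):.3f}")
```

A score near 1 for superficially similar benchmarks is exactly the coverage-gap signal the paper surfaces: topical overlap in name does not imply semantic overlap in embedding space.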
Jonathan Bennion
The Objective AI, USA
Shaona Ghosh
Nvidia, USA
Mantek Singh
Google, USA
Nouha Dziri
Allen Institute for AI (Ai2)
Artificial Intelligence · Natural Language Processing