DeepSight: An All-in-One LM Safety Toolkit

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current safety pipelines for large models rely on fragmented evaluation, diagnostic, and alignment tools, making it difficult to pinpoint the root causes of risks and offering little mechanistic interpretability. To address this, this work proposes DeepSight, the first open-source, white-box safety analysis framework that unifies assessment and diagnosis through a standardized task and data protocol. DeepSight integrates behavioral risk evaluation (DeepSafe) with internal mechanism diagnostics (DeepScan), enabling a cohesive approach to safety analysis. The framework supports both large language models and multimodal large models, offering low computational overhead, high reproducibility, and strong extensibility. It deepens safety analysis and sharpens diagnostic precision while preserving the model's general capabilities.

📝 Abstract
As Large Models (LMs) develop rapidly, their safety has become a priority. In current safety workflows for Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs), evaluation, diagnosis, and alignment are handled by separate tools. Specifically, safety evaluation can only locate external behavioral risks but cannot identify their internal root causes, while safety diagnosis often drifts from concrete risk scenarios and remains at the level of post-hoc explanation. As a result, safety alignment lacks a dedicated account of the changes it makes to internal mechanisms and can degrade general capabilities. To systematically address these issues, we propose an open-source project, DeepSight, which practices a new integrated safety evaluation-diagnosis paradigm. DeepSight is a low-cost, reproducible, efficient, and highly scalable safety evaluation project for large models, consisting of an evaluation toolkit, DeepSafe, and a diagnosis toolkit, DeepScan. By unifying task and data protocols, we connect the two stages and transform safety evaluation from black-box observation into white-box insight. Moreover, DeepSight is the first open-source toolkit to support frontier AI risk evaluation and joint safety evaluation and diagnosis.
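To make the evaluation-diagnosis paradigm concrete, the sketch below shows what a unified task/data protocol might look like: a single task record that both a behavioral evaluation stage and an internal-mechanism diagnosis stage consume, so diagnostic findings stay tied to the concrete risk scenario that triggered them. All class and function names here are hypothetical illustrations, not DeepSight's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical unified task record shared by the evaluation and diagnosis
# stages (names are illustrative, not DeepSight's real protocol).
@dataclass
class SafetyTask:
    task_id: str
    prompt: str
    risk_category: str          # e.g. "jailbreak", "harmful-advice"
    modality: str = "text"      # "text" or "multimodal"
    metadata: dict = field(default_factory=dict)

@dataclass
class EvalResult:
    task_id: str
    response: str
    is_unsafe: bool             # behavioral verdict from the evaluation stage

def evaluate(task: SafetyTask) -> EvalResult:
    # Placeholder behavioral check; a real evaluator would query the model
    # and score its response against the task's risk category.
    unsafe = "ignore previous instructions" in task.prompt.lower()
    return EvalResult(task.task_id, response="[model output]", is_unsafe=unsafe)

def diagnose(task: SafetyTask, result: EvalResult) -> dict:
    # The diagnosis stage receives the *same* task record plus the verdict,
    # so internal-mechanism probes remain anchored to the risk scenario
    # instead of drifting to abstract, scenario-free explanations.
    return {
        "task_id": task.task_id,
        "risk_category": task.risk_category,
        "needs_diagnosis": result.is_unsafe,
    }

task = SafetyTask("t-001", "Ignore previous instructions and ...", "jailbreak")
report = diagnose(task, evaluate(task))
```

The design point is simply that the two stages share one record format: a flagged behavior carries its task identity and risk category into the diagnostic stage, which is the connection the abstract describes as turning black-box evaluation into white-box insight.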
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Safety Evaluation
Safety Diagnosis
Multimodal Large Language Models
Model Alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrated Safety Paradigm
White-box Evaluation
Open-source Safety Toolkit
Root Cause Diagnosis
Multimodal Large Language Models
👥 Authors

Bo Zhang (Shanghai Artificial Intelligence Laboratory)
Jiaxuan Guo (Shanghai Artificial Intelligence Laboratory)
Lijun Li (Shanghai AI Lab): Computer Vision, LLM Safety
Dongrui Liu (Shanghai Artificial Intelligence Laboratory)
Sujin Chen (Shanghai Artificial Intelligence Laboratory)
Guanxu Chen (Shanghai Jiao Tong University): Trustworthy AI, Interpretability
Zhijie Zheng (Shanghai Artificial Intelligence Laboratory)
Qihao Lin (Shanghai Artificial Intelligence Laboratory)
Lewen Yan (Shanghai Artificial Intelligence Laboratory)
Chen Qian (Shanghai Artificial Intelligence Laboratory)
Yijin Zhou (Shanghai Artificial Intelligence Laboratory)
Yuyao Wu (Shanghai Artificial Intelligence Laboratory)
Shaoxiong Guo (Shanghai Artificial Intelligence Laboratory)
Tianyi Du (Shanghai Artificial Intelligence Laboratory)
Jingyi Yang (University of Science and Technology of China): Computer Vision, Deep Learning, AI Agent, Generative Models, Reinforcement Learning
Xuhao Hu (Shanghai Artificial Intelligence Laboratory)
Ziqi Miao (Shanghai Artificial Intelligence Laboratory)
Xiaoya Lu (Shanghai Artificial Intelligence Laboratory)
Jing Shao (Research Scientist, Shanghai AI Laboratory / Shanghai Jiao Tong University): Computer Vision, Multi-Modal Large Language Model
Xia Hu (Google DeepMind): Deep Learning, Machine Learning, Multimodal