Scholar
Yida Lu
Google Scholar ID: n5GuEDAAAAAJ
Tsinghua University, CoAI Group
NLP
AI Safety & Alignment
Citations & Impact (all-time)
Citations: 139
H-index: 6
i10-index: 4
Publications: 11
Co-authors: 11
Contact
No contact links provided.
Publications
9 items
Survive at All Costs: Exploring LLM's Risky Behaviors under Survival Pressure
2026 · Cited 0
The Missing Half: Unveiling Training-time Implicit Safety Risks Beyond Deployment
2026 · Cited 0
The Side Effects of Being Smart: Safety Risks in MLLMs' Multi-Image Reasoning
2026 · Cited 0
ShieldVLM: Safeguarding the Multimodal Implicit Toxicity via Deliberative Reasoning with LVLMs
2025 · Cited 0
VPO: Aligning Text-to-Video Generation Models with Prompt Optimization
2025 · Cited 0
LongSafety: Evaluating Long-Context Safety of Large Language Models
2025 · Cited 0
AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement
2025 · Cited 0
Agent-SafetyBench: Evaluating the Safety of LLM Agents
arXiv.org · 2024 · Cited 10
Resume (English only)
Co-authors
11 total
Hongning Wang
Associate Professor, Department of Computer Science and Technology, Tsinghua University
Minlie Huang
Tsinghua University
Zhexin Zhang
Tsinghua University, CoAI Group
Shiyao Cui
Tsinghua University
Xiaotao Gu
Zhipu AI
Jiale Cheng
PhD student, Tsinghua University
Junxiao Yang
Tsinghua University
Xiao Liu
Tsinghua University