Scholar
Yongcan Yu
Google Scholar ID: 9UTMYbkAAAAJ
Master's Student, CASIA
Research interests: Trustworthy AI, Safety in AI
Google Scholar
Citations & Impact (all-time)
Citations: 60
H-index: 3
i10-index: 3
Publications: 5
Co-authors: 3
Contact
No contact links provided.
Publications (7 items)
- Mitigating the Safety-utility Trade-off in LLM Alignment via Adaptive Safe Context Learning (2026), cited 0
- Do MLLMs Really Understand Space? A Mathematical Reasoning Evaluation (2026), cited 0
- One Size, Many Fits: Aligning Diverse Group-Wise Click Preferences in Large-Scale Advertising Image Generation (2026), cited 1
- Reassessing the Role of Supervised Fine-Tuning: An Empirical Study in VLM Reasoning (2025), cited 0
- Cooperative Pseudo Labeling for Unsupervised Federated Classification (2025), cited 0
- A Comprehensive Survey on Trustworthiness in Reasoning with Large Language Models (2025), cited 0
- Test-Time Immunization: A Universal Defense Framework Against Jailbreaks for (Multimodal) Large Language Models (2025), cited 0
Resume (English only)
Co-authors (3 total)
- Jian Liang (梁坚), NLPR, Institute of Automation, Chinese Academy of Sciences
- Lijun Sheng, University of Science and Technology of China
- Co-author 3