Didi Zhu
Google Scholar ID: gthqIqIAAAAJ
Imperial College London
Research interests: Multi-Modal LLMs · Out-of-Distribution Generalization
Homepage
Google Scholar
Citations & Impact (all-time)
Citations: 352
H-index: 11
i10-index: 15
Publications: 20
Co-authors: 5
Contact
Twitter
GitHub
Publications (12 items)
Watch Wider and Think Deeper: Collaborative Cross-modal Chain-of-Thought for Complex Visual Reasoning · arXiv.org · 2026 · Cited: 0
OmniEduBench: A Comprehensive Chinese Benchmark for Evaluating Large Language Models in Education · 2025 · Cited: 0
Noise Projection: Closing the Prompt-Agnostic Gap Behind Text-to-Image Misalignment in Diffusion Models · 2025 · Cited: 0
FedEve: On Bridging the Client Drift and Period Drift for Cross-device Federated Learning · 2025 · Cited: 0
Will LLMs Scaling Hit the Wall? Breaking Barriers via Distributed Resources on Massive Edge Devices · 2025 · Cited: 0
Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model · 2025 · Cited: 0
Generative Artificial Intelligence in Robotic Manipulation: A Survey · 2025 · Cited: 0
Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and Harmlessness of Large Language Model via Model Merging · 2025 · Cited: 0
Co-authors (5 total)
Chao Wu · Zhejiang University
Kun Kuang · Zhejiang University
Fei Wu · Professor of Computer Science, Zhejiang University
Stefanos Zafeiriou · Professor, Imperial College London
Jiankang Deng · Imperial College London