Publications
- Computer-use Agents @ ICML 2025: Universal Retrieval for Multimodal Trajectory Modeling
- VLM2Vec-V2: Advancing Multimodal Embedding for Videos, Images, and Visual Documents
- StructEval: Benchmarking LLMs’ Capabilities to Generate Structural Outputs
- ICLR 2025: VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks
- MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks
- ICASSP 2025: Knowledge Enhanced Multi-Domain Recommendations in an AI Assistant Application
- LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs
- NeurIPS 2024: MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark
- EMNLP 2024: VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation
- EMNLP 2024 (Findings): Semi-Supervised Reward Modeling via Iterative Self-Training
- NAACL 2024 (Findings): RecMind: Large Language Model Powered Agent For Recommendation
- EMNLP 2023 (Industry): Graph Meets LLM: A Novel Approach to Collaborative Filtering for Robust Conversational Understanding
- Gen-IR@SIGIR 2023: PALR: Personalization Aware LLMs for Recommendation
Research Experience
Current: Ph.D. student in the NLP Lab at UCSB. Previously a full-time applied scientist at Amazon Alexa AI and Amazon AGI.
Education
Ph.D.: University of California, Santa Barbara, Computer Science, Advisor: Prof. Shiyu Chang; Master's Degree: Johns Hopkins University; Bachelor's Degree: Nanjing University.
Background
Research Interests: Natural language processing, information retrieval, and speech recognition. Background: Ziyan Jiang is a Ph.D. student in Computer Science at the University of California, Santa Barbara (UCSB), advised by Prof. Shiyu Chang. He earned his bachelor's degree from Nanjing University and his master's degree from Johns Hopkins University, and spent several years as a full-time applied scientist at Amazon Alexa AI and Amazon AGI.
Miscellany
My personal homepage has moved to this site; the old one is no longer maintained.