Yong Liu

Google Scholar ID: 2ejuK8UAAAAJ
National University of Singapore
Machine Learning · Reinforcement Learning
Citations & Impact (all-time)
  • Citations: 1,087
  • h-index: 11
  • i10-index: 11
  • Publications: 20
  • Co-authors: 0
Academic Achievements
  • NeurIPS 2025: 'Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning'
  • ICML 2025: Two papers including 'SeedLoRA: A Fusion Approach to Efficient LLM Fine-Tuning' and 'MERIT: Maximum-normalized Element-wise Ratio for Language Model Large-batch Training'
  • WWW 2024: One paper accepted
  • NeurIPS 2022: 'Random Sharpness-Aware Minimization'
  • CVPR 2022: 'Towards Efficient and Scalable Sharpness-Aware Minimization' (the base SAM update that both SAM papers build on is sketched after this list)
  • ICLR 2022: 'Concurrent Adversarial Learning for Large-Batch Training'
  • AAAI 2022: Contributed to 'Go Wider Instead of Deeper'
  • ICASSP 2021: Published work on a quantitative metric for privacy leakage in federated learning
  • AAAI 2020: Introduced a novel game abstraction method using graph attention neural networks
  • IJCAI 2019: Proposed value function transfer for deep multi-agent reinforcement learning based on N-step returns
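For context on the two sharpness-aware minimization papers above, here is a minimal NumPy sketch of the original SAM update rule (Foret et al., 2021) that they extend: ascend to the locally sharpest point within an L2 ball of radius rho, then descend using the gradient taken there. The function sam_step, grad_fn, and the hyperparameter values are illustrative assumptions, not the papers' own (efficient/randomized) methods.

    import numpy as np

    def sam_step(w, grad_fn, lr=0.1, rho=0.05):
        # Illustrative sketch of the base SAM rule, not the papers' variants.
        g = grad_fn(w)
        eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation within the rho-ball
        g_sharp = grad_fn(w + eps)                   # gradient evaluated at the perturbed weights
        return w - lr * g_sharp                      # descend using the "sharp" gradient

    # Toy usage: minimize f(w) = (w - 3)^2, whose gradient is 2(w - 3)
    w = np.array([0.0])
    for _ in range(50):
        w = sam_step(w, lambda x: 2.0 * (x - 3.0))
    print(w)  # hovers near 3.0 (a small residual oscillation of order rho remains)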
Background
  • Research interests include Large-Batch Training, Multi-Agent Systems, Reinforcement Learning, and Transfer Learning
  • Focuses on large-batch training across large-scale distributed systems to accelerate deep neural network training
  • Works on simplifying the learning process in multi-agent systems, e.g., through game abstraction
  • Studies algorithmic frameworks of reinforcement learning and their applications in multi-agent settings (an n-step return sketch follows this list)
  • Explores transfer learning in multi-agent systems, especially across environments with different numbers of agents
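As background for the N-step returns mentioned in the IJCAI 2019 entry, below is a minimal sketch of the textbook n-step return, G_t = sum_{k=0}^{n-1} gamma^k * r_{t+k} + gamma^n * V(s_{t+n}). The function name and its inputs are illustrative assumptions, not the paper's value-function-transfer mechanism.

    def n_step_returns(rewards, values, gamma=0.99, n=3):
        # rewards: r_0 .. r_{T-1} from one trajectory
        # values:  V(s_0) .. V(s_T), one longer than rewards
        #          (V(s_T) should be 0 if s_T is terminal)
        T = len(rewards)
        returns = []
        for t in range(T):
            horizon = min(n, T - t)  # truncate near the end of the trajectory
            g = sum(gamma**k * rewards[t + k] for k in range(horizon))
            g += gamma**horizon * values[t + horizon]  # bootstrap from the value estimate
            returns.append(g)
        return returns

    # Example: 5-step trajectory, constant reward 1, zero value baseline
    print(n_step_returns([1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0], gamma=0.9, n=3))
    # -> [2.71, 2.71, 2.71, 1.9, 1.0]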
Co-authors
None listed.