Andrew Bai
Google Scholar ID: iCCjJwcAAAAJ
PhD student, Computer Science Department, UCLA
Research areas: machine learning, XAI, interpretability
Citations & Impact (all-time)
  Citations: 192
  H-index: 7
  i10-index: 4
  Publications: 18
  Co-authors: 0
Resume (English only)
Academic Achievements
  • Publications:
    - 'Concepts or Skills? Rethinking Instruction Selection for Multi-modal Models' (under submission review, 2025)
    - 'On the Loss of Context-awareness in General Instruction Fine-tuning' (under submission review, 2025)
    - 'An Efficient Rehearsal Scheme for Catastrophic Forgetting Mitigation during Multi-stage Fine-tuning' (Findings of NAACL 2025)
    - 'Data Attribution for Diffusion Models: Timestep-induced Bias in Influence Estimation' (TMLR, June 2024)
    - 'Concept Gradient: Concept-based Interpretation Without Linear Assumption' (ICLR 2023)
    - 'Reducing Training Sample Memorization in GANs by Training with Memorization Rejection' (arXiv preprint, 2022)
    - 'On training sample memorization: Lessons from benchmarking generative modeling with a large-scale competition' (KDD 2021)
    - 'Efficient system verification with multiple weakly-hard constraints for runtime monitoring' (RV 2020)
Research Experience
  • Research experience spans data selection, practical interpretation methods for black-box models, reward modeling, long video generation, LLM agents, instruction selection, diffusion model data memorization, and prompt optimization.
Education
  • PhD student in Computer Science at UCLA, advised by Cho-Jui Hsieh.
  • Bachelor's degree in Computer Science from National Taiwan University, where he worked with Prof. Hsuan-Tien Lin on generative modeling and time series forecasting, and with Prof. Chung-Wei Lin on system verification and falsification.
Background
  • Research interests center on the memorization and forgetting mechanisms that arise when training machine learning models, particularly in LLM post-training applications. Recent projects investigate why fine-tuning LLMs with RLHF leads to less forgetting than supervised fine-tuning.
Miscellany
  • His favorite part of research is collaborating and engaging in insightful discussions with fellow researchers.