Zhaoxuan Wu

Google Scholar ID: Th_mPm8AAAAJ
National University of Singapore
Citations & Impact (all-time)
  • Citations: 415
  • h-index: 9
  • i10-index: 9
  • Publications: 20
  • Co-authors: 26
Resume
Academic Achievements
  • Publications include:
    - 'Incentivizing Time-Aware Fairness in Data Sharing', NeurIPS-25
    - 'Position Paper: Uncovering Scaling Laws for Large Language Models via Inverse Problems', EMNLP-25 Findings
    - 'MEM1: Learning to Synergize Memory and Reasoning for Efficient Long-Horizon Agents', MTI-LLM @ NeurIPS-25 (Oral; top 1% of papers)
    - 'TETRIS: Optimal Draft Token Selection for Batch Speculative Decoding', ACL-25
    - 'Group-robust Sample Reweighting for Subpopulation Shifts via Influence Functions', ICLR-25
    - 'Paid with Models: Optimal Contract Design for Collaborative Machine Learning', AAAI-25 (Oral)
    - 'Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars', NeurIPS-24
    - 'Localized Zeroth-Order Prompt Optimization'
Research Experience
  • Currently a Postdoctoral Associate at the Singapore-MIT Alliance for Research and Technology (SMART), working with Prof. Daniela Rus and Assoc. Prof. Bryan Kian Hsiang Low.
Education
  • Ph.D. in Data Science, National University of Singapore (NUS), 2024; supervised by Assoc. Prof. Bryan Kian Hsiang Low. The Ph.D. was supported by the President’s Graduate Fellowship, jointly offered by the NUS Graduate School Integrative Sciences and Engineering Programme (ISEP) and the Institute of Data Science (IDS), and by the Singapore Data Science Consortium (SDSC) Dissertation Research Fellowship.
  • Bachelor of Science (Honors) in Data Science & Analytics with a minor in Computer Science, NUS, 2020.
Background
  • Research interests include, but are not limited to: data-centric AI (e.g., data valuation and selection, collaborative machine learning, incentives, fairness), resource-efficient machine learning (e.g., Bayesian optimization), large language models (e.g., inference-time techniques, prompting), and deep learning and its applications.