Ryan Koo
Scholar

Google Scholar ID: PS7Qw2gAAAAJ
University of Minnesota
Machine Learning · Neural Networks · Natural Language Processing
Citations & Impact
All-time
  Citations: 297
  H-index: 4
  i10-index: 3
  Publications: 8
  Co-authors: 6
Academic Achievements
  • Learning Explainable Dense Reward Shapes via Bayesian Optimization (Preprint, 2025)
  • Dynamic Multi-Reward Weighting for Multi-Style Controllable Generation (EMNLP Main, 2024)
  • Benchmarking Cognitive Biases in Large Language Models as Evaluators (ACL Findings, 2024)
  • Meta-Crafting: Improved Detection of Out-of-Distributed Texts via Crafting Metadata Space (AAAI Student Abstract, 2024)
  • CoEdIT: Text Editing by Task-Specific Instruction Tuning (EMNLP Findings, 2023)
Research Experience
  • Conducting research at the MinnesotaNLP lab, focusing on new methods for modeling heterogeneous rewards, either through dense reward shaping or by defining new, more grounded reward functions for language model tuning.
Education
  • Master's degree, University of Minnesota, Advisor: Prof. Dongyeop Kang, Research Focus: Natural Language Processing and Reinforcement Learning
Background
  • Currently a Master's student in Computer Science at the University of Minnesota, with research interests at the intersection of natural language processing and reinforcement learning, as well as other post-training methods, particularly for alignment.
Miscellany
  • Currently applying for Ph.D. positions.