- Published paper: 'Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding', NeurIPS, 2025;
- Published paper: 'Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models', NeurIPS, 2024;
- Published paper: 'Fine-Tuning Discrete Diffusion Models via Reward Optimization with Applications to DNA and Protein Design', ICLR, 2025;
- Published paper: 'Pessimistic Model-Based Offline RL: PAC Bounds and Posterior Sampling under Partial Coverage', ICLR, 2022.
Research Experience
- Member of the technical staff (research scientist) at Evolutionary Scale; dates not specified
- Ph.D. student, Department of Computer Science, Cornell University, 2020-2023
Education
- Ph.D., Department of Computer Science, Cornell University, 2020-2023. His Ph.D. research focused on the algorithmic foundations of reinforcement learning.
Background
- Research Interests: Large-scale multimodal generative (diffusion + language) models for science, test-time RL/search techniques and RL-based post-training for diffusion models, RL for robotics, etc.
- Personal Introduction: Member of the technical staff (research scientist) at Evolutionary Scale. From Japan.
Miscellany
- Personal Email: ueharamasatoshi@gmail.com
- Social Media Links: Google Scholar, CV, GitHub, LinkedIn