Highlights
Paper: 'Sharpe Ratio-Guided Active Learning for Preference Optimization in RLHF' (COLM 2025)
Paper: 'Sample-Efficient Preference Alignment in LLMs via Active Exploration' (COLM 2025), framing preference learning as an active contextual dueling bandit problem and introducing two new benchmarks, Jeopardy! and Haikus
Paper: 'Preference-Guided Diffusion for Multi-Objective Offline Optimization', proposing a dominance-based classifier-guided diffusion model for Pareto-optimal generation
Project: 'Antibiotic Discovery with Novel Mechanisms of Action Using Deep and Generative Models', combining GNNs and diffusion models, validated by wet lab experiments
Paper: 'Active Learning for Derivative-Based Global Sensitivity Analysis with Gaussian Processes', the first active learning approach tailored to derivative-based global sensitivity measures (DGSM)
Award: IBM PhD Fellowship
Award: MIT Rising Stars in EECS
Background
Currently on the job market for research scientist roles in industry
Works at the intersection of AI, generative models, and scientific discovery
Recent focus on scalable methods for LLM alignment, active learning, and diffusion-based optimization
Applications include antibiotic discovery and high-throughput screening
Designs algorithms that reduce annotation and computation costs while improving performance in safety-critical, resource-constrained domains
Projects include preference-guided diffusion models, active exploration and efficient sampling for LLMs, and automated multi-objective pipelines for antibiotic discovery validated through wet lab experiments
Broad expertise in Bayesian optimization, uncertainty quantification, and efficient reasoning for sequential decision-making under uncertainty