Published several papers, including 'Training the Untrainable: Introducing Inductive Bias via Representational Alignment' (NeurIPS 2025), 'Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains' (ICLR 2025), and 'Brain Treebank: Large-scale intracranial recordings from naturalistic language stimuli' (NeurIPS Datasets and Benchmarks Track 2024, Oral, Top 1%). Supported by the NSF Graduate Research Fellowship and the Robert J. Shillman (1974) Fund Fellowship.
Research Experience
Worked as a research assistant at the MIT Infolab, focusing on problems in deep learning, multimodal processing, and computational neuroscience. Collaborated with Shuang Li, Yilun Du, Igor Mordatch, and Antonio Torralba on generative modeling during his MEng.
Education
Received a Bachelor's degree in Computer Science from MIT in 2023 and an MEng in Computer Science from MIT in 2024. Currently pursuing a PhD, advised by Boris Katz.
Background
Research interests include deep learning architecture design, representational alignment between neural networks, and the design of new training algorithms for neural networks. A particular focus is on neural networks with desirable theoretical or computational properties that are nonetheless difficult to optimize.