Published 'Emergent Reciprocity and Team Formation from Randomized Uncertain Social Preferences' at NeurIPS 2020 and 'Designing Neural Network Architectures Using Reinforcement Learning' at ICLR 2017. His master's thesis, 'Towards Practical Neural Network Meta-Modeling', received the Second Place Charles and Jennifer Johnson Computer Science MEng Thesis Award from MIT's EECS department.
Research Experience
Research scientist on OpenAI's Multi-Agent Team; previously worked with OpenAI's robotics team. Research projects include emergent reciprocity and team formation in reinforcement learning agents, and progressively more complex strategy and tool use emerging from multi-agent hide-and-seek.
Education
M.Eng. in Electrical Engineering and Computer Science with a focus in AI, and a B.S. in EECS and Physics, both from MIT.
Background
Interested in environments that allow for unbounded learning, multi-agent reinforcement learning and social dilemmas, and generalization to unseen environments (e.g., simulation to reality). At OpenAI, he has worked on emergence from multi-agent autocurricula, state estimation from vision, attention-based network architectures for reinforcement learning, and, most recently, multi-agent social dilemma games.