Co-organizer of REALM (Research on Agent Language Models) at ACL 2025.
Gave invited talks on building the learning-from-interaction pipeline for LLMs at Together AI, MIT, Harvard, and Brown.
Organized a workshop at NeurIPS 2024 on 'System-2 Generalization at Scale'.
Invited to lead a session on 'Intelligent Agents' at the Foundation Capital AI Unconference, 2024, in San Francisco.
Published several papers on topics including unsupervised learning, guided exploration, and encoding recursive structure.
Research Experience
Part-time visitor, ServiceNow Research (Oct 2024 – Dec 2024), under the guidance of Alexandre Lacoste and Dzmitry Bahdanau; worked on post-training for LLM browser agents.
Research Intern, DeepMind (June 2023 – Feb 2024), mentored by Mandar Joshi, Kenton Lee, and Pete Shaw; worked on unsupervised browser control with LLMs.
Research Intern, Microsoft Research (Summer 2022), supervised by Marco Tulio Ribeiro and Scott Lundberg; worked on fixing model bugs with language feedback.
Education
Ph.D. in Computer Science, Stanford University (2019 – present), advised by Prof. Christopher D. Manning.
B.Tech in Electrical Engineering, Indian Institute of Technology, New Delhi (2013 – 2017); thesis on Inference over Knowledge Bases with Deep Learning.
Background
A final-year CS Ph.D. candidate working on deep learning and NLP, focusing on building LLMs that generalize out-of-distribution, either through structured inductive biases or by interacting with their environments.
Miscellany
Blog: https://skylerhallinan.com/
Google Scholar profile available
GitHub: MurtyShikhar
Twitter: @ShikharMurty