Publications
1. The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs
2. A Graph Talks, But Who's Listening? Rethinking Evaluations for Graph-Language Models
3. A Cognac Shot To Forget Bad Memories: Corrective Unlearning for GNNs
4. Topo Goes Political: TDA-Based Controversy Detection in Imbalanced Reddit Political Data (Best Paper Award)
5. Higher Order Structures for Graph Explanations
Research Experience
My current research focuses on understanding, evaluating, and improving the capabilities of AI systems, particularly large language models. During my undergraduate studies, I worked on problems in interpretability and machine unlearning, with a focus on graph neural networks.
Education
I graduated with a Bachelor's degree in Computer Science and Engineering with Honors from IIIT Hyderabad. As an undergraduate, I worked on machine learning with graphs under the supervision of Ponnurangam Kumaraguru at the Precog lab.
Background
I am currently an MPhil student at the University of Cambridge, studying Machine Learning and Machine Intelligence (MLMI). My research focuses on understanding, evaluating, and improving the capabilities of AI systems, primarily large language models. My ultimate goal is to help build AI that can reliably and autonomously perform complex tasks over extremely long horizons without human intervention.
Miscellany
Contact: Email / GitHub / Google Scholar / Twitter / LinkedIn / CV / Research Blog