1. Published work on integrating symbolic methods with large language models, introducing a new API that bridges classical and differentiable programming.
2. Proposed a novel approach to the parameter choice problem in unsupervised domain adaptation, which outperformed existing benchmark methods across several types of datasets.
3. Introduced the concept of reactive exploration in the context of lifelong reinforcement learning, demonstrating through experiments that policy gradient methods adapt better to continual distribution shifts than value-based methods such as Q-learning.
Research Experience
1. Worked on integrating symbolic methods and large language models, proposing a framework that uses neural networks, specifically LLMs, at its core and composes operations via task-specific zero-shot or few-shot prompting (see the first sketch after this list).
2. Researched the choice of algorithm hyper-parameters in unsupervised domain adaptation, proposed an extension of weighted least squares to vector-valued functions (see the second sketch after this list), and conducted a large-scale empirical comparison across multiple datasets.
3. Explored reactive exploration strategies for dealing with continuous domain shifts in lifelong reinforcement learning, showing that policy gradient methods adapt more quickly to distribution shifts than Q-learning (see the third sketch after this list).
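
To make the first item concrete, below is a minimal Python sketch of the composition idea: each operation wraps an LLM call behind a task-specific zero-shot or few-shot prompt, and operations compose like ordinary functions. The function and parameter names here are hypothetical illustrations, not the published framework's API.

```python
# Minimal sketch (hypothetical names, not the framework's actual interface):
# an "operation" is an LLM call behind a task-specific prompt, and operations
# compose like ordinary Python functions.

from typing import Callable

def llm(prompt: str) -> str:
    """Placeholder for a call to any large language model backend."""
    raise NotImplementedError("plug in an LLM client here")

def operation(instruction: str, examples: list[tuple[str, str]] | None = None) -> Callable[[str], str]:
    """Build a composable operation from a zero-shot instruction or few-shot examples."""
    def run(x: str) -> str:
        shots = "".join(f"Input: {i}\nOutput: {o}\n" for i, o in (examples or []))
        return llm(f"{instruction}\n{shots}Input: {x}\nOutput:")
    return run

# Symbolic composition of neural (LLM-backed) operations:
summarize = operation("Summarize the input in one sentence.")
translate = operation("Translate the input to German.")
pipeline = lambda text: translate(summarize(text))
```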
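
For the second item, the sketch below shows weighted least squares with a vector-valued (multi-output) target in its standard closed form, to illustrate the general idea; the weights, features, and dimensions are placeholder assumptions, not the estimator or data used in the study.

```python
# Weighted least squares with a vector-valued target: a minimal illustration.

import numpy as np

def weighted_least_squares(X, Y, w):
    """Solve argmin_B sum_i w_i * ||Y_i - X_i @ B||^2 in closed form.

    X: (n, d) inputs, Y: (n, k) vector-valued targets, w: (n,) nonnegative weights
    (e.g. importance weights between source and target domains).
    """
    D = np.diag(w)
    return np.linalg.solve(X.T @ D @ X, X.T @ D @ Y)  # (d, k) coefficient matrix

# Example with synthetic placeholder data: 100 samples, 5 features, 3 output dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = rng.normal(size=(100, 3))
w = rng.uniform(0.1, 2.0, size=100)   # stand-in for domain importance weights
B = weighted_least_squares(X, Y, w)
```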
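
For the third item, this sketch illustrates the reactive-exploration idea under stated assumptions: monitor a running prediction-error signal and raise the exploration bonus when it spikes, which signals a likely environment shift. The thresholds, error signal, and bonus schedule are illustrative, not the experimental setup of the cited work.

```python
# Reactive exploration, minimal sketch: boost exploration when the error signal spikes.

class ReactiveExploration:
    def __init__(self, base_bonus=0.01, boost=0.1, threshold=2.0, momentum=0.99):
        self.base_bonus = base_bonus      # exploration bonus in stationary phases
        self.boost = boost                # bonus right after a detected shift
        self.threshold = threshold        # spike = error this many times the running mean
        self.momentum = momentum
        self.mean_error = None
        self.bonus = base_bonus

    def update(self, prediction_error: float) -> float:
        """Return the exploration bonus to add to the reward for this step."""
        if self.mean_error is None:
            self.mean_error = prediction_error
        shift_detected = prediction_error > self.threshold * max(self.mean_error, 1e-8)
        self.mean_error = self.momentum * self.mean_error + (1 - self.momentum) * prediction_error
        # Raise the bonus on a detected shift, otherwise let it decay back toward the base level.
        self.bonus = self.boost if shift_detected else max(self.base_bonus, 0.9 * self.bonus)
        return self.bonus
```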
Background
Research interests include a neuro-symbolic perspective on Large Language Models (LLMs), addressing parameter choice issues in unsupervised domain adaptation, and coping with non-stationarity in lifelong reinforcement learning. The main focus is on combining symbolic approaches with deep learning techniques to solve complex problems.