News
Our papers 'Fair Continuous Resource Allocation with Learning of Impact' and 'Managing the Repercussions of Machine Learning Applications' have been accepted at NeurIPS.
Our paper 'Reinforcement Learning from Human Feedback with High-Confidence Safety Guarantees' has been accepted at RLC.
Our paper 'Analyzing the Relationship Between Difference and Ratio-Based Fairness Metrics' has been accepted at FAccT.
Our paper 'Fairness Guarantees under Demographic Shift' has been accepted at ICLR 2022.
Research Experience
I am a postdoctoral fellow at Princeton University's Center for Information Technology Policy, advised by Aleksandra Korolova. Previously, I worked on the Responsible AI team at Facebook AI Research, with Nicolas Le Roux at MSR FATE Montréal, and with Dennis Wei and Karthi Ramamurthy in the Trustworthy AI group at IBM.
Education
I completed my Ph.D. at the University of Massachusetts, where I was advised by Phil Thomas in the Autonomous Learning Lab. I earned my bachelor's degree in Computer Science and Mathematics from the University of Maryland Baltimore County, where I also competed as a track & field athlete.
Background
My research in machine learning (ML) focuses on sequential learning, with specific interests in: interactive learning, including bandits and reinforcement learning, with applications to large language models; responsible ML, with a focus on provable fairness and safety guarantees; and reliable ML, with a focus on robustness to challenges in data integrity. Previously, I was a research intern at IBM, Microsoft Research, and Meta, where I worked on issues related to algorithmic fairness in ML.