Invited talk on LLM Governance and Alignment at the NAACL TrustNLP Workshop
Participation in the National Academy of Sciences Decadal Survey on Reliable AI-assisted Decision Making
Talk on Uncertainty Calibration and AI-assisted Decision Making at KDD's Workshop on Uncertainty Reasoning and Quantification in Decision Making
Panel participation and talk on Generative AI and Safety at the DSHealth Workshop, KDD
Panel participation on Trustworthy LLMs at AI for Open Society Day, KDD
Research Experience
His current projects focus on the governance and safety of large language models (LLMs), aiming to establish both theoretical frameworks and practical systems that make these models reliable and trustworthy. He has played a significant role in developing several widely used open-source trustworthy AI toolkits, including AI Fairness 360, AI Explainability 360, and Uncertainty Quantification 360.
Background
He is a Principal Research Scientist at IBM Research AI and the MIT-IBM Watson AI Lab, where he focuses on developing reliable AI solutions. His research interests include generative modeling, uncertainty quantification, and learning with limited data.