Paper 'DecodingTrust' received the NeurIPS 2023 Outstanding Paper Award and the Cybersecurity Award for 'Best Machine Learning and Security Paper'.
Core contributor to 'Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models' (2025).
Core contributor to 'Cosmos-Reason1: From Physical Common Sense To Embodied Reasoning' (2024).
Contributed to 'NVLM: Open Frontier-Class Multimodal LLMs' (2024).
Published multiple papers at top-tier conferences (NeurIPS, ICML, EMNLP, NAACL) on topics including retrieval-augmented generation (RAG), federated learning, video-language modeling, and detoxification of LLMs.
Organized the 'Trustworthy and Reliable Large-Scale Machine Learning Models' workshop at ICLR 2023.