Paper 'FairProof: Confidential and Certifiable Fairness for Neural Networks' won the TensorOpera-FedML Best Paper Award at Privacy-ILR Workshop, ICLR 2024
Paper 'Cold Case: The Lost MNIST Digits' received a Spotlight Oral Presentation (top ~2.9% of submissions) at NeurIPS 2019
Paper 'On the design of CNNs for automatic detection of Alzheimer’s disease' received Best Paper Honorable Mention at NeurIPS 2019 ML for Health Workshop
Received the 'Contributions to Diversity' Award from the CSE Department, UCSD, in June 2024
Selected as a Rising Star in Data Science 2024
Serving on the Program Committee of SaTML'25
Published multiple papers at top venues including ICML 2024, NeurIPS 2024, IJCAI 2024, NAACL 2023, and ECML PKDD 2021 on topics such as interpretability, model auditing, fairness, and unlearning
Background
AI researcher broadly interested in the foundations of Trustworthy AI
Focuses on AI Privacy, Security & Safety
Aims to make AI systems accountable and incentive-aware by exposing vulnerabilities in existing Trustworthy AI tools (e.g., unlearning, attribution, XAI)
Develops trustless verification systems using cryptographic tools like Zero-Knowledge Proofs
Studies the auditing of closed models from both theoretical and practical perspectives
Proposes evaluation frameworks and metrics for Trustworthy AI