“Localizing Knowledge in Diffusion Transformers,” NeurIPS 2025 (First author): Proposes a method to localize knowledge encoding within Diffusion Transformers for interpretable and efficient model editing.
“Improving Compositional Attribute Binding in Text-to-Image Generative Models via Enhanced Text Embeddings,” Under Review (Co-first author): Demonstrates significant compositional improvements by fine-tuning a linear projection on CLIP’s representation space.
“Enhancing Epileptic Seizure Detection with EEG Feature Embeddings,” BioCAS 2023 (Oral, First author): Achieves state-of-the-art performance with 100% event-based sensitivity and 99% specificity on the CHB-MIT dataset.
“A Data-Centric Approach for Improving Adversarial Training Through the Lens of Out-of-Distribution Detection,” CSICC 2023 (Oral, Co-first author): Enhances adversarial robustness by using out-of-distribution detection to identify and remove challenging samples during training.
Contributed to the PhytoOracle project, published in Frontiers in Plant Science 2023: Developed modular, scalable pipelines for phenomics data processing.
“Your Out-of-Distribution Detection Method is Not Robust!,” NeurIPS 2022: Exposes vulnerabilities of OOD detection methods under adversarial attack and proposes the Adversarially Trained Discriminator (ATD) as a robust alternative.