Research Experience
During his PhD, developed methods for AI interpretability and robustness that were applied across fields including computer vision, NLP, and biology. Also conducted research on auditing radiology vision models using interpretability techniques based on generative image models.
Published a paper applying these interpretability techniques to audit COVID-19 deep learning classifiers, proposing changes to dataset construction to improve generalization. The work was featured in Nature Machine Intelligence and discussed in an Outlook piece in Nature.
Education
PhD in Computer Science and Engineering from the University of Washington; currently a radiology resident at Stanford University.
Background
Physician-scientist (MD/PhD) focused on building safe and reliable AI systems for medicine and biotech. During his PhD in Computer Science and Engineering at the University of Washington, he developed methods for AI interpretability and robustness, with applications in computer vision (radiology, dermatology), natural language processing, and biology (bulk and single-cell transcriptomics). Outside of research, he consults on projects at the intersection of AI, medicine, and biology. He is currently a radiology resident at Stanford.
Miscellany
Personal interests include writing about his research and practice on his blog.