Paper 'Vision Language Models are Blind' accepted as an Oral at ACCV 2024, exposing fundamental failures of VLMs such as GPT-4o on simple visual tasks; featured by OpenAI, TechCrunch, and Ars Technica
Paper 'Understanding Generative AI Capabilities in Everyday Image Editing Tasks' accepted to CVPR 2025 Workshop, comparing multimodal image-editing models against human editors
arXiv preprint 'HoT: Highlighted Chain of Thought for Referencing Supporting Facts from Inputs' proposes a method that improves LLM accuracy and user experience by highlighting the supporting facts from the input that ground each answer
Awarded the Auburn Undergraduate Research Fellowship
Received an Honorable Mention for the Outstanding Undergraduate Researcher Award from the Computing Research Association