Agneet Chatterjee
Google Scholar ID: RGRaOegAAAAJ
Arizona State University
Computer Vision · Machine Learning
Citations & Impact (All-time)
  • Citations: 451
  • H-index: 14
  • i10-index: 15
  • Publications: 20
  • Co-authors: 13
Academic Achievements
  • Publications:
    - Stable Cinemetrics: Structured Taxonomy and Evaluation for Professional Video Generation (NeurIPS 2025)
    - AcT2I: Evaluating and Improving Action Depiction in Text-to-Image Models (EMNLP 2025)
    - Dual Caption Preference Optimization for Diffusion Models (TMLR)
    - TextInVision: Text and Prompt Complexity Driven Visual Text Generation Benchmark (CVPR BEAM Workshop 2025, Best Paper Award)
    - Getting it Right: Improving Spatial Consistency in Text-to-Image Models (ECCV 2024)
    - REVISION: Rendering Tools Enable Spatial Fidelity in Vision-Language Models (ECCV 2024)
    - On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation (CVPR 2024)
    - Evaluating Multimodal Large Language Models Across Distribution Shifts and Augmentations (CVPR 2024 Workshop on Evaluation of Generative Foundation Models)
    - Investigating and Addressing Hallucinations of LLMs in Tasks Involving Negation (NAACL 2025 TrustNLP Workshop)
    - Accelerating LLM Inference by Enabling Intermediate Layer Decoding (NAACL 2024 Findings)
  • Awards:
    - Best Paper Award at CVPR BEAM Workshop 2025
    - SCAI Doctoral Fellowship
    - SCAI Engineering Fellowship
Research Experience
  • Previously worked as a software engineer at Salesforce and served as a student researcher at Stability AI and LLNL; joined Stability AI as a Research Scientist Intern in January 2025, working on video generative models.
Education
  • Received Bachelor's in Computer Science from Jadavpur University in 2019; started PhD at Arizona State University in January 2023, advised by Chitta Baral and Yezhou Yang.
Background
  • PhD student in Computer Science whose research focuses on developing controllable image and video generative models.
Miscellany
  • This website's source code is borrowed from Jon Barron. Last Updated: September 2025.