🤖 AI Summary
This work addresses the problem of automatic scientific figure caption generation. To overcome limitations in domain adaptability, evaluation reliability, and practical deployment of existing methods, we introduce the first large-scale, domain-diverse arXiv figure–caption dataset. Methodologically, we propose a vision-language joint modeling framework featuring a two-stage LLM-enhanced generation-and-editing pipeline, integrating domain-adaptive pretraining, multimodal alignment, and structured figure parsing. For evaluation, we establish a dual-track assessment protocol combining automated metrics with expert human review, and develop a human-in-the-loop annotation platform alongside an interactive scientist-facing writing assistant. Key contributions include: (1) an openly maintained, continuously updated corpus; (2) five editions of the international Scientific Figure Captioning Challenge; (3) production-ready assistive tools; and (4) a distilled taxonomy of five fundamental challenges and a forward-looking research agenda for next-generation scientific captioning systems.
📝 Abstract
Between 2021 and 2025, the SciCap project grew from a small seed-funded idea at The Pennsylvania State University (Penn State) into one of the central efforts shaping the scientific figure-captioning landscape. Supported by a Penn State seed grant, Adobe, and the Alfred P. Sloan Foundation, the project began as an attempt to test whether domain-specific training, which had proven successful for text models such as SciBERT, could also work for figure captions, and it expanded into a multi-institution collaboration. Over these five years, we curated, released, and continually updated a large collection of figure-caption pairs from arXiv papers; conducted extensive automatic and human evaluations on both generated and author-written captions; navigated the rapid rise of large language models (LLMs); launched annual challenges; and built interactive systems that help scientists write better captions. In this piece, we look back at the first five years of SciCap and summarize the key technical and methodological lessons we learned. We then outline five major unsolved challenges and propose directions for the next phase of research in scientific figure captioning.