Five Years of SciCap: What We Learned and Future Directions for Scientific Figure Captioning

📅 2025-12-25
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the problem of automatic scientific figure caption generation. To overcome limitations in domain adaptability, evaluation reliability, and practical deployment of existing methods, we introduce the first large-scale, domain-diverse arXiv figure–caption dataset. Methodologically, we propose a vision-language joint modeling framework featuring a two-stage LLM-enhanced generation-and-editing pipeline, integrating domain-adaptive pretraining, multimodal alignment, and structured figure parsing. For evaluation, we establish a dual-track assessment protocol combining automated metrics with expert human review, and develop a human-in-the-loop annotation platform alongside an interactive scientist-facing writing assistant. Key contributions include: (1) an openly maintained, continuously updated corpus; (2) five editions of the international Scientific Figure Captioning Challenge; (3) production-ready assistive tools; and (4) a distilled taxonomy of five fundamental challenges and a forward-looking research agenda for next-generation scientific captioning systems.
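The two-stage generation-and-editing pipeline described above can be illustrated with a minimal sketch. All function names and the stub logic here are hypothetical placeholders: a real system would back `generate_draft` with a vision-language model over the parsed figure and `edit_caption` with an LLM editing pass grounded in the paper's text.

```python
# Hypothetical sketch of a two-stage generate-then-edit captioning pipeline.
# Stage 1 drafts a caption from parsed figure content; Stage 2 edits the
# draft using context from the paper. The "models" below are plain stubs.

def generate_draft(figure_text: str, mentions: list[str]) -> str:
    """Stage 1 (stub): draft a caption from structured figure parsing output."""
    topic = figure_text or (mentions[0] if mentions else "the figure")
    return f"Figure showing {topic}."

def edit_caption(draft: str, paper_context: str) -> str:
    """Stage 2 (stub): an LLM-style editing pass that grounds the draft
    in the surrounding paper context."""
    if paper_context and paper_context not in draft:
        return draft.rstrip(".") + f", in the context of {paper_context}."
    return draft

def caption_pipeline(figure_text: str, mentions: list[str], paper_context: str) -> str:
    draft = generate_draft(figure_text, mentions)
    return edit_caption(draft, paper_context)

caption = caption_pipeline(
    figure_text="accuracy vs. training epochs",
    mentions=["Figure 2 reports accuracy over epochs."],
    paper_context="domain-adaptive pretraining",
)
print(caption)
```

The design point the sketch captures is the separation of concerns: the drafting stage only needs figure-level signals, while the editing stage injects paper-level context, so each stage can be trained or swapped independently.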

๐Ÿ“ Abstract
Between 2021 and 2025, the SciCap project grew from a small seed-funded idea at The Pennsylvania State University (Penn State) into one of the central efforts shaping the scientific figure-captioning landscape. Supported by a Penn State seed grant, Adobe, and the Alfred P. Sloan Foundation, what began as our attempt to test whether domain-specific training, which was successful in text models like SciBERT, could also work for figure captions expanded into a multi-institution collaboration. Over these five years, we curated, released, and continually updated a large collection of figure-caption pairs from arXiv papers, conducted extensive automatic and human evaluations on both generated and author-written captions, navigated the rapid rise of large language models (LLMs), launched annual challenges, and built interactive systems that help scientists write better captions. In this piece, we look back at the first five years of SciCap and summarize the key technical and methodological lessons we learned. We then outline five major unsolved challenges and propose directions for the next phase of research in scientific figure captioning.
Problem

Research questions and friction points this paper is trying to address.

Developing domain-specific training methods for scientific figure captioning
Creating and evaluating large datasets of figure-caption pairs from arXiv
Addressing unsolved challenges in automated scientific caption generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain-specific training for figure captioning
Large-scale dataset curation from arXiv papers
Interactive systems to improve caption writing
Ting-Hao K. Huang
The Pennsylvania State University, 201 Old Main, University Park, PA, USA
Ryan A. Rossi
Adobe Research
Machine Learning, Personalization, Graph Representation Learning, Graph ML, Graph Theory
Sungchul Kim
Adobe
Data Mining, Machine Learning, Bioinformatics
Tong Yu
Adobe Research
Ting-Yao E. Hsu
The Pennsylvania State University, 201 Old Main, University Park, PA, USA
Ho Yin Ng
The Pennsylvania State University, 201 Old Main, University Park, PA, USA
C. Lee Giles
The Pennsylvania State University, 201 Old Main, University Park, PA, USA