🤖 AI Summary
This paper introduces Reference Prediction from Talks (RPT), a new task that aims to automatically recommend relevant references for spoken scientific talks. To support this task, the authors construct Talk2Ref—the first large-scale dataset of its kind—comprising 6,279 scientific talks and 43,429 cited papers, where relevance is approximated by the citations of each talk's source publication. Methodologically, they propose a dual-encoder architecture trained on Talk2Ref with domain adaptation, and investigate strategies for processing long lecture transcripts, including sliding-window segmentation. Experiments show that the fine-tuned model significantly outperforms zero-shot embedding baselines, validating both the dataset's quality and the efficacy of the approach. The data and trained models are publicly released, establishing a foundational resource for scientific content understanding and automated literature recommendation.
📝 Abstract
Scientific talks are a growing medium for disseminating research, and automatically identifying relevant literature that grounds or enriches a talk would be highly valuable for researchers and students alike. We introduce Reference Prediction from Talks (RPT), a new task that maps long, unstructured scientific presentations to relevant papers. To support research on RPT, we present Talk2Ref, the first large-scale dataset of its kind, containing 6,279 talks and 43,429 cited papers (26 per talk on average), where relevance is approximated by the papers cited in the talk's corresponding source publication. We establish strong baselines by evaluating state-of-the-art text embedding models in zero-shot retrieval scenarios, and propose a dual-encoder architecture trained on Talk2Ref. We further explore strategies for handling long transcripts, as well as training for domain adaptation. Our results show that fine-tuning on Talk2Ref significantly improves citation prediction performance, demonstrating both the challenges of the task and the effectiveness of our dataset for learning semantic representations from spoken scientific content. The dataset and trained models are released under an open license to foster future research on integrating spoken scientific communication into citation recommendation systems.
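The abstract mentions a dual-encoder setup and strategies for handling long transcripts. A minimal sketch of that general pattern follows: the transcript is split into overlapping sliding windows, each window is embedded, the window embeddings are mean-pooled into a single talk vector, and candidate papers are scored by cosine similarity. All names here are hypothetical, and the toy bag-of-words hash embedder merely stands in for the paper's trained text encoder.

```python
# Illustrative sketch (not the authors' code): sliding-window chunking of a
# long talk transcript plus dual-encoder-style retrieval scoring.
import math

def sliding_windows(tokens, size=512, stride=256):
    """Split a token list into overlapping fixed-size windows."""
    if len(tokens) <= size:
        return [tokens]
    return [tokens[i:i + size] for i in range(0, len(tokens) - stride, stride)]

def toy_embed(tokens, dim=64):
    """Hash tokens into an L2-normalized count vector.

    Stand-in for a real text encoder shared by the talk and paper sides.
    """
    vec = [0.0] * dim
    for tok in tokens:
        vec[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def mean_pool(vectors):
    """Average window embeddings into one talk embedding, re-normalized."""
    dim = len(vectors[0])
    pooled = [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in pooled)) or 1.0
    return [x / norm for x in pooled]

def score(talk_tokens, paper_tokens):
    """Cosine similarity between the pooled talk vector and a paper vector."""
    talk_vec = mean_pool([toy_embed(w) for w in sliding_windows(talk_tokens)])
    paper_vec = toy_embed(paper_tokens)
    return sum(a * b for a, b in zip(talk_vec, paper_vec))
```

In a trained dual encoder, `toy_embed` would be replaced by the learned encoder, and ranking the corpus by `score` yields the recommended references for a talk.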