🤖 AI Summary
This study addresses the prevalent inconsistency between methodological descriptions in bioinformatics papers and their software implementations by formally defining and systematically investigating the paper–code consistency detection task. We introduce BioCon, the first expert-annotated benchmark dataset for this task, and propose a cross-modal consistency detection framework that aligns sentences from papers with code functions through a unified input representation, hybrid negative sampling, and a weighted focal loss on top of a pre-trained language model. This design mitigates the challenges posed by class imbalance and hard negative samples. Evaluated on BioCon, the method achieves an accuracy of 0.9056 and an F1 score of 0.8011, substantially advancing automated reproducibility assessment in computational biology.
📝 Abstract
Ensuring consistency between research papers and their corresponding software implementations is fundamental to software reliability and scientific reproducibility. However, this problem remains underexplored, particularly in bioinformatics, where discrepancies between methodological descriptions in papers and their actual code implementations are prevalent. To address this gap, this paper introduces a new task, paper–code consistency detection, and curates a collection of 48 bioinformatics software projects along with their associated publications. We systematically align sentence-level algorithmic descriptions from papers with function-level code snippets. Combined with expert annotations and a hybrid negative sampling strategy, this alignment yields BioCon, the first benchmark dataset in the bioinformatics domain tailored to this task. Building on this benchmark, we propose a cross-modal consistency detection framework that models the semantic relationships between natural language descriptions and code implementations. The framework adopts a unified input representation and leverages pre-trained models to capture deep semantic alignment between papers and code. To mitigate the effects of class imbalance and hard examples, we incorporate a weighted focal loss that enhances model robustness. Experimental results demonstrate that our framework effectively identifies consistency between papers and code in bioinformatics, achieving an accuracy of 0.9056 and an F1 score of 0.8011. Overall, this study opens a new research direction for paper–code consistency analysis and lays the foundation for automated reproducibility assessment and cross-modal understanding in scientific software.
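The abstract's weighted focal loss counters class imbalance by re-weighting the two classes and down-weighting easy examples so training focuses on hard ones. The paper's exact weighting scheme is not specified here, so the following is a minimal binary sketch in NumPy; the `alpha` and `gamma` values are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def weighted_focal_loss(probs, labels, alpha=0.75, gamma=2.0):
    """Illustrative binary weighted focal loss.

    probs  : predicted probability of the positive ("consistent") class
    labels : 0/1 ground-truth consistency labels
    alpha  : weight on the positive class (assumed rarer in the data)
    gamma  : focusing parameter that down-weights easy examples
    """
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    # p_t: probability the model assigned to the true class
    p_t = np.where(labels == 1, probs, 1 - probs)
    # alpha_t: class weight matching the true class
    a_t = np.where(labels == 1, alpha, 1 - alpha)
    # -alpha_t * (1 - p_t)^gamma * log(p_t), averaged over the batch
    return float(np.mean(-a_t * (1 - p_t) ** gamma * np.log(p_t)))
```

With `gamma=0` and `alpha=0.5` this reduces to half the standard cross-entropy; increasing `gamma` shrinks the contribution of confidently correct pairs, so mislabeled-looking or hard negative pairs dominate the gradient.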