🤖 AI Summary
This study systematically evaluates the zero-shot transfer performance of 47 open-source Hugging Face context-based question answering (CBQA) models across eight heterogeneous datasets to identify models that generalize robustly across tasks without domain-specific fine-tuning.
Method: A unified evaluation framework is employed, integrating multi-model predictions via a genetic algorithm to enhance accuracy; correlations between model performance and context length, context complexity, and answer length are analyzed.
Contribution/Results: The work uncovers interactions among architectural choices (e.g., ELECTRA vs. BERT), pretraining objectives, and domain adaptability. Results show that *ahotrod/electra_large_discriminator_squad2_512* achieves a mean accuracy of 43% across all datasets and 96.45% on the biomedical_cpgQA dataset, while *bert-large-uncased-whole-word-masking-finetuned-squad* attains 82% on the IELTS dataset. These findings provide empirical guidance for deploying lightweight, plug-and-play CBQA systems and offer actionable model selection criteria grounded in cross-dataset generalization behavior.
📝 Abstract
Context-based question answering (CBQA) models provide more accurate and relevant answers by taking contextual information into account. They effectively extract specific information from a given context, making them useful in applications such as user support, information retrieval, and educational platforms. In this manuscript, we benchmark the performance of 47 CBQA models from Hugging Face on eight different datasets. The study aims to identify the best-performing model across diverse datasets without additional fine-tuning, which is valuable for practical applications because it minimizes the need to retrain models for specific datasets and streamlines their deployment in new contexts. The best-performing models were trained on the SQuAD v2 or SQuAD v1 datasets. The single best model was ahotrod/electra_large_discriminator_squad2_512, which yielded 43% accuracy across all datasets. We observed that the computation time of every model depends on the context length and the model size. Model performance usually decreases as answer length increases, and it also depends on context complexity. We also used a genetic algorithm to improve overall accuracy by integrating responses from multiple models. ahotrod/electra_large_discriminator_squad2_512 generated the best results for bioasq10b-factoid (65.92%), biomedical_cpgQA (96.45%), QuAC (11.13%), and the Question Answer Dataset (41.6%). bert-large-uncased-whole-word-masking-finetuned-squad achieved an accuracy of 82% on the IELTS dataset.
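The abstract does not spell out how the genetic algorithm integrates responses from multiple models. A minimal sketch of one plausible scheme follows, assuming the algorithm evolves a bit-mask that selects which models participate in a majority vote over their answers, with exact-match accuracy on a labeled set as the fitness function. The model answers, gold labels, and helper names below are hypothetical illustrations, not data or code from the paper.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical per-question answers from 4 models, plus gold answers
# (illustrative toy data, not from the paper).
model_answers = [
    ["paris", "paris", "lyon",  "paris"],
    ["h2o",   "water", "water", "h2o"],
    ["1945",  "1944",  "1945",  "1945"],
    ["red",   "red",   "red",   "blue"],
]
gold = ["paris", "water", "1945", "red"]
N_MODELS = 4

def fitness(mask):
    """Exact-match accuracy of a majority vote among the models selected by `mask`."""
    if not any(mask):
        return 0.0
    correct = 0
    for answers, g in zip(model_answers, gold):
        votes = [a for a, m in zip(answers, mask) if m]
        majority = Counter(votes).most_common(1)[0][0]  # deterministic tie-break
        correct += majority == g
    return correct / len(gold)

def evolve(pop_size=20, generations=30, mut_rate=0.2):
    """Evolve a model-selection bit-mask that maximizes ensemble accuracy."""
    pop = [[random.randint(0, 1) for _ in range(N_MODELS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # selection: keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_MODELS)  # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation on each gene with probability mut_rate
            child = [g ^ (random.random() < mut_rate) for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("selected models:", best, "accuracy:", fitness(best))
```

Because the fitter half of the population always survives intact, the best mask's fitness is non-decreasing across generations, so the evolved ensemble is never worse than the best randomly initialized one.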