🤖 AI Summary
This study presents the first systematic empirical investigation of quality bottlenecks in mainstream LLM libraries, Hugging Face Transformers and vLLM, based on 313 defect-fixing commits and 7,748 test functions. To characterize defects, we propose the first dual-dimensional taxonomy, comprising 5 symptom categories and 14 root-cause categories. Our analysis reveals that API misuse is the predominant root cause (32.17%–48.19%), signaling a shift in LLM library defects from the algorithmic layer to the interface layer. Further investigation into why bugs escape detection identifies inadequate test cases (41.73%), missing test drivers (32.37%), and weak test oracles (25.90%) as the primary contributing factors. Collectively, this work establishes a foundational defect taxonomy and actionable guidelines for quality assurance in LLM infrastructure.
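The headline finding concerns interface-level errors rather than algorithmic ones. The minimal Python sketch below, a hypothetical illustration of our own rather than an example drawn from the studied commits, shows the flavor of API misuse in question, using the real Transformers generation API and the public gpt2 checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Interface-level misuse: GPT-2's tokenizer defines no pad token, so a
# batched call with padding=True raises a ValueError.
# inputs = tokenizer(["Hello", "Hello world"], padding=True, return_tensors="pt")

# The fix lives entirely at the API surface: explicitly reuse the EOS
# token for padding before invoking the batched call.
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(["Hello", "Hello world"], padding=True, return_tensors="pt")

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=8,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```

Note that no model weights or algorithms are wrong here; the defect arises purely from how the library's interface is invoked, which is the class of problem the taxonomy finds dominant.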
📝 Abstract
Large Language Model (LLM) libraries have emerged as the foundational infrastructure powering today's AI revolution, serving as the backbone for LLM deployment, inference optimization, fine-tuning, and production serving across diverse applications. Despite their critical role in the LLM ecosystem, these libraries face frequent quality issues and bugs that threaten the reliability of the AI systems built upon them, yet their bug characteristics and testing practices remain poorly understood. To address this knowledge gap, we present the first comprehensive empirical investigation into bug characteristics and testing practices in modern LLM libraries. We examine 313 bug-fixing commits extracted from two widely adopted LLM libraries: Hugging Face Transformers and vLLM. Through rigorous manual analysis, we establish comprehensive taxonomies categorizing bug symptoms into 5 types and root causes into 14 distinct categories. Our primary discovery is that API misuse has emerged as the predominant root cause (32.17%–48.19%), representing a notable transition from the algorithm-focused defects of conventional deep learning frameworks toward interface-oriented problems. Additionally, we examine 7,748 test functions to identify 7 distinct test oracle categories employed in current testing approaches, with predefined expected outputs (such as specific tensors and text strings) being the most common strategy. Our assessment of existing testing effectiveness demonstrates that the majority of bugs escape detection due to inadequate test cases (41.73%), lack of test drivers (32.37%), and weak test oracles (25.90%). Drawing on these findings, we offer concrete recommendations for enhancing LLM library quality assurance.
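To make the oracle distinction concrete, the sketch below (a hypothetical pytest-style example of our own construction, not one of the paper's 7,748 test functions) contrasts a predefined-expected-output oracle with a weak oracle that checks only shape and finiteness:

```python
import torch


def softmax_row(x: torch.Tensor) -> torch.Tensor:
    """Toy function under test: row-wise softmax."""
    return torch.softmax(x, dim=-1)


def test_softmax_strong_oracle():
    # Predefined expected output: the oracle pins down exact values,
    # the strategy the study found to be most common.
    x = torch.tensor([[0.0, 0.0]])
    expected = torch.tensor([[0.5, 0.5]])
    torch.testing.assert_close(softmax_row(x), expected)


def test_softmax_weak_oracle():
    # Weak oracle: only checks shape and finiteness, so a wrong but
    # well-shaped result (e.g., softmax over the wrong dimension)
    # would still pass.
    out = softmax_row(torch.randn(4, 8))
    assert out.shape == (4, 8)
    assert torch.isfinite(out).all()
```

A result computed over the wrong dimension would still satisfy the weak test, which is precisely the mechanism by which weak oracles let bugs escape detection.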