🤖 AI Summary
The insurance domain has long lacked a specialized large language model (LLM) evaluation benchmark: existing general-purpose models exhibit significant limitations in actuarial reasoning and regulatory compliance, while domain-specific models suffer from insufficient business adaptability and compliance robustness. Method: We introduce CUFEInse v1.0, the first multidimensional insurance-specific evaluation benchmark, covering five dimensions: domain expertise, industry understanding, safety and compliance, intelligent agent capabilities, and logical rigor. It comprises 54 fine-grained metrics and 14,430 high-quality questions, underpinned by a "quantification-oriented, expert-driven, multi-validation" evaluation paradigm that integrates structured knowledge modeling, iterative expert verification, and mixed qualitative-quantitative analysis. Contribution/Results: A comprehensive evaluation of 11 state-of-the-art models reveals systematic weaknesses in underwriting/claims reasoning and compliant document generation. Empirical results further validate the efficacy of domain-adaptive fine-tuning while also delineating its limits.
📝 Abstract
This paper comprehensively elaborates on the construction methodology, multi-dimensional evaluation system, and underlying design philosophy of CUFEInse v1.0. Adhering to the principles of "quantification-oriented, expert-driven, and multi-validation," the benchmark establishes an evaluation framework covering 5 core dimensions, 54 sub-indicators, and 14,430 high-quality questions, encompassing insurance theoretical knowledge, industry understanding, safety and compliance, intelligent agent application, and logical rigor. Based on this benchmark, a comprehensive evaluation was conducted on 11 mainstream large language models. The results reveal that general-purpose models share common bottlenecks such as weak actuarial capabilities and inadequate compliance adaptation, while high-quality domain-specific training demonstrates significant advantages in insurance vertical scenarios but still falls short in business adaptation and compliance. The evaluation also precisely identifies the common bottlenecks of current large models in professional scenarios such as insurance actuarial calculation, underwriting and claim settlement reasoning, and compliant marketing copywriting. The establishment of CUFEInse not only fills the gap in professional evaluation benchmarks for the insurance field, providing academia and industry with a professional, systematic, and authoritative evaluation tool; its construction concept and methodology also offer an important reference for the evaluation paradigm of large models in vertical domains, supporting academic model optimization and industrial model selection. Finally, the paper outlines future iterations of the benchmark and the core development direction of "domain adaptation + reasoning enhancement" for insurance large models.