ExPerT: Effective and Explainable Evaluation of Personalized Long-Form Text Generation

📅 2025-01-24
🤖 AI Summary
Evaluating personalized long-form text generation is bottlenecked by human judgment: only the original prompt author can reliably assess the output, and re-engaging the same individuals across studies is infeasible. Method: This paper introduces ExPerT, an explainable, reference-based automatic evaluation framework for personalized generation. It uses a large language model (LLM) to extract atomic aspects and their supporting evidence from both the generated and reference texts, match the aspects across texts, and judge their alignment along two dimensions central to personalization: content and writing style. Each step also produces a fine-grained natural-language explanation grounded in the localized evidence. Contribution/Results: By integrating writing-style modeling and evidence-level alignment into an end-to-end interpretable evaluation pipeline, ExPerT achieves a 7.2% relative improvement in correlation with human judgments over state-of-the-art text generation evaluation methods, and human evaluators rated the usability of its explanations at 4.7 out of 5, improving both evaluation transparency and practical utility.

📝 Abstract
Evaluating personalized text generated by large language models (LLMs) is challenging, as only the LLM user, i.e., prompt author, can reliably assess the output, but re-engaging the same individuals across studies is infeasible. This paper addresses the challenge of evaluating personalized text generation by introducing ExPerT, an explainable reference-based evaluation framework. ExPerT leverages an LLM to extract atomic aspects and their evidence from the generated and reference texts, match the aspects, and evaluate their alignment based on content and writing style -- two key attributes in personalized text generation. Additionally, ExPerT generates detailed, fine-grained explanations for every step of the evaluation process, enhancing transparency and interpretability. Our experiments demonstrate that ExPerT achieves a 7.2% relative improvement in alignment with human judgments compared to the state-of-the-art text generation evaluation methods. Furthermore, human evaluators rated the usability of ExPerT's explanations at 4.7 out of 5, highlighting its effectiveness in making evaluation decisions more interpretable.
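The extract-match-judge pipeline described in the abstract can be sketched in a few lines. The sketch below is illustrative only: the `llm` callable, the `Aspect` structure, the greedy matcher, and the F1-style aggregation are assumptions made for clarity, not the paper's actual prompts or implementation.

```python
from dataclasses import dataclass

@dataclass
class Aspect:
    content: str   # atomic semantic claim extracted from the text
    evidence: str  # span of the source text supporting the claim

def extract_aspects(text, llm):
    """Ask the LLM to decompose a text into atomic aspects with evidence."""
    return llm(f"Extract atomic aspects with evidence:\n{text}")

def match_aspects(generated, reference, llm):
    """Greedy one-to-one matching of generated aspects to reference aspects."""
    pairs, used = [], set()
    for g in generated:
        for i, r in enumerate(reference):
            if i not in used and llm(f"Same aspect? {g.content} | {r.content}") == "yes":
                pairs.append((g, r))
                used.add(i)
                break
    return pairs

def expert_score(generated_text, reference_text, llm):
    """F1-style score over aspects judged aligned on both content and style."""
    g = extract_aspects(generated_text, llm)
    r = extract_aspects(reference_text, llm)
    if not g or not r:
        return 0.0
    matched = sum(
        1.0 for ga, ra in match_aspects(g, r, llm)
        if llm(f"Content aligned? {ga.evidence} | {ra.evidence}") == "yes"
        and llm(f"Style aligned? {ga.evidence} | {ra.evidence}") == "yes"
    )
    precision, recall = matched / len(g), matched / len(r)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Unmatched generated aspects lower precision (hallucinated or extraneous content), while unmatched reference aspects lower recall (missing personalized content), which is why a reference-based aspect decomposition lends itself naturally to an F1-style aggregate.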
Problem

Research questions and friction points this paper is trying to address.

Language Model Evaluation
Text Quality Assessment
Human Judgment Alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

ExPerT
Automated Evaluation
Personalized Text Assessment