🤖 AI Summary
This study systematically investigates, for the first time, the capability of large language models (LLMs) to generate aspect-oriented search explanations (AOSE)—concise, interpretable rationales that enhance users’ comprehension efficiency and information localization speed. To address the low factual accuracy and poor user acceptability of conventional explanation methods, we comparatively evaluate two dominant LLM architectures—encoder-decoder models (e.g., T5) and decoder-only models (e.g., LLaMA, ChatGLM)—on the AOSE task. Experimental results demonstrate that the best-performing LLMs significantly outperform multiple baselines—including retrieval-augmented and rule-based approaches—in factual accuracy, logical coherence, and user acceptability. Our key contributions are: (1) formalizing and empirically validating AOSE as a novel search explanation paradigm; (2) identifying LLM architecture choice as a critical determinant of explanation quality; and (3) introducing the first LLM-oriented benchmark specifically designed for search explanation generation.
📝 Abstract
Aspect-oriented explanations in search results are typically concise text snippets placed alongside retrieved documents to help users efficiently locate relevant information. While Large Language Models (LLMs) have demonstrated exceptional performance on a range of problems, their potential to generate explanations for search results has not been explored. This study addresses that gap by leveraging both encoder-decoder and decoder-only LLMs to generate explanations for search results. The generated explanations are consistently more accurate and plausible than those produced by a range of baseline models.