Generating Search Explanations using Large Language Models

📅 2025-07-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates, for the first time, the capability of large language models (LLMs) to generate aspect-oriented search explanations (AOSE)—concise, interpretable rationales that enhance users’ comprehension efficiency and information localization speed. To address the low factual accuracy and poor user acceptability of conventional explanation methods, we comparatively evaluate two dominant LLM architectures—encoder-decoder models (e.g., T5) and decoder-only models (e.g., LLaMA, ChatGLM)—on the AOSE task. Experimental results demonstrate that the best-performing LLMs significantly outperform multiple baselines—including retrieval-augmented and rule-based approaches—in factual accuracy, logical coherence, and user acceptability. Our key contributions are: (1) formalizing and empirically validating AOSE as a novel search explanation paradigm; (2) identifying LLM architecture choice as a critical determinant of explanation quality; and (3) introducing the first LLM-oriented benchmark specifically designed for search explanation generation.
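The AOSE task described above amounts to prompting an LLM for a short, aspect-focused rationale linking a query to a retrieved document. As a rough illustration only (the template, field names, and aspect wording here are assumptions, not the paper's actual prompt design), such a prompt might be assembled like this:

```python
# Hypothetical sketch of assembling an aspect-oriented search explanation
# (AOSE) prompt for an LLM. The template and example inputs are
# illustrative assumptions, not the paper's actual setup.

def build_aose_prompt(query: str, doc_snippet: str, aspect: str) -> str:
    """Compose a prompt asking the model for a concise, aspect-focused
    rationale explaining why the document matches the query."""
    return (
        "You are a search assistant. In one or two sentences, explain why "
        "the document below is relevant to the query, focusing on the "
        f"aspect '{aspect}'. Be factual and concise.\n\n"
        f"Query: {query}\n"
        f"Document: {doc_snippet}\n"
        "Explanation:"
    )

# Example usage: the resulting string would be fed to an encoder-decoder
# or decoder-only model's generation API.
prompt = build_aose_prompt(
    query="lightweight linux distros for old laptops",
    doc_snippet="Lubuntu uses the LXQt desktop and runs well on 1 GB of RAM.",
    aspect="hardware requirements",
)
```

The same prompt string could be passed unchanged to either model family, which is what makes the paper's head-to-head architecture comparison possible.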

📝 Abstract
Aspect-oriented explanations in search results are typically concise text snippets placed alongside retrieved documents to help users efficiently locate relevant information. While Large Language Models (LLMs) have demonstrated exceptional performance across a range of tasks, their potential to generate explanations for search results has not been explored. This study addresses that gap by leveraging both encoder-decoder and decoder-only LLMs to generate explanations for search results. The generated explanations are consistently more accurate and plausible than those produced by a range of baseline models.
Problem

Research questions and friction points this paper is trying to address.

Exploring LLMs for generating search result explanations
Comparing encoder-decoder and decoder-only LLM performance
Improving explanation accuracy over baseline models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging encoder-decoder LLMs for explanations
Utilizing decoder-only LLMs for search explanations
Generating more accurate plausible search explanations