🤖 AI Summary
This work addresses the lack of systematic, automated evaluation of large language models' (LLMs) hierarchical comprehension of book-length texts. The proposed framework, HAMLET, organizes a source text into a three-tier "root-branch-leaf" hierarchy of key facts and combines query-focused summarization with semantic-consistency scoring into an end-to-end automated pipeline. Automated scores agree with human expert judgments over 90% of the time while reducing evaluation cost by up to 25x. HAMLET shows that LLMs struggle most with fine-grained, leaf-level comprehension and are sensitive to positional effects such as lost-in-the-middle, and that analytical queries are markedly harder than narrative ones. Consistent performance gaps also emerge between open-source and proprietary models and across model scales, enabling robust comparative analysis.
📝 Abstract
We introduce HAMLET, a holistic and automated framework for evaluating the long-context comprehension of large language models (LLMs). HAMLET structures source texts into a three-level key-fact hierarchy at the root, branch, and leaf levels, and employs query-focused summarization to evaluate how well models recall and faithfully represent information at each level. To validate the reliability of our fully automated pipeline, we conduct a systematic human study, showing that our automatic evaluation achieves over 90% agreement with expert human judgments while reducing cost by up to 25 times. HAMLET reveals that LLMs struggle with fine-grained comprehension, especially at the leaf level, and are sensitive to positional effects such as lost-in-the-middle. Analytical queries pose greater challenges than narrative ones, and consistent performance gaps emerge between open-source and proprietary models, as well as across model scales. Our code and dataset are publicly available at https://github.com/DISL-Lab/HAMLET.
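The pipeline described above (a tiered key-fact hierarchy, a model-generated summary, and per-level consistency scoring) can be sketched minimally as follows. This is an illustrative sketch, not the authors' implementation: the `KeyFact` structure, the `score_summary` helper, and the token-overlap similarity (a crude stand-in for the paper's semantic-consistency judge, which presumably uses an LLM or embedding model) are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class KeyFact:
    level: str  # "root", "branch", or "leaf"
    text: str

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap of word tokens; a crude stand-in for a
    semantic-consistency judge (e.g., an LLM- or embedding-based scorer)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def score_summary(summary: str, facts: list[KeyFact],
                  threshold: float = 0.3) -> dict[str, float]:
    """Per-level recall: the fraction of key facts at each hierarchy level
    that the summary covers above a consistency threshold."""
    hits: dict[str, list[float]] = {"root": [], "branch": [], "leaf": []}
    for fact in facts:
        covered = token_overlap(summary, fact.text) >= threshold
        hits[fact.level].append(1.0 if covered else 0.0)
    return {lvl: (sum(v) / len(v) if v else 0.0) for lvl, v in hits.items()}
```

Under this sketch, a summary that captures the root-level plot but omits a leaf-level detail would score 1.0 at the root level and 0.0 at the leaf level, making the "leaf-level bottleneck" finding directly measurable per tier.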