🤖 AI Summary
This work addresses the under-characterized higher-order errors exhibited by large language models (LLMs) when analyzing dense, dynamic, long-form texts requiring multi-tool coordination—particularly in domains like cryptocurrency and decentralized finance (DeFi). We introduce CryptoAnalystBench, an analyst-oriented benchmark comprising 198 real-world crypto queries, alongside an agent framework that integrates multi-tool invocation and a multidimensional evaluation protocol. For the first time, we define a taxonomy of seven higher-order error categories and assess model performance along dimensions including relevance, timeliness, analytical depth, and data consistency, leveraging human annotations, citation verification, and an enhanced LLM-as-a-judge mechanism. Our experiments reveal pervasive critical failures even in state-of-the-art models, demonstrating that the proposed methodology effectively identifies high-risk errors and delivers scalable feedback for developers. The full benchmark and toolchain are publicly released.
📝 Abstract
Modern analyst agents must reason over complex, high-token-count inputs, including dozens of retrieved documents, tool outputs, and time-sensitive data. While prior work has produced tool-calling benchmarks and examined factuality in knowledge-augmented systems, relatively little work studies their intersection: settings where LLMs must integrate large volumes of dynamic, structured and unstructured multi-tool outputs. We investigate LLM failure modes in this regime using crypto as a representative high-data-density domain. We introduce (1) CryptoAnalystBench, an analyst-aligned benchmark of 198 production crypto and DeFi queries spanning 11 categories; (2) an agentic harness equipped with relevant crypto and DeFi tools to generate responses across multiple frontier LLMs; and (3) an evaluation pipeline with citation verification and an LLM-as-a-judge rubric spanning four user-defined success dimensions: relevance, temporal relevance, depth, and data consistency. Using human annotation, we develop a taxonomy of seven higher-order error types that are not reliably captured by factuality checks or LLM-based quality scoring. We find that these failures persist even in state-of-the-art systems and can compromise high-stakes decisions. Based on this taxonomy, we refine the judge rubric to better capture these errors. While the judge does not align with human annotators on precise scoring across rubric iterations, it reliably identifies critical failure modes, enabling scalable feedback for developers and researchers studying analyst-style agents. We release CryptoAnalystBench with annotated queries, the evaluation pipeline, judge rubrics, and the error taxonomy, and outline mitigation strategies and open challenges in evaluating long-form, multi-tool-augmented systems.