🤖 AI Summary
Analyzing terabyte-scale cosmological simulation data is difficult due to its massive volume, complex structure, and the deep domain expertise it demands.
Method: This paper proposes a large language model (LLM)-based multi-agent collaborative analysis framework. It adopts a hierarchical supervisor–agent architecture that enables natural-language-driven user intent understanding and query validation, while leveraging data chunking, context-aware reasoning, and domain-knowledge injection to avoid full-data ingestion.
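The hierarchical supervisor–agent pattern described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the agent names, the `route` heuristic (a keyword check standing in for LLM-based intent parsing), and the `Query` type are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Query:
    text: str  # natural-language analysis request from the user

class RetrievalAgent:
    """Hypothetical agent that locates relevant data chunks."""
    name = "retrieval"
    def handle(self, query: Query) -> str:
        return f"[retrieval] located chunks for: {query.text}"

class AnalysisAgent:
    """Hypothetical agent that computes statistics over retrieved chunks."""
    name = "analysis"
    def handle(self, query: Query) -> str:
        return f"[analysis] computed statistics for: {query.text}"

class Supervisor:
    """Routes a parsed user intent to the appropriate specialized agent."""
    def __init__(self, agents):
        self.agents = {a.name: a for a in agents}

    def route(self, query: Query) -> str:
        # Toy keyword routing; the paper uses LLM-driven intent understanding.
        name = "retrieval" if "find" in query.text.lower() else "analysis"
        return self.agents[name].handle(query)

sup = Supervisor([RetrievalAgent(), AnalysisAgent()])
print(sup.route(Query("find halos above 1e14 solar masses")))
```

The key design point is that the supervisor owns routing and validation while each specialized agent handles one phase, so agents stay small and the conversation with the user happens at the supervisor level.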
Contribution/Results: The framework uniquely couples multi-agent coordination with the intrinsic characteristics of scientific data, supporting interactive, scalable, and lightweight analysis. Experiments on the HACC cosmological simulation dataset demonstrate accurate intent parsing and sub-second response times, significantly improving the efficiency and usability of exploratory analysis over large-scale scientific data.
📝 Abstract
Analyzing large-scale scientific datasets presents substantial challenges due to their sheer volume, structural complexity, and the need for specialized domain knowledge. Automation tools such as PandasAI typically require full data ingestion yet lack awareness of the overall data structure, making them impractical as intelligent analysis assistants for terabyte-scale datasets. To overcome these limitations, we propose InferA, a multi-agent system that leverages large language models to enable scalable and efficient scientific data analysis. At the core of the architecture is a supervisor agent that orchestrates a team of specialized agents, each responsible for a distinct phase of data retrieval and analysis. The system engages interactively with users to elicit their analytical intent and confirm query objectives, ensuring alignment between user goals and system actions. To demonstrate the framework's usability, we evaluate the system on ensemble runs from the HACC cosmology simulation, which comprise several terabytes of data.
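The abstract's central point, avoiding full-data ingestion by processing chunks with a streaming aggregate, can be illustrated with a small sketch. The function name and the toy in-memory chunks are hypothetical; in the actual system the chunks would come from terabyte-scale HACC simulation files.

```python
import numpy as np

def chunked_max_mass(chunks):
    """Stream chunks and keep a running maximum, never holding the full
    dataset in memory (a stand-in for chunk-wise analysis)."""
    best = -np.inf
    for chunk in chunks:          # each chunk: 1-D array of halo masses
        best = max(best, float(chunk.max()))
    return best

# Toy stand-in for reading simulation output chunk by chunk.
chunks = iter([np.array([1.0, 4.0, 2.0]), np.array([3.5, 0.5])])
print(chunked_max_mass(chunks))   # → 4.0
```

Because only a running scalar survives between chunks, peak memory stays bounded by one chunk regardless of total dataset size, which is what makes sub-second interactive queries plausible at terabyte scale.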