🤖 AI Summary
A systematic, data-driven understanding of the relationship between large language model (LLM) architectural configurations and performance remains lacking.
Method: This project introduces the first large-scale, open-source LLM architecture–performance benchmark dataset and proposes a data-driven quantification framework integrating multi-benchmark evaluation, statistical modeling, and mechanistic interpretability techniques to perform attribution analysis on key architectural parameters—including number of layers, attention heads, and feed-forward network dimensions.
Contribution/Results: Experiments reveal significant, nonlinear causal effects of specific architectural choices on downstream task performance. The project releases a fully reproducible dataset and analytical toolkit, uncovering empirical patterns in architectural evolution. These resources enable accurate performance prediction and efficient model design, establishing a novel paradigm for LLM interpretability and controllable optimization.
📝 Abstract
Large language models (LLMs) have achieved remarkable success across various domains, driving significant technological advancements and innovations. Despite the rapid growth in model scale and capability, systematic, data-driven research on how structural configurations affect performance remains scarce. To address this gap, we present a large-scale dataset encompassing diverse open-source LLM structures and their performance across multiple benchmarks. Leveraging this dataset, we conduct a systematic, data mining-driven analysis to validate and quantify the relationship between structural configurations and performance. Our study begins with a review of the historical development of LLMs and an exploration of potential future trends. We then analyze how various structural choices impact performance across benchmarks and further corroborate our findings using mechanistic interpretability techniques. By providing data-driven insights into LLM optimization, our work aims to guide the targeted development and application of future models. We will release our dataset at https://huggingface.co/datasets/DX0369/LLM-Structure-Performance-Dataset