🤖 AI Summary
The mechanism by which multilayer perceptron (MLP) parameters in large language models (LLMs) store and utilize knowledge remains poorly understood.
Method: We propose and empirically validate the "parameter specialization" hypothesis: improved model capability arises from increasing functional specialization of MLP parameters in encoding semantically similar knowledge, which enhances the efficiency of knowledge retrieval and reasoning. We introduce the first micro-level quantification of how knowledge is distributed across parameters, combining multi-model analysis, causal intervention training, and cross-model comparative experiments.
Contribution/Results: Across 20 open-source LLMs, we demonstrate a statistically significant positive correlation between parameter specialization degree and knowledge competence. Causal training further confirms that actively inducing specialization markedly improves knowledge utilization efficiency. This work uncovers a novel knowledge organization principle in LLMs, providing both theoretical foundations and practical methodologies for model compression, knowledge editing, and interpretability research.
📝 Abstract
Over time, a growing number of large language model families have been released to the community, and researchers strive to maximize performance under constrained parameter budgets. From a microscopic perspective, however, there has been little research on how knowledge should be stored in model parameters, particularly within MLPs, so that the model can use it effectively. In this work, we analyze twenty publicly available open-source large language models to investigate the relationship between their strong performance and the way knowledge is stored in their MLP parameters. Our findings reveal that as language models become more advanced and demonstrate stronger knowledge capabilities, their parameters exhibit increased specialization: parameters in the MLPs tend to focus on encoding similar types of knowledge. We experimentally validate that this specialized distribution of knowledge improves the efficiency of knowledge utilization, and through causal training experiments we further confirm that it plays a critical role in how effectively the model leverages its stored knowledge.
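To make the notion of "parameter specialization" concrete, here is a toy illustration (our own sketch, not the paper's actual metric): assuming each MLP neuron's activations can be attributed to hypothetical knowledge categories, specialization can be scored as one minus the normalized entropy of its per-category activation counts, so a neuron that fires for a single category scores near 1 and a uniformly active neuron scores 0.

```python
import math

def neuron_specialization(activation_counts):
    """Toy specialization score for one MLP neuron.

    activation_counts: how often the neuron fires for each (hypothetical)
    knowledge category. Returns 1 - normalized Shannon entropy:
    1.0 = fires for a single category, 0.0 = uniform across categories.
    """
    total = sum(activation_counts)
    k = len(activation_counts)
    if total == 0 or k < 2:
        return 0.0
    probs = [c / total for c in activation_counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return 1.0 - entropy / math.log(k)

def layer_specialization(neurons):
    """Average specialization over the neurons of an MLP layer."""
    return sum(neuron_specialization(n) for n in neurons) / len(neurons)

# A specialized neuron vs. a generalist neuron, over 4 categories:
specialized = [97, 1, 1, 1]     # almost always fires for one category
generalist = [25, 25, 25, 25]   # fires uniformly across categories
print(neuron_specialization(specialized))  # high, close to 1
print(neuron_specialization(generalist))   # 0.0
```

Under this reading, the paper's finding is that stronger models show higher layer-level scores of this kind, i.e., their MLP neurons concentrate on narrower slices of knowledge.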