🤖 AI Summary
This work addresses a bottleneck in large language model (LLM) scaling: capability gains are tightly coupled to parameter growth. The authors propose Virtual Logical Depth (VLD), a fourth scaling dimension that increases effective algorithmic depth (and thus reasoning capacity) without increasing the total parameter count, achieved through controlled parameter reuse. Multi-scale controlled experiments systematically evaluate VLD's differential impact on knowledge capacity versus reasoning ability. The results show that VLD substantially improves complex reasoning performance (e.g., mathematical and symbolic reasoning) while leaving knowledge memorization essentially unchanged, providing empirical evidence that the two capabilities can be decoupled. Because this improvement requires no parameter expansion, VLD offers a compute-efficient pathway for LLM architecture design toward scalable yet parameter-frugal systems.
📝 Abstract
Scaling the size of large language models typically involves three dimensions: depth, width, and the number of parameters. In this work, we explore a fourth dimension, virtual logical depth (VLD), which increases the effective algorithmic depth without changing the overall parameter count by reusing parameters within the model. Although parameter reuse is not a new concept, its potential and characteristics in model scaling have not been thoroughly studied. Through carefully designed controlled experiments, we make the following key discoveries regarding VLD scaling:
- VLD scaling forces the knowledge capacity of the model to remain almost constant, with only minor variations.
- VLD scaling enables a significant improvement in reasoning capability, provided the scaling method is properly implemented.
- The number of parameters correlates with knowledge capacity, but not with reasoning capability. Under certain conditions, it is not necessary to increase the parameter count to enhance reasoning.
These findings hold consistently across the model configurations we tested and are likely to generalize within the scope of our experiments.
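The core mechanism behind VLD, reusing the same parameters to add algorithmic depth, can be illustrated with a minimal sketch. The snippet below is a toy illustration, not the paper's actual architecture: a single shared block (here just one linear map with a nonlinearity, a hypothetical stand-in for a transformer layer) is applied repeatedly, so the effective depth grows with the reuse count while the parameter count stays fixed.

```python
import numpy as np

def init_block(d, rng):
    # Parameters of ONE shared block (a single linear map, for illustration).
    return rng.standard_normal((d, d)) / np.sqrt(d)

def forward(x, W, virtual_depth):
    # VLD-style reuse (sketch): apply the SAME block `virtual_depth` times.
    # Effective algorithmic depth grows; the parameter count does not.
    for _ in range(virtual_depth):
        x = np.tanh(x @ W)
    return x

rng = np.random.default_rng(0)
d = 8
W = init_block(d, rng)          # the only parameters in the model
x = rng.standard_normal((1, d))

shallow = forward(x, W, virtual_depth=1)
deep = forward(x, W, virtual_depth=4)

# Both runs use exactly the same W.size parameters, yet the deep run
# performs four times as much sequential computation.
```

A conventional depth increase would allocate a new weight matrix per layer (parameters scale with depth); here the reuse count is a free knob that changes compute depth only, which is the decoupling the experiments above probe.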