The 4th Dimension for Scaling Model Size

📅 2025-06-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a bottleneck in scaling large language models (LLMs): capability gains are tightly coupled with parameter growth. We propose "Virtual Logical Depth" (VLD), a fourth scaling dimension that increases effective algorithmic depth (i.e., effective reasoning capacity) without increasing total parameter count, achieved via controlled parameter reuse. Through multi-scale controlled experiments, we systematically evaluate VLD's differential impact on knowledge capacity versus reasoning ability. Results demonstrate that VLD significantly enhances complex reasoning performance (e.g., mathematical and symbolic reasoning) while preserving knowledge memorization fidelity, providing the first empirical evidence that the two can be decoupled. Crucially, this improvement is achieved independently of parameter expansion. Thus, VLD offers a computationally efficient and environmentally sustainable pathway for LLM architecture design, advancing scalable yet parameter-frugal AI systems.

📝 Abstract
Scaling the size of large language models typically involves three dimensions: depth, width, and the number of parameters. In this work, we explore a fourth dimension, virtual logical depth (VLD), which increases the effective algorithmic depth without changing the overall parameter count by reusing parameters within the model. Although parameter reuse is not a new concept, its potential and characteristics in model scaling have not been thoroughly studied. Through carefully designed controlled experiments, we make the following key discoveries regarding VLD scaling:

- VLD scaling forces the knowledge capacity of the model to remain almost constant, with only minor variations.
- VLD scaling enables a significant improvement in reasoning capability, provided the scaling method is properly implemented.
- The number of parameters correlates with knowledge capacity, but not with reasoning capability; under certain conditions, it is not necessary to increase the parameter count to enhance reasoning.

These findings are consistent across various model configurations and are likely to be generally valid within the scope of our experiments.
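The core idea, increasing logical depth by reusing parameters, can be illustrated with a toy sketch: apply the same layer several times in a loop, so effective depth grows while the parameter count stays fixed. This is an illustrative example of parameter reuse in general, not the paper's actual VLD architecture; the layer shape and loop count are assumptions.

```python
import numpy as np

def mlp_layer(x, W, b):
    # One hidden layer with a tanh nonlinearity.
    return np.tanh(x @ W + b)

def forward(x, W, b, loops):
    # Apply the *same* layer `loops` times: effective (virtual logical)
    # depth grows with `loops`, but the parameter count does not.
    for _ in range(loops):
        x = mlp_layer(x, W, b)
    return x

rng = np.random.default_rng(0)
d = 8
W = rng.normal(scale=0.3, size=(d, d))  # shared weights
b = np.zeros(d)
x = rng.normal(size=(1, d))

shallow = forward(x, W, b, loops=1)  # logical depth 1
deep    = forward(x, W, b, loops=4)  # logical depth 4, same parameters

n_params = W.size + b.size
print(n_params)  # 72 parameters in both configurations
```

Both forward passes use exactly the same 72 parameters; only the number of times they are applied differs, which is the sense in which VLD decouples depth from parameter count.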
Problem

Research questions and friction points this paper is trying to address.

Exploring virtual logical depth (VLD) as a way to scale models without adding parameters
Studying the impact of parameter reuse on model knowledge and reasoning
Investigating the relationship between parameter count and reasoning capability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces virtual logical depth (VLD) scaling
Reuses parameters to deepen computation and enhance reasoning
Keeps knowledge capacity nearly constant while reasoning improves