Inverse Depth Scaling From Most Layers Being Similar

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how depth and width affect performance differently in large language models, focusing on how depth shapes the loss. Combining empirical analysis of large language models with theoretical modeling of simplified residual networks, the authors find that loss decreases approximately inversely with depth. This behavior arises from an ensemble-averaging effect driven by functional similarity across deep layers, rather than from compositional learning or the discretization of a smooth dynamical system. The findings expose an efficiency bottleneck in how current residual architectures exploit depth, pointing to architectural innovation as a prerequisite for unlocking the full benefit of deeper models.

📝 Abstract
Neural scaling laws relate loss to model size in large language models (LLMs), yet depth and width may contribute to performance differently, requiring more detailed studies. Here, we quantify how depth affects loss via analysis of LLMs and toy residual networks. We find loss scales inversely proportional to depth in LLMs, probably due to functionally similar layers reducing error through ensemble averaging rather than compositional learning or discretizing smooth dynamics. This regime is inefficient yet robust and may arise from the architectural bias of residual networks and target functions incompatible with smooth dynamics. The findings suggest that improving LLM efficiency may require architectural innovations to encourage compositional use of depth.
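The abstract's core claim, that functionally similar layers reduce error through ensemble averaging, can be illustrated with a minimal numerical sketch. This is not the paper's model: it simply treats each of `depth` "layers" as an independent noisy estimator of the same target and shows that averaging them shrinks the mean squared error roughly as 1/depth, the same inverse-depth behavior the authors report for loss. All names and parameters (`target`, `noise_std`, `n_trials`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not the paper's experiment): each "layer" is an
# independent noisy estimate of the same target value, and the residual
# stream is modeled as their average.
target = 1.0
noise_std = 0.5
n_trials = 20_000

def ensemble_mse(depth: int) -> float:
    """MSE of averaging `depth` functionally similar noisy estimators."""
    layer_outputs = target + noise_std * rng.standard_normal((n_trials, depth))
    ensemble = layer_outputs.mean(axis=1)
    return float(np.mean((ensemble - target) ** 2))

for depth in (1, 4, 16, 64):
    # MSE should fall roughly as noise_std**2 / depth.
    print(f"depth={depth:3d}  mse={ensemble_mse(depth):.5f}")
```

Each fourfold increase in depth cuts the error by about a factor of four, i.e. error ∝ 1/depth. This is the inefficient regime the abstract describes: depth acts as variance reduction over near-redundant layers, whereas genuinely compositional layers could in principle reduce loss much faster.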
Problem

Research questions and friction points this paper is trying to address.

neural scaling laws
model depth
large language models
loss scaling
residual networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

inverse depth scaling
layer similarity
ensemble averaging
compositional learning
residual networks