🤖 AI Summary
This study investigates the differential effects of depth and width on performance in large language models, with a particular focus on how depth influences the loss. By combining empirical analysis of large language models with theoretical modeling of simplified residual networks, the authors find that model loss decreases approximately inversely with depth. This behavior arises from an ensemble-averaging effect caused by functional similarity across deep layers, rather than from compositional learning or the discretization of smooth dynamical systems. The findings reveal that current residual architectures leverage depth inefficiently, underscoring the need for architectural innovation to unlock the full benefits of increased depth.
📝 Abstract
Neural scaling laws relate loss to model size in large language models (LLMs), yet depth and width may contribute to performance differently, calling for more detailed study. Here, we quantify how depth affects loss through analysis of LLMs and toy residual networks. We find that loss scales inversely with depth in LLMs, likely because functionally similar layers reduce error through ensemble averaging rather than through compositional learning or the discretization of smooth dynamics. This regime is inefficient yet robust, and it may arise from the architectural bias of residual networks together with target functions that are incompatible with smooth dynamics. The findings suggest that improving LLM efficiency may require architectural innovations that encourage compositional use of depth.
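The ensemble-averaging mechanism described in the abstract can be illustrated with a toy calculation (a minimal sketch, not the authors' actual model): if each of `depth` functionally similar layers contributes an independent noisy estimate of the same residual update, averaging them drives the squared error down in proportion to 1/depth, matching the reported inverse scaling of loss with depth. The estimator setup and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 1.0       # the quantity all layers are "trying" to compute
noise_std = 0.5    # assumed per-layer noise (illustrative)
n_trials = 20000

def ensemble_loss(depth):
    # Each "layer" produces an independent noisy estimate of the target;
    # the residual stream effectively averages the layers' contributions.
    estimates = target + noise_std * rng.standard_normal((n_trials, depth))
    prediction = estimates.mean(axis=1)
    # Mean squared error of the averaged prediction.
    return float(np.mean((prediction - target) ** 2))

for depth in [1, 2, 4, 8, 16]:
    print(f"depth={depth:2d}  loss≈{ensemble_loss(depth):.4f}")
# Loss falls roughly as noise_std**2 / depth, i.e. inversely with depth.
```

Under these assumptions the loss at depth 16 is about one sixteenth of the loss at depth 1, the same 1/depth regime the paper attributes to ensemble averaging; genuinely compositional layers would not follow this curve.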