🤖 AI Summary
This work investigates the correspondence between conceptual abstraction and network depth in large language models (LLMs), proposing and empirically validating the “Concept Depth” hypothesis: concepts, ranging from concrete facts to abstract sentiment and complex reasoning, are encoded at progressively deeper layers in ascending order of abstraction. Methodologically, we design layer-wise representation probes and conduct systematic analyses across model families (Gemma, LLaMA, Qwen) and task domains (fact retrieval, sentiment analysis, logical reasoning), and further assess representational robustness under controlled input-noise injection and weight quantization. Key contributions include: (1) the first formal definition and empirical validation of Concept Depth; (2) evidence that complex reasoning depends critically on deep-layer representations, whereas simpler tasks are resolved efficiently in shallow layers; and (3) the finding that external perturbations delay the emergence (“maturation”) of abstract concepts to deeper layers, supporting a hierarchical, progressive mechanism for acquiring conceptual representations.
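As a rough illustration of the layer-wise probing setup described above, the sketch below extracts last-token hidden states from every layer of a small open model and fits a linear probe per layer. The model name, toy examples, and probe choice are placeholders rather than the paper's exact configuration.

```python
# Minimal layer-wise probing sketch (assumes a HuggingFace-style causal LM).
# The model name, toy data, and linear probe are illustrative placeholders,
# not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "Qwen/Qwen2-0.5B"  # placeholder; the paper studies Gemma, LLaMA, Qwen
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

# Toy binary task: is the stated fact correct? (Real experiments use full datasets.)
texts = [
    "The capital of France is Paris.",
    "The capital of France is Rome.",
    "Water freezes at 0 degrees Celsius.",
    "Water freezes at 50 degrees Celsius.",
]
labels = [1, 0, 1, 0]

# Collect one feature vector per layer per example (hidden state of the last token).
per_layer_feats = None
with torch.no_grad():
    for text in texts:
        ids = tok(text, return_tensors="pt")
        hidden = model(**ids).hidden_states  # (num_layers + 1) tensors of [1, seq, dim]
        vecs = [h[0, -1].float().numpy() for h in hidden]
        if per_layer_feats is None:
            per_layer_feats = [[] for _ in vecs]
        for layer_feats, v in zip(per_layer_feats, vecs):
            layer_feats.append(v)

# Fit a linear probe per layer; tracking accuracy against depth shows where the
# concept becomes linearly decodable (a held-out split should be used in practice).
for depth, feats in enumerate(per_layer_feats):
    probe = LogisticRegression(max_iter=1000).fit(feats, labels)
    print(f"layer {depth:2d}: probe accuracy = {probe.score(feats, labels):.2f}")
```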
📝 Abstract
Large language models (LLMs) have shown remarkable performance across a wide range of tasks. However, the mechanisms by which these models encode tasks of varying complexity remain poorly understood. In this paper, we explore the hypothesis that LLMs process concepts of varying complexity in different layers, introducing the notion of “Concept Depth” to capture the idea that more complex concepts are typically acquired in deeper layers. Specifically, we categorize concepts by their level of abstraction, defining them in order of increasing complexity within factual, emotional, and inferential tasks. We conduct extensive probing experiments on layer-wise representations across several LLM families (Gemma, LLaMA, Qwen) and datasets spanning these three task domains. Our findings reveal that simpler tasks can be probed accurately in shallow layers, while more complex tasks typically require deeper layers for accurate understanding. Additionally, we examine how external factors, such as adding noise to the input and quantizing the model weights, affect layer-wise representations. We find that these perturbations delay the layer at which conceptual understanding emerges, pushing it toward deeper layers. We hope that the proposed concept and our experimental insights will enhance the understanding of the mechanisms underlying LLMs. Our code is available at https://github.com/Luckfort/CD.
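For the perturbation experiments, a minimal sketch of the two external factors mentioned above (input noise and weight quantization) might look like the following; it reuses the probing pipeline sketched earlier, and both the character-level noise model and the per-tensor 8-bit round-trip quantization are illustrative assumptions rather than the paper's exact procedure.

```python
# Hedged sketch of the two perturbations discussed above: input noise and weight
# quantization. Both choices (character-level noise, per-tensor 8-bit fake
# quantization) are illustrative assumptions, not the paper's exact procedure.
import random
import torch

def add_character_noise(text: str, rate: float = 0.1) -> str:
    """Insert a random lowercase letter after roughly `rate` of the characters."""
    out = []
    for ch in text:
        out.append(ch)
        if random.random() < rate:
            out.append(random.choice("abcdefghijklmnopqrstuvwxyz"))
    return "".join(out)

@torch.no_grad()
def fake_quantize_weights(model, bits: int = 8) -> None:
    """Round-trip quantize every parameter tensor to `bits` bits (per-tensor scale)."""
    qmax = 2 ** (bits - 1) - 1
    for p in model.parameters():
        scale = p.abs().max() / qmax
        if scale > 0:
            p.copy_((p / scale).round().clamp(-qmax, qmax) * scale)

# Usage idea: corrupt the probe inputs and/or quantize the model in place, rerun the
# layer-wise probes from the earlier sketch, and compare the depth at which probe
# accuracy saturates with and without the perturbation.
noisy_texts = [add_character_noise(t) for t in texts]  # `texts` from the earlier sketch
fake_quantize_weights(model, bits=8)                   # `model` from the earlier sketch
```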