Emergent Abilities in Large Language Models: A Survey

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
The “emergent abilities” of large language models (LLMs) suffer from ambiguous definitions, unclear attribution, and inconsistent evaluation criteria, hindering rigorous scientific understanding and risk assessment. Method: This survey conducts a systematic investigation via literature synthesis, scaling-law analysis, task-complexity analysis, prompt-engineering ablation, and modeling of inference-time search. Contribution/Results: It establishes a unified conceptual framework and multidimensional criteria for identifying emergent capabilities; reveals synergistic effects of key controllable factors (including parameter count, training loss, quantization precision, self-reflection, and search strategies) on capability emergence; and identifies novel safety risks accompanying emergence, such as deceptive outputs and reward hacking. Finally, it proposes an evaluation paradigm and governance recommendations targeting AGI-level predictability, interpretability, and controllability.

📝 Abstract
Large Language Models (LLMs) are leading a new technological revolution as one of the most promising research streams toward artificial general intelligence. The scaling of these models, accomplished by increasing the number of parameters and the magnitude of the training datasets, has been linked to various so-called emergent abilities that were previously unobserved. These emergent abilities, ranging from advanced reasoning and in-context learning to coding and problem-solving, have sparked an intense scientific debate: Are they truly emergent, or do they simply depend on external factors, such as training dynamics, the type of problems, or the chosen metric? What underlying mechanism causes them? Despite their transformative potential, emergent abilities remain poorly understood, leading to misconceptions about their definition, nature, predictability, and implications. In this work, we shed light on emergent abilities by conducting a comprehensive review of the phenomenon, addressing both its scientific underpinnings and real-world consequences. We first critically analyze existing definitions, exposing inconsistencies in conceptualizing emergent abilities. We then explore the conditions under which these abilities appear, evaluating the role of scaling laws, task complexity, pre-training loss, quantization, and prompting strategies. Our review extends beyond traditional LLMs and includes Large Reasoning Models (LRMs), which leverage reinforcement learning and inference-time search to amplify reasoning and self-reflection. However, emergence is not inherently positive. As AI systems gain autonomous reasoning capabilities, they also develop harmful behaviors, including deception, manipulation, and reward hacking. We highlight growing concerns about safety and governance, emphasizing the need for better evaluation frameworks and regulatory oversight.
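The abstract asks whether emergent abilities are truly emergent or an artifact of the chosen metric. A minimal sketch of that argument, under illustrative assumptions not taken from the paper: suppose per-token accuracy improves smoothly with model scale, but the task is scored by exact match over a multi-token answer. The numbers below (a logistic curve centered at 10^10 parameters, a 10-token answer) are hypothetical and chosen only to make the effect visible.

```python
import math

def per_token_accuracy(log_params: float) -> float:
    """Hypothetical smooth scaling curve: per-token accuracy rises
    gradually with log10(parameter count)."""
    # Illustrative logistic curve centered at 10^10 parameters.
    return 1.0 / (1.0 + math.exp(-(log_params - 10.0) * 2.0))

def exact_match(log_params: float, answer_len: int = 10) -> float:
    """Exact match requires every token correct, so the smooth
    per-token curve is raised to the answer length."""
    return per_token_accuracy(log_params) ** answer_len

# The per-token curve improves gradually, but the exact-match score
# stays near zero and then climbs abruptly: a metric-induced "jump".
for log_n in [8, 9, 10, 11, 12]:
    p = per_token_accuracy(log_n)
    em = exact_match(log_n)
    print(f"10^{log_n} params: per-token={p:.3f}, exact-match={em:.3f}")
```

Under these assumptions, nothing discontinuous happens inside the model; the apparent jump comes entirely from the nonlinearity of the metric, which is one side of the debate the survey reviews.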
Problem

Research questions and friction points this paper is trying to address.

Understanding emergent abilities in Large Language Models.
Exploring conditions and mechanisms behind emergent abilities.
Addressing safety concerns and governance in AI systems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analysis of how scaling parameters and training data relates to emergent abilities
Comprehensive review unifying definitions and identification criteria for emergence
Coverage of Large Reasoning Models, which add reinforcement learning and inference-time search
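The abstract notes that Large Reasoning Models amplify reasoning through inference-time search. One common form of such search is self-consistency: sample several reasoning paths and take a majority vote over the final answers. The sketch below is an assumption-laden illustration, not the paper's method; `sample_answer` is a hypothetical stand-in for one stochastic model completion that is correct 40% of the time.

```python
import random
from collections import Counter

def sample_answer(rng: random.Random) -> str:
    """Hypothetical stand-in for one stochastic model completion:
    returns the correct answer "42" with probability 0.4, else a
    distractor."""
    return "42" if rng.random() < 0.4 else rng.choice(["41", "43", "7"])

def self_consistency(n_samples: int, seed: int = 0) -> str:
    """Inference-time search via majority vote: sample n answers
    and return the most common one."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(self_consistency(1))    # a single sample is often a distractor
print(self_consistency(301))  # the vote concentrates on the modal answer
```

Because the correct answer is the single most likely outcome even though each sample is usually wrong, accuracy rises sharply with the number of samples, which is one reason inference-time compute can look like a distinct axis of capability emergence.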