Operationalising the Superficial Alignment Hypothesis via Task Complexity

📅 2026-02-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the ambiguity surrounding the superficial alignment hypothesis by proposing a formal measure of task complexity: the minimal program length required to achieve a target level of performance. Interpreted through this lens, the hypothesis claims that pre-trained models drastically reduce the complexity of reaching high performance on many tasks. The study shows that pre-training makes strong performance accessible, but reaching it can still require programs gigabytes in length; post-training then collapses the required program length by several orders of magnitude, down to the scale of a few kilobytes, for the same performance. Empirical analyses across diverse tasks, including mathematical reasoning, machine translation, and instruction following, substantiate this framework. The results provide a quantifiable and comparable theoretical foundation for understanding how pre-training and post-training jointly lower the descriptive complexity needed for task execution.
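
A minimal formalization consistent with this description (the notation C, perf, and the composition M ∘ P below are illustrative assumptions, not the paper's own symbols): task complexity is the length of the shortest program reaching target performance on a task, and the conditional variant counts only the adaptation program given a fixed pre-trained model.

```latex
% Task complexity: length of the shortest program P whose performance
% on task t reaches the target level \pi (notation is illustrative).
\[
  C(t, \pi) \;=\; \min_{P \,:\, \mathrm{perf}(P,\, t) \,\geq\, \pi} |P|
\]
% Conditioned on a pre-trained model M, only the adaptation program P
% (a prompt, fine-tuning data, a weight delta, ...) is counted:
\[
  C(t, \pi \mid M) \;=\; \min_{P \,:\, \mathrm{perf}(M \circ P,\, t) \,\geq\, \pi} |P|
\]
```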

📝 Abstract
The superficial alignment hypothesis (SAH) posits that large language models learn most of their knowledge during pre-training, and that post-training merely surfaces this knowledge. The SAH, however, lacks a precise definition, which has led to (i) different and seemingly orthogonal arguments supporting it, and (ii) important critiques of it. We propose a new metric called task complexity: the length of the shortest program that achieves a target performance on a task. In this framework, the SAH simply claims that pre-trained models drastically reduce the complexity of achieving high performance on many tasks. Our definition unifies prior arguments supporting the SAH, interpreting them as different strategies to find such short programs. Experimentally, we estimate the task complexity of mathematical reasoning, machine translation, and instruction following; we then show that these complexities can be remarkably low when conditioned on a pre-trained model. Further, we find that pre-training enables access to strong performance on our tasks, but it can require programs gigabytes in length to access them. Post-training, on the other hand, collapses the complexity of reaching this same performance by several orders of magnitude. Overall, our results highlight that task adaptation often requires surprisingly little information -- often just a few kilobytes.
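
One way to make the conditional notion concrete is to upper-bound it: any adaptation artifact (a prompt with in-context examples, fine-tuning data, a weight delta) whose measured performance clears the target threshold gives an upper bound on conditional task complexity via its encoded length. The sketch below is an illustrative assumption of such an estimate, not the paper's method; the prompt contents and the use of zlib compression are hypothetical choices.

```python
import zlib

def program_length_bytes(adaptation_text: str) -> int:
    """Upper bound on the length of a 'program' that adapts a fixed
    pre-trained model to a task, measured as the compressed size of
    the adaptation artifact (here: a prompt with in-context examples)."""
    raw = adaptation_text.encode("utf-8")
    return len(zlib.compress(raw, 9))

# Hypothetical adaptation artifact: a short instruction plus a few
# worked examples prepended to every query.
prompt = (
    "Translate English to French. Examples:\n"
    "cat -> chat\n"
    "house -> maison\n"
    "to run -> courir\n"
)

print(f"adaptation length: {program_length_bytes(prompt)} bytes")
# A fuller estimate would sweep candidate adaptations and keep the
# shortest one whose measured task performance reaches the target.
```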
Problem

Research questions and friction points this paper is trying to address.

Superficial Alignment Hypothesis
task complexity
pre-training
post-training
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

task complexity
superficial alignment hypothesis
pre-training
post-training
program length