🤖 AI Summary
This study investigates the robustness of large language models (LLMs) to structural interventions—specifically, layer deletion and adjacent layer swapping—which perturb the canonical transformer architecture.
Method: We conduct systematic layer-wise intervention analysis, cross-model comparison, hidden-state trajectory tracking, and quantitative measurement of vocabulary alignment across eight prominent LLMs.
Contribution/Results: Despite severe architectural perturbations, all models retain 72–95% prediction accuracy. Crucially, they consistently exhibit a four-stage inference dynamic: detokenization → feature engineering → prediction ensembling → residual sharpening. This is the first systematic identification of a cross-model, stage-wise structural organization in LLM inference, challenging the “black-box” paradigm. Deeper models demonstrate greater robustness, and the final stage significantly sharpens next-token probability distributions while suppressing noise. These findings establish a novel theoretical framework for interpretability research and principled model architecture design.
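The “alignment with vocabulary space” measured in the later stages is in the spirit of a logit-lens-style probe: project each layer’s hidden state through a shared unembedding and see how early, and how confidently, the next-token distribution settles. A minimal toy sketch, with made-up hidden states, a hypothetical 2×3 unembedding matrix `W_U`, and a three-word vocabulary (none of these values come from the paper):

```python
import math

VOCAB = ["the", "cat", "sat"]
# Hypothetical hidden states after each of 4 layers (2-dim residual stream).
hiddens = [[0.1, 0.0], [0.4, 0.1], [0.9, 0.3], [1.2, 0.4]]
# Hypothetical unembedding matrix: 2-dim hidden state -> 3 vocab logits.
W_U = [[1.0, -0.5, 0.2],
       [0.3,  0.8, -0.1]]

def unembed(h):
    # Project a hidden state into vocabulary logits (h @ W_U).
    return [sum(hi * wij for hi, wij in zip(h, col)) for col in zip(*W_U)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

# The final layer's distribution is the reference prediction.
final = softmax(unembed(hiddens[-1]))
final_top = max(range(len(VOCAB)), key=final.__getitem__)

# Probe every intermediate layer: does its top token already agree
# with the final layer, and how peaked is its distribution?
for i, h in enumerate(hiddens):
    p = softmax(unembed(h))
    top = max(range(len(VOCAB)), key=p.__getitem__)
    agree = "match" if top == final_top else "differ"
    print(f"layer {i+1}: top={VOCAB[top]!r} ({agree}), p_max={max(p):.2f}")
```

In this toy, `p_max` grows monotonically across layers, mimicking the reported sharpening of the token distribution in the final stage.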
📝 Abstract
We demonstrate and investigate the remarkable robustness of Large Language Models by deleting and swapping adjacent layers. We find that both deletion and swapping interventions retain 72–95% of the original model's prediction accuracy without fine-tuning, and that models with more layers exhibit greater robustness. Based on the results of the layer-wise interventions and further experiments, we hypothesize the existence of four universal stages of inference across eight different models: detokenization, feature engineering, prediction ensembling, and residual sharpening. The first stage integrates local information, lifting raw token representations into higher-level contextual representations. The second stage iteratively refines task- and entity-specific features. The third stage begins with a phase transition in the second half of the model, where hidden representations align more closely with the vocabulary space due to specialized model components. Finally, the last layer sharpens the next-token distribution by eliminating obsolete features that add noise to the prediction.
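The two interventions can be illustrated on a toy residual-stream model, where each layer adds a bounded update to the hidden state, so removing or reordering a layer perturbs rather than destroys the computation. This is a self-contained sketch with invented layer functions, not the paper's code or a real LLM:

```python
import math

def make_layer(scale):
    # Each "layer" writes a bounded update into the residual stream,
    # mirroring the additive residual connection in transformers.
    return lambda h: h + scale * math.tanh(h)

# An 8-layer toy "model" with increasing per-layer update scales.
layers = [make_layer(0.1 * i) for i in range(1, 9)]

def forward(layers, h0=1.0):
    h = h0
    for layer in layers:
        h = layer(h)
    return h

baseline = forward(layers)

# Intervention 1: delete layer 4 (zero-based index 3).
deleted = layers[:3] + layers[4:]

# Intervention 2: swap the adjacent layers 4 and 5.
swapped = layers[:3] + [layers[4], layers[3]] + layers[5:]

print(f"baseline={baseline:.3f}, "
      f"deleted={forward(deleted):.3f}, "
      f"swapped={forward(swapped):.3f}")
```

Because updates are additive on a shared residual stream, the intervened outputs stay close to the baseline, which is the intuition behind the robustness the paper measures on real models.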