🤖 AI Summary
This work investigates the existence, quantifiability, and functional mechanisms of “surface-level knowledge” in large language model (LLM) alignment. Specifically, it addresses whether alignment relies on knowledge that requires no deep causal reasoning—i.e., knowledge accessible via lightweight token remapping alone. We formally define surface-level knowledge and propose a knowledge isolation framework comprising token-selection analysis, shallow-weight perturbation modeling, and attribution decomposition. Our analysis reveals a dual-alignment structure: surface-level knowledge predominantly governs safety and detoxification tasks, whereas deep knowledge underpins complex reasoning. Empirically, surface-level knowledge accounts for a substantial proportion of alignment performance, enables efficient cross-model transfer, and—critically—allows full recovery of alignment capability even when model weights are corrupted. These findings establish surface-level knowledge as a distinct, measurable, and functionally critical component of LLM alignment.
📝 Abstract
Alignment of large language models (LLMs) with human values and preferences, often achieved through fine-tuning based on human feedback, is essential for ensuring safe and responsible AI behaviors. However, the process typically requires substantial data and computational resources. Recent studies have revealed that alignment might be attainable at lower cost through simpler methods, such as in-context learning. This leads to the question: Is alignment predominantly superficial? In this paper, we delve into this question and provide a quantitative analysis. We formalize the concept of superficial knowledge, defining it as knowledge that can be acquired through simple token restyling, without affecting the model's ability to capture underlying causal relationships between tokens. We propose a method to extract and isolate superficial knowledge from aligned models, focusing on the shallow modifications to the final token selection process. By comparing models augmented only with superficial knowledge to fully aligned models, we quantify the superficial portion of alignment. Our findings reveal that while superficial knowledge constitutes a significant portion of alignment, particularly in safety and detoxification tasks, it is not the whole story. Tasks requiring reasoning and contextual understanding still rely on deeper knowledge. Additionally, we demonstrate two practical advantages of isolated superficial knowledge: (1) it can be transferred between models, enabling efficient offsite alignment of larger models using superficial knowledge extracted from smaller models, and (2) it is recoverable, allowing for the restoration of alignment in compromised models without sacrificing performance.
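As a rough illustration (not the paper's exact procedure), superficial knowledge as a "shallow modification to the final token selection process" can be sketched as a logit offset: the difference between an aligned and an unaligned small model's output logits, applied at decoding time to a larger base model. The toy vectors below stand in for real model outputs; everything here is a hypothetical setup under that assumption:

```python
import numpy as np

def softmax(logits):
    """Convert a logit vector to a probability distribution."""
    z = logits - logits.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)

# Toy vocabulary-sized logit vectors standing in for model outputs.
base_small = rng.normal(size=8)        # small model before alignment
aligned_small = base_small.copy()
aligned_small[3] += 2.0                # alignment boosts a hypothetical "safe" token

base_large = rng.normal(size=8)        # larger model, not yet aligned

# "Superficial knowledge" isolated as a logit offset from the small pair,
# then transferred to the large model's token-selection step.
offset = aligned_small - base_small
p_before = softmax(base_large)
p_after = softmax(base_large + offset)

print(p_before[3], p_after[3])         # probability of the "safe" token rises
```

Because the offset only reshapes the final next-token distribution, it leaves the larger model's internal computation untouched, matching the intuition that superficial knowledge restyles token selection without altering deeper causal reasoning.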