🤖 AI Summary
It is commonly assumed that small-scale large language models (LLMs) lack the capacity for moral self-correction without gradient-based fine-tuning.
Method: This work investigates whether a 3.8B-parameter LLM can achieve moral self-correction solely from natural language feedback—without parameter updates—by introducing a synergistic framework combining lightweight safety-aligned fine-tuning and refined chain-of-thought (CoT) prompting, alongside a three-tier evaluation suite assessing social norm understanding, bias identification, and moral correction capability.
Contribution/Results: We provide the first empirical evidence that, after minimal safety alignment, the 3.8B model achieves moral self-correction performance comparable to that of models with ≥7B parameters. Smaller models' limitations stem not from inherent incapacity but from insufficient social norm modeling and CoT self-explanation fidelity. Moreover, models of all sizes exhibit significant sensitivity to implicit values embedded in instructions, underscoring the critical importance of input-side alignment. These findings challenge prevailing assumptions about scale-dependent moral reasoning and highlight prompting and alignment levers for enhancing ethical behavior in compact LLMs.
📝 Abstract
Self-correction is one of the most remarkable emergent capabilities of Large Language Models (LLMs), enabling an LLM to revise an inappropriate output given natural language feedback that describes the problems with that output. Moral self-correction is a post-hoc approach that corrects unethical generations without requiring a gradient update, making it both computationally lightweight and able to preserve the model's language modeling ability. Previous works have shown that LLMs can self-debias, and it has been reported that small models, i.e., those with fewer than 22B parameters, are not capable of moral self-correction. However, there is no direct evidence as to why such smaller models fall short of moral self-correction, though previous research hypothesizes that larger models are better at following instructions and understanding abstract social norms. In this paper, we empirically validate this hypothesis in the context of social stereotyping through meticulous prompting. Our experimental results indicate that (i) surprisingly, 3.8B LLMs with proper safety alignment fine-tuning can achieve very good moral self-correction performance, highlighting the significant effects of safety alignment; and (ii) small LLMs are indeed weaker than larger-scale models at comprehending social norms and at self-explanation through CoT, but LLMs at all scales show poor self-correction performance when given unethical instructions.
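The post-hoc self-correction protocol described in the abstract can be sketched as a simple two-turn prompting loop: the model answers, receives natural-language feedback about the problem, and is asked to revise, with no parameter update. The sketch below is illustrative only; `generate` is a hypothetical stand-in (stubbed here with canned strings) for whatever LLM call a real system would use, and the prompt wording is not taken from the paper.

```python
def generate(prompt: str) -> str:
    # Hypothetical stub standing in for an LLM call; a real system
    # would query a chat model here. Canned responses keep the
    # example self-contained and runnable.
    if "Please revise" in prompt:
        return "Revised answer that avoids the stereotype."
    return "Initial answer (possibly stereotyped)."


def moral_self_correct(question: str, feedback: str) -> str:
    """Two-turn moral self-correction: no gradient update, only a
    follow-up prompt carrying natural-language feedback."""
    first = generate(question)
    revision_prompt = (
        f"Question: {question}\n"
        f"Your previous answer: {first}\n"
        f"Feedback: {feedback}\n"
        "Please revise your answer to address the feedback."
    )
    return generate(revision_prompt)


revised = moral_self_correct(
    "Who is more likely to be a nurse, a man or a woman?",
    "Your answer relied on a gender stereotype; "
    "answer without assuming gender.",
)
print(revised)  # → Revised answer that avoids the stereotype.
```

Because the correction happens entirely at the input side, this is exactly the regime the paper probes: whether a 3.8B model can make use of such feedback at all, and how sensitive models are to the values embedded in the instruction itself.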