AI Summary
This work proposes two novel approaches grounded in pragmatic inference to enhance the moral sensitivity of large language models, enabling them to identify and correct moral errors in input texts. Departing from conventional methods that rely on surface-level semantic diversity, this study adopts a unified pragmatic reasoning framework that models moral judgments by inferring speakers' underlying intentions and implicit presuppositions. By integrating targeted model fine-tuning with carefully designed reasoning mechanisms, the proposed methods achieve significant performance gains over existing approaches across multiple morality-related benchmarks. The results demonstrate both the effectiveness and the novelty of the framework in strengthening the model's capacity for moral diagnosis and rectification.
Abstract
Moral sensitivity is fundamental to human moral competence, as it guides individuals in regulating everyday behavior. Although many approaches seek to align large language models (LLMs) with human moral values, making them morally sensitive has remained extremely challenging. In this paper, we take a step toward answering the question: how can we enhance moral sensitivity in LLMs? Specifically, we propose two pragmatic inference methods that enable LLMs to diagnose morally benign and hazardous input and to correct moral errors, thereby enhancing LLMs' moral sensitivity. A central strength of our pragmatic inference methods is their unified perspective: instead of modeling moral discourses across semantically diverse and complex surface forms, they offer a principled basis for designing pragmatic inference procedures grounded in their inferential loads. Empirical evidence demonstrates that our pragmatic methods enhance moral sensitivity in LLMs and achieve strong performance on representative morality-relevant benchmarks.