🤖 AI Summary
This study investigates whether large language models (LLMs) can effectively acquire moral reasoning capabilities through existing learning paradigms. We identify poor generalization in ethical judgment as a critical limitation and introduce the concept of the “pragmatic dilemma”: a fundamental misalignment between the pragmatic features of moral discourse (e.g., context dependence, intention inference, value trade-offs) and current semantics-centric training approaches. Grounded in distributional semantics theory, we develop a diagnostic framework that integrates pragmatic analysis with empirical evaluation to systematically assess diverse fine-tuning and prompting strategies. Results demonstrate that state-of-the-art methods improve only surface-level consistency and fail to support robust, generalizable moral reasoning. Our work uncovers a key bottleneck in ethical alignment and establishes both theoretical foundations and methodological pathways for developing pragmatics-aware moral learning paradigms.
📝 Abstract
Ensuring that Large Language Models (LLMs) return responses that are just and adhere to societal values is crucial for their broader application. Prior research has shown that LLMs often perform unsatisfactorily on tasks requiring moral cognizance, such as ethics-based judgments. While current approaches focus on fine-tuning LLMs with curated datasets to improve their capabilities on such tasks, choosing the optimal learning paradigm for enhancing the ethical responses of LLMs remains an open question. In this work, we address a fundamental question: can current learning paradigms enable LLMs to acquire sufficient moral reasoning capabilities? Drawing on distributional semantics theory and the pragmatic nature of moral discourse, our analysis indicates that performance improvements follow a mechanism similar to that of semantic-level tasks and therefore remain constrained by the pragmatic nature of morality latent in discourse, a phenomenon we name the pragmatic dilemma. We conclude that this pragmatic dilemma imposes significant limitations on the generalization ability of current learning paradigms, making it the primary bottleneck for moral reasoning acquisition in LLMs.