🤖 AI Summary
Current AI systems exhibit low interpretability and credibility in the ethical evaluation of human behavior because they lack explicit modeling of social norms. This paper addresses the moral trade-off challenge that arises when conflicting social norms coexist (e.g., courage vs. self-preservation) by proposing ClarityEthic, the first framework to formalize normative conflict as a core mechanism in ethical reasoning. Methodologically, it integrates large language model–driven norm generation, social norm embedding, contrastive learning–enhanced moral attribution, and multi-dimensional value orientation prediction. ClarityEthic achieves significant improvements over strong baselines across multiple ethical evaluation benchmarks, and human evaluations confirm that the generated norms are plausible and explanatory, substantially enhancing decision transparency and model trustworthiness.
📝 Abstract
Human behaviors are often guided or constrained by social norms, which are defined as shared, commonsense rules. For example, underlying an action "*report a witnessed crime*" are social norms that inform our conduct, such as "*It is expected to be brave to report crimes*". Current AI systems assess the valence (i.e., support or oppose) of human actions through large-scale data training that is not grounded in explicit norms, which makes them difficult to explain and thus untrustworthy. Emulating human assessors by considering social norms can help AI models better understand and predict valence. When multiple norms come into play, conflicts among them can create tension and directly influence human behavior. For example, when deciding whether to "*report a witnessed crime*", one may balance *bravery* against *self-protection*. In this paper, we introduce *ClarityEthic*, a novel ethical assessment approach that enhances valence prediction and explanation by generating the conflicting social norms behind human actions, and that strengthens the moral reasoning capabilities of language models using a contrastive learning strategy. Extensive experiments demonstrate that our method outperforms strong baseline approaches, and human evaluations confirm that the generated social norms provide plausible explanations for the assessment of human behaviors.
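The abstract mentions a contrastive learning strategy for strengthening moral reasoning but gives no details. As a rough illustration only (not the paper's actual implementation), an InfoNCE-style contrastive objective could pull an action's embedding toward the norm consistent with its gold moral judgment and push it away from embeddings of conflicting norms; the function below is a minimal stdlib-only sketch under that assumption:

```python
import math

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss over embedding vectors (plain lists of floats).

    anchor:    embedding of the action (hypothetical; vectors would normally
               come from a sentence encoder, which is elided here)
    positive:  embedding of the norm consistent with the gold moral judgment
    negatives: embeddings of conflicting norms to push the anchor away from
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def cosine(u, v):
        return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

    # Temperature-scaled similarity to the positive norm...
    pos = math.exp(cosine(anchor, positive) / temperature)
    # ...normalized against similarities to all conflicting norms.
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))
```

With this shape of loss, an anchor already aligned with its supporting norm incurs near-zero loss, while an anchor closer to a conflicting norm is penalized heavily; minimizing it separates the two norm directions in embedding space.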