🤖 AI Summary
This work addresses a notable gap in current language model editing techniques: they target factual knowledge and often fail to accommodate diverse human moral judgments. Focusing for the first time on the editability of moral reasoning, the study introduces CounterMoral, a benchmark dataset that pairs multiple editing methods with a multidimensional ethical evaluation framework. Through systematic assessment across distinct moral systems, the research reveals significant limitations of existing approaches on value-alignment tasks. By establishing the first structured foundation for evaluating and developing controllable, ethically aligned language models, this work advances the pursuit of models that responsibly reflect human moral diversity.
📝 Abstract
Recent advances in language model technology have significantly improved the ability to edit factual information. Yet modifying moral judgments, a crucial aspect of aligning models with human values, has received less attention. In this work, we introduce CounterMoral, a benchmark dataset designed to assess how well current model editing techniques modify moral judgments across diverse ethical frameworks. We apply a range of editing techniques to multiple language models and evaluate their performance. Our findings inform the evaluation and development of ethically aligned language models.