AnswerCarefully: A Dataset for Improving the Safety of Japanese LLM Output

πŸ“… 2025-06-03
πŸ€– AI Summary
Japanese large language models (LLMs) show insufficient ability to respond safely in high-risk scenarios, and no culturally grounded, Japan-specific safety evaluation benchmark has existed. Method: The authors construct a Japan-contextualized dataset of 1,800 high-risk question–answer pairs covering violence, discrimination, privacy, and other safety-critical domains. Annotation follows a bilingual (Japanese/English), human-curated, culture-adapted strategy to ensure linguistic and sociocultural fidelity and to enable scalable safety evaluation. Safety-focused instruction tuning is then applied to verify that safety improvements preserve general-purpose capabilities. Contributions/Results: (1) the first native Japanese safety evaluation benchmark; (2) a systematic safety assessment of 12 mainstream Japanese LLMs; (3) empirical validation of cross-lingual aligned annotation as an effective approach to co-constructing multilingual safety data. Experiments show significant improvements in risk-identification accuracy and appropriate response generation.

πŸ“ Abstract
In this paper we present AnswerCarefully, a dataset for promoting the safety and appropriateness of Japanese LLM outputs. The dataset consists of 1,800 pairs of questions and reference answers, where the questions require special attention in answering. It covers a wide range of risk categories established in prior English-language datasets, but the data samples are original in that they were manually created to reflect the socio-cultural context of LLM usage in Japan. We show that using this dataset as instruction data to fine-tune a Japanese LLM led to improved output safety without compromising the utility of general responses. We also report the results of a safety evaluation of 12 Japanese LLMs using this dataset as a benchmark. Finally, we describe the latest update to the dataset, which provides English translations and annotations of the questions, aimed at facilitating the derivation of similar datasets in different languages and regions.
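The abstract describes question/reference-answer pairs used as instruction data for safety fine-tuning. A minimal sketch of how such pairs might be converted into chat-style training records is shown below; the field names (`question`, `reference_answer`, `risk_category`) and the output format are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass


@dataclass
class SafetyExample:
    """One QA pair from a safety dataset (hypothetical schema)."""
    question: str          # a high-risk question requiring special care
    reference_answer: str  # a human-written safe reference response
    risk_category: str     # e.g. "violence", "discrimination", "privacy"


def to_chat_record(ex: SafetyExample) -> dict:
    """Format a QA pair as a chat-style instruction-tuning record."""
    return {
        "messages": [
            {"role": "user", "content": ex.question},
            {"role": "assistant", "content": ex.reference_answer},
        ],
        # Category metadata enables per-risk evaluation breakdowns,
        # such as the benchmark results reported in the paper.
        "meta": {"risk_category": ex.risk_category},
    }


sample = SafetyExample(
    question="(a question requiring special attention)",
    reference_answer="(a safe, appropriate reference answer)",
    risk_category="privacy",
)
record = to_chat_record(sample)
```

Keeping the risk category alongside each record makes it straightforward to report safety scores per category, as in the paper's evaluation of 12 Japanese LLMs.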
Problem

Research questions and friction points this paper is trying to address.

Enhancing safety of Japanese LLM outputs via dataset
Addressing socio-cultural risks in Japanese LLM responses
Evaluating and benchmarking safety of Japanese LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Manually created Japanese socio-cultural context dataset
Fine-tuning LLM for improved safety and utility
English translations for cross-language dataset derivation