FINEST: Improving LLM Responses to Sensitive Topics Through Fine-Grained Evaluation

📅 2026-03-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the tendency of large language models to produce overly cautious or vague responses on sensitive topics, a challenge inadequately captured by existing evaluation methods that struggle to balance safety and usefulness. The authors propose FINEST, a fine-grained evaluation framework that, for the first time, decomposes errors in sensitive responses into three distinct categories: content, logic, and appropriateness. Building upon this taxonomy, they introduce an interpretable optimization pipeline that leverages category-specific scoring and rationale-based feedback. Experimental results on a Korean dataset of sensitive questions demonstrate that the proposed approach significantly reduces error rates across all categories, with the proportion of appropriateness-related errors decreasing by up to 33.09%, substantially outperforming an unguided refinement baseline.

📝 Abstract
Large Language Models (LLMs) often generate overly cautious and vague responses on sensitive topics, sacrificing helpfulness for safety. Existing evaluation frameworks lack systematic methods to identify and address specific weaknesses in responses to sensitive topics, making it difficult to improve both safety and helpfulness simultaneously. To address this, we introduce FINEST, a FINE-grained response evaluation taxonomy for Sensitive Topics, which breaks down helpfulness and harmlessness into errors across three main categories: Content, Logic, and Appropriateness. Experiments on a Korean sensitive-question dataset demonstrate that our score- and error-based improvement pipeline, guided by FINEST, significantly improves model responses across all three categories, outperforming refinement without guidance. Notably, score-based improvement -- providing category-specific scores and justifications -- yields the most significant gains, reducing the error sentence ratio for Appropriateness by up to 33.09%. This work lays the foundation for a more explainable and comprehensive evaluation and improvement of LLM responses to sensitive questions.
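The score-based improvement loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `evaluate` and `refine` functions stand in for LLM judge and refiner calls, the 1-5 score scale, threshold, and all function names are assumptions, and the toy heuristic only mimics the shape of category-specific scoring with rationales.

```python
# Hypothetical sketch of a FINEST-style score-based refinement loop:
# a judge returns per-category scores (Content, Logic, Appropriateness)
# with rationales, and a refiner revises the response until every
# category clears a threshold. All names and heuristics are illustrative.
from dataclasses import dataclass

CATEGORIES = ("Content", "Logic", "Appropriateness")


@dataclass
class Evaluation:
    scores: dict       # category -> score on an assumed 1-5 scale
    rationales: dict   # category -> judge's explanation for the score


def evaluate(response: str) -> Evaluation:
    """Stand-in for an LLM judge; here, a toy evasiveness check."""
    vague = "cannot answer" in response.lower()
    score = 2 if vague else 5
    note = "response is evasive and unhelpful" if vague else "ok"
    return Evaluation(
        scores={c: score for c in CATEGORIES},
        rationales={c: note for c in CATEGORIES},
    )


def refine(response: str, ev: Evaluation) -> str:
    """Stand-in for an LLM refiner conditioned on scores + rationales."""
    return response.replace(
        "I cannot answer that.",
        "Here is a balanced overview of the issue.",
    )


def improve(response: str, threshold: int = 4, max_rounds: int = 3) -> str:
    """Iteratively refine until all category scores meet the threshold."""
    for _ in range(max_rounds):
        ev = evaluate(response)
        if min(ev.scores.values()) >= threshold:
            break
        response = refine(response, ev)
    return response
```

In this toy run, an evasive response is rewritten once and then passes all three category checks; in the paper's setting, both functions would be LLM calls and the rationales would feed back into the refinement prompt.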
Problem

Research questions and friction points this paper is trying to address.

sensitive topics
LLM responses
helpfulness
safety
evaluation framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

fine-grained evaluation
sensitive topics
LLM safety
helpfulness-harmlessness trade-off
response refinement