DxHF: Providing High-Quality Human Feedback for LLM Alignment via Interactive Decomposition

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM alignment interfaces require users to directly compare lengthy texts, imposing high cognitive load and making preference judgments susceptible to uncertainty. To address this, we propose a decomposition-based feedback framework: long responses are automatically decomposed into independent semantic units (claims), logical dependencies among claims are explicitly modeled, and a visual interactive interface supports fine-grained, claim-level preference annotation. This transforms abstract text evaluation into structured, traceable, claim-level judgments, substantially reducing cognitive burden. Experiments demonstrate a 5.0% absolute improvement in overall feedback accuracy, with especially pronounced gains under high-uncertainty conditions. Although per-feedback latency increases by 18 seconds, the error rate decreases by 23%, yielding a significantly improved feedback quality–efficiency trade-off. This work establishes a novel paradigm for eliciting trustworthy, interpretable human feedback for LLM alignment.

📝 Abstract
Human preferences are widely used to align large language models (LLMs) through methods such as reinforcement learning from human feedback (RLHF). However, current user interfaces require annotators to compare text paragraphs, which is cognitively challenging when the texts are long or unfamiliar. This paper contributes by studying the decomposition principle as an approach to improving the quality of human feedback for LLM alignment. This approach breaks the text down into individual claims instead of asking annotators to directly compare two long-form text responses. Based on this principle, we build a novel user interface, DxHF. It enhances the comparison process by showing decomposed claims, visually encoding the relevance of claims to the conversation, and linking similar claims. This allows users to skim through key information and identify differences for better and quicker judgment. Our technical evaluation shows evidence that decomposition generally improves feedback accuracy with respect to the ground truth, particularly for users with uncertainty. A crowdsourcing study with 160 participants indicates that using DxHF improves feedback accuracy by an average of 5%, although it increases the average feedback time by 18 seconds. Notably, accuracy is significantly higher in situations where users have less certainty. The findings of the study highlight the potential of HCI as an effective method for improving human-AI alignment.
Problem

Research questions and friction points this paper is trying to address.

Improving human feedback quality for LLM alignment
Reducing cognitive load in comparing long text paragraphs
Enhancing feedback accuracy via interactive decomposition interface
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes text into individual claims
Visual encoding for claim relevance
Links similar claims for comparison