Reading with Intent -- Neutralizing Intent

📅 2025-01-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Retrieval-augmented generation (RAG) systems suffer performance degradation in downstream tasks due to affective diversity—e.g., sarcasm, excitement—in internet-sourced texts. Method: We propose an emotion-neutralization preprocessing framework tailored for reading comprehension, comprising (i) a high-quality synthetic dataset covering 11 distinct emotions, and (ii) a trainable emotion translation model enabling controllable, task-oriented neutralization—the first such approach for reading comprehension. Contribution/Results: We introduce the first emotion-controllable rewriting paradigm explicitly optimized for reading comprehension. Extensive evaluation—including LLM fine-tuning and human assessment—demonstrates a ~3% average improvement in overall task performance and substantial mitigation of bias from sarcastic or emotionally charged contexts. Human evaluation further confirms high fidelity in emotion transformation and strong semantic consistency post-neutralization.

📝 Abstract
Queries to large language models (LLMs) can be divided into two parts: the instruction/question and the accompanying context. The context for retrieval-augmented generation (RAG) systems in most benchmarks comes from Wikipedia or Wikipedia-like texts, which are written in a neutral and factual tone. However, when RAG systems retrieve internet-based content, they encounter text with diverse tones and linguistic styles, introducing challenges for downstream tasks. The Reading with Intent task addresses this issue by evaluating how varying tones in context passages affect model performance. Building on prior work that focused on sarcasm, we extend this paradigm by constructing a dataset where context passages are transformed into 11 distinct emotions using an improved synthetic data generation approach. Using this dataset, we train an emotion translation model to systematically adapt passages to specified emotional tones. Human evaluation shows that the LLM fine-tuned as the emotion translator benefited from the synthetically generated data. Finally, the emotion translator is used in the Reading with Intent task to transform the passages to a neutral tone. By neutralizing the passages, it mitigates the challenges posed by sarcastic passages and improves overall results on this task by about 3%.
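The pipeline the abstract describes, neutralizing each retrieved passage with an emotion translator before handing the context to the reader LLM, can be sketched as below. This is a minimal illustration, not the paper's code: the prompt wording, function names, and the stub models standing in for the emotion translator and reader are all assumptions.

```python
# Hedged sketch of the neutralize-then-read pipeline from the abstract.
# Real model calls are replaced by toy stubs so the example runs end to end.

def build_translation_prompt(passage: str, target_emotion: str = "neutral") -> str:
    """Prompt for the emotion-translation model (illustrative wording only)."""
    return (
        f"Rewrite the passage below in a {target_emotion} tone, "
        f"preserving all factual content:\n\n{passage}"
    )

def answer_with_neutralized_context(question, passages, translate, reader):
    """Neutralize each retrieved passage, then pass the result to the reader LLM."""
    neutral = [translate(build_translation_prompt(p)) for p in passages]
    context = "\n\n".join(neutral)
    return reader(f"Context:\n{context}\n\nQuestion: {question}")

# Toy stand-ins: an identity "translator" and a reader that echoes the question.
translate_stub = lambda prompt: prompt.split("\n\n", 1)[1]
reader_stub = lambda prompt: prompt.splitlines()[-1]

print(answer_with_neutralized_context(
    "Who wrote it?",
    ["Oh sure, a *totally* serious passage."],
    translate_stub, reader_stub))
# → Question: Who wrote it?
```

In practice, `translate` would wrap the fine-tuned emotion translation model and `reader` the downstream LLM; the key design point is that neutralization happens per passage, before the context is assembled.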
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Tone Processing
Bias Mitigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Emotion-Transformed Passage Dataset
Emotion Translation Model
Reading with Intent Task