Bridging Context Gaps: Enhancing Comprehension in Long-Form Social Conversations Through Contextualized Excerpts

📅 2024-12-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
In long-term social dialogues, shared dialogue snippets often lack sufficient social context, hindering non-participants’ comprehension of topic evolution and interpersonal nuances. Method: We propose the first context-enhancement paradigm for social dialogue, leveraging large language models (LLMs) to explicitly model sociocognitive elements—including role relationships, affective tone, and shared experiences—to enhance cross-scenario snippet understanding. Contribution/Results: We introduce HSE, the first high-quality, human-annotated dataset for social dialogue context modeling, which reveals systematic biases in LLMs’ social-context reasoning. We further propose targeted fine-tuning and prompt-engineering strategies to mitigate these biases. Dual-axis (subjective and objective) evaluation shows a 23.6% improvement in subjective snippet comprehension scores, an 18.4% gain in downstream task accuracy, and more focused, comprehensive summarization. The HSE dataset is publicly released to advance socially aware AI research.

📝 Abstract
We focus on enhancing comprehension in small-group recorded conversations, which serve as a medium to bring people together and provide a space for sharing personal stories and experiences on crucial social matters. One way to parse and convey information from these conversations is by sharing highlighted excerpts in subsequent conversations. This can help promote a collective understanding of relevant issues, by highlighting perspectives and experiences to other groups of people who might otherwise be unfamiliar with and thus unable to relate to these experiences. The primary challenge that arises then is that excerpts taken from one conversation and shared in another setting might be missing crucial context or key elements that were previously introduced in the original conversation. This problem is exacerbated when conversations become lengthier and richer in themes and shared experiences. To address this, we explore how Large Language Models (LLMs) can enrich these excerpts by providing socially relevant context. We present approaches for effective contextualization to improve comprehension, readability, and empathy. We show significant improvements in understanding, as assessed through subjective and objective evaluations. While LLMs can offer valuable context, they struggle with capturing key social aspects. We release the Human-annotated Salient Excerpts (HSE) dataset to support future work. Additionally, we show how context-enriched excerpts can provide more focused and comprehensive conversation summaries.
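The core idea in the abstract — prepending socially relevant context (speaker roles, emotional tone, earlier references) to an excerpt before sharing it — can be sketched as a simple prompt-construction step. This is a hypothetical illustration, not the paper's actual implementation; the `Excerpt` fields, function name, and prompt wording are all assumptions.

```python
# Hypothetical sketch of excerpt contextualization as described in the
# abstract: given an excerpt and turns from the original conversation,
# build an LLM prompt requesting the missing social context.
# All names and the prompt template are illustrative, not from the paper.

from dataclasses import dataclass, field


@dataclass
class Excerpt:
    speaker: str
    text: str
    # Preceding turns from the original conversation (assumed available).
    source_turns: list = field(default_factory=list)


def build_context_prompt(excerpt: Excerpt, max_turns: int = 5) -> str:
    """Assemble a prompt asking an LLM to supply background a new
    listener needs: who the speaker is, relationships, tone, and any
    earlier events the excerpt references."""
    history = "\n".join(excerpt.source_turns[-max_turns:])
    return (
        "You are given an excerpt from a longer recorded conversation.\n"
        "Write one or two sentences of background a new listener needs:\n"
        "who the speaker is, their relationship to others mentioned,\n"
        "the emotional tone, and any earlier events being referenced.\n\n"
        f"Recent conversation turns:\n{history}\n\n"
        f"Excerpt ({excerpt.speaker}): {excerpt.text}\n\n"
        "Context:"
    )


excerpt = Excerpt(
    speaker="Maya",
    text="That's exactly what happened to my brother too.",
    source_turns=["Sam: My landlord raised the rent twice in one year."],
)
prompt = build_context_prompt(excerpt)
print(prompt)
```

The prompt would then be sent to an LLM, and the returned context prepended to the excerpt before it is shared in a new conversation.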
Problem

Research questions and friction points this paper is trying to address.

Context Understanding
Long Conversations
Background Information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Context Enhancement
Social Understanding Dataset