SWI: Speaking with Intent in Large Language Models

📅 2025-03-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) often lack explicit intent representation and high-level planning during reasoning, leading to suboptimal coherence, interpretability, and factual consistency. Method: We propose “Speaking with Intent” (SWI), the first approach to formally integrate cognitive-scientific notions of *intent* into LLM inference—requiring models to explicitly generate interpretable, high-level intents prior to multi-step reasoning and text generation, serving as anchoring guidance for subsequent steps. SWI is implemented via prompt engineering, requiring no fine-tuning or additional training and remaining compatible with mainstream LLM architectures. Contribution/Results: Extensive experiments demonstrate SWI’s significant gains over baselines—including Chain-of-Thought and Plan-and-Solve—across mathematical reasoning, question answering, and summarization tasks. Notably, it improves summary factuality and reduces hallucination. Human evaluation confirms that SWI-generated intents are coherent, effective, and highly interpretable, establishing a novel cognition-informed paradigm for reasoning enhancement.

Technology Category

Application Category

📝 Abstract
Intent, typically clearly formulated and planned, functions as a cognitive framework for reasoning and problem-solving. This paper introduces the concept of Speaking with Intent (SWI) in large language models (LLMs), where the explicitly generated intent encapsulates the model's underlying intention and provides high-level planning to guide subsequent analysis and communication. By emulating deliberate and purposeful thoughts in the human mind, SWI is hypothesized to enhance the reasoning capabilities and generation quality of LLMs. Extensive experiments on mathematical reasoning benchmarks consistently demonstrate the superiority of Speaking with Intent over Baseline (i.e., generation without explicit intent). Moreover, SWI outperforms answer-trigger prompting methods Chain-of-Thought and Plan-and-Solve and maintains competitive performance with the strong method ARR (Analyzing, Retrieving, and Reasoning). Additionally, the effectiveness and generalizability of SWI are solidified on reasoning-intensive question answering (QA) and text summarization benchmarks, where SWI brings consistent improvement to the Baseline generation. In text summarization, SWI-generated summaries exhibit greater accuracy, conciseness, and factual correctness, with fewer hallucinations. Furthermore, human evaluations verify the coherence, effectiveness, and interpretability of the intent produced by SWI. This proof-of-concept study creates a novel avenue for enhancing LLMs' reasoning abilities with cognitive notions.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM reasoning via explicit intent generation
Improving generation quality in QA and summarization
Reducing hallucinations in text summarization outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explicit intent generation guides LLM reasoning
SWI enhances accuracy and reduces hallucinations
Human-like cognitive framework improves generation quality
🔎 Similar Papers
No similar papers found.