Rethinking Semantic Parsing for Large Language Models: Enhancing LLM Performance with Semantic Hints

📅 2024-09-22
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work identifies a critical failure mode of semantic parsing in large language models (LLMs): directly injecting structured logical forms degrades performance—contrary to their consistent benefit in smaller models like BERT. To address this, the authors propose SENSE, a fine-tuning-free, plug-and-play semantic prompting method that embeds lightweight, abstracted semantic hints into input prompts. By avoiding explicit structural output constraints, SENSE mitigates interference with LLMs’ internal reasoning pathways. The approach integrates three core components: abstraction of logical forms, task-adaptive template design, and prompt-based semantic embedding. Evaluated across six benchmarks—including mathematical reasoning, commonsense question answering, and semantic parsing—SENSE achieves an average accuracy improvement of 3.2%, significantly outperforming standard chain-of-thought (CoT) prompting and structured parsing injection baselines.
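The summary above contrasts direct injection of logical forms (which hurts LLMs) with SENSE's lightweight semantic hints. A minimal sketch of the difference is below; the hint wording, function names, and template layout are illustrative assumptions, not the paper's exact prompts.

```python
# Hypothetical sketch of SENSE-style prompting vs. direct parse injection.
# The hint text and templates below are assumptions for illustration;
# the paper's actual templates are task-adaptive and not reproduced here.

def build_baseline_prompt(question: str) -> str:
    """Plain zero-shot prompt with no semantic information."""
    return f"Question: {question}\nAnswer:"

def build_parse_injection_prompt(question: str, logical_form: str) -> str:
    """Baseline that injects an explicit structured parse.
    Per the paper's findings, this tends to *degrade* LLM performance."""
    return (f"Question: {question}\n"
            f"Logical form: {logical_form}\n"
            f"Answer:")

def build_sense_prompt(question: str) -> str:
    """SENSE-style prompt: an abstracted semantic hint is embedded in the
    instruction, with no explicit structural output constraint."""
    hint = ("Before answering, consider the sentence's underlying semantics: "
            "its predicates, arguments, and the relations between them.")
    return f"{hint}\nQuestion: {question}\nAnswer:"

question = "If Tom has 3 apples and buys 2 more, how many does he have?"
print(build_sense_prompt(question))
```

The key design point reflected here is that SENSE keeps the semantic signal in natural-language form inside the prompt, rather than forcing the model to consume or emit a formal parse.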

📝 Abstract
Semantic Parsing aims to capture the meaning of a sentence and convert it into a logical, structured form. Previous studies show that semantic parsing enhances the performance of smaller models (e.g., BERT) on downstream tasks. However, it remains unclear whether the improvements extend similarly to LLMs. In this paper, our empirical findings reveal that, unlike smaller models, directly adding semantic parsing results into LLMs reduces their performance. To overcome this, we propose SENSE, a novel prompting approach that embeds semantic hints within the prompt. Experiments show that SENSE consistently improves LLMs' performance across various tasks, highlighting the potential of integrating semantic information to improve LLM capabilities.
Problem

Research questions and friction points this paper is trying to address.

Investigates whether semantic parsing improves LLM performance as it does for smaller models
Finds that directly injecting semantic parses harms LLM performance, unlike in smaller models
Proposes SENSE, a prompting method that uses semantic hints to boost LLM performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes SENSE, a fine-tuning-free, plug-and-play prompting method
Embeds lightweight, abstracted semantic hints within input prompts
Consistently improves LLM performance across diverse tasks
Kaikai An
Peking University
Natural Language Processing
Shuzheng Si
Tsinghua University
Natural Language Processing · Large Language Models
Helan Hu
National Key Laboratory for Multimedia Information Processing, Peking University; School of Software and Microelectronics, Peking University
Haozhe Zhao
National Key Laboratory for Multimedia Information Processing, Peking University; School of Software and Microelectronics, Peking University
Yuchi Wang
CUHK MMLab; Peking University
Multimodality · VLM · Generative Models
Qingyan Guo
Tsinghua University
Baobao Chang
National Key Laboratory for Multimedia Information Processing, Peking University