Leveraging Information Retrieval to Enhance Spoken Language Understanding Prompts in Few-Shot Learning

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Few-shot spoken language understanding (SLU) models generalize poorly in low-resource settings because labeled data is severely scarce. To address this, the paper proposes a dynamic exemplar selection method that uses lightweight lexical information retrieval (IR), specifically BM25 and TF-IDF, to construct high-quality few-shot prompts. This is the first work to integrate lexical IR into SLU prompt engineering, selecting in-context examples that are semantically aligned and task-relevant without increasing prompt length or computational overhead. Combined with instruction-tuned large language models, the approach yields substantial improvements across multiple SLU benchmarks: average relative error rates drop by 18.7% for both intent classification and slot filling. It overcomes key limitations of conventional manual or random exemplar selection, offering an efficient, scalable, and resource-light paradigm for low-resource SLU.
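The core idea can be sketched in a few lines: score a small pool of labeled examples against the incoming utterance with a lexical retriever such as BM25, then place only the top-ranked examples into the prompt. The snippet below is a minimal illustration using a from-scratch BM25 implementation and toy (utterance, intent) pairs; the paper's actual tokenization, BM25 parameters, and prompt template are not specified here, so treat all names and data as hypothetical.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each candidate exemplar against the query with BM25.

    Illustrative re-implementation; the paper's exact retrieval setup
    (tokenizer, parameters, IR library) may differ.
    """
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    # Document frequency of each query term across the exemplar pool.
    df = {t: sum(1 for d in docs_tokens if t in d) for t in set(query_tokens)}
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(s)
    return scores

def select_exemplars(query, pool, k=3):
    """Pick the k labeled examples most lexically similar to the query."""
    tokenize = lambda s: s.lower().split()
    scores = bm25_scores(tokenize(query), [tokenize(u) for u, _ in pool])
    ranked = sorted(range(len(pool)), key=lambda i: scores[i], reverse=True)
    return [pool[i] for i in ranked[:k]]

# Toy labeled pool of (utterance, intent) pairs -- not from a real benchmark.
pool = [
    ("play some jazz music", "PlayMusic"),
    ("book a table for two tonight", "BookRestaurant"),
    ("what's the weather in Paris", "GetWeather"),
    ("reserve a restaurant near me", "BookRestaurant"),
    ("add this song to my playlist", "AddToPlaylist"),
]
exemplars = select_exemplars("book a restaurant for dinner", pool, k=2)
prompt = "\n".join(f"Utterance: {u}\nIntent: {i}" for u, i in exemplars)
```

Because the pool is filtered down to a fixed k before prompting, the prompt stays the same length as with random selection; only the relevance of the in-context examples changes.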

📝 Abstract
Understanding user queries is fundamental in many applications, such as home assistants, booking systems, or recommendation systems. Accordingly, it is crucial to develop accurate Spoken Language Understanding (SLU) approaches to ensure the reliability of the considered system. Current state-of-the-art SLU techniques rely on large amounts of training data; however, only limited annotated examples are available for specific tasks or languages. In the meantime, instruction-tuned large language models (LLMs) have shown exceptional performance on unseen tasks in a few-shot setting when provided with adequate prompts. In this work, we propose to explore example selection by leveraging Information Retrieval (IR) approaches to build an enhanced prompt that is applied to an SLU task. We evaluate the effectiveness of the proposed method on several SLU benchmarks. Experimental results show that lexical IR methods significantly enhance performance without increasing prompt length.
Problem

Research questions and friction points this paper is trying to address.

Enhancing SLU prompts with IR in few-shot learning
Addressing limited annotated data for specific SLU tasks
Improving SLU accuracy without increasing prompt length
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging IR to enhance SLU prompts
Using lexical IR for few-shot learning
Improving SLU without longer prompts