Dual-Path Stable Soft Prompt Generation for Domain Generalization

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In domain generalization, soft prompt generation suffers from high inter-seed variability—i.e., substantially different and suboptimal prompts are produced for identical inputs under varying random seeds. To address this instability, we propose a dual-path stable prompt generation framework. Its core innovation is the first introduction of a negative learning mechanism to construct a complementary prompt generator, theoretically proven to enlarge the effective margin and bound the gradient norm from above. The framework jointly leverages forward prompt generation and negative prompt constraints, integrating contrastive prompt optimization with stability regularization. Evaluated on five domain generalization benchmarks, our method consistently surpasses state-of-the-art approaches: inter-seed prompt variance decreases by 42%, average accuracy improves by 2.3–5.1 percentage points, and prompt robustness and generalization stability are significantly enhanced.

📝 Abstract
Domain generalization (DG) aims to learn a model using data from one or multiple related but distinct source domains that can generalize well to unseen out-of-distribution target domains. Inspired by the success of large pre-trained vision-language models (VLMs), prompt tuning has emerged as an effective generalization strategy. However, it often struggles to capture domain-specific features due to its reliance on manually crafted or fixed prompt inputs. Recently, some prompt generation methods have addressed this limitation by dynamically generating instance-specific and domain-specific prompts for each input, enriching domain information and demonstrating potential for enhanced generalization. Through further investigation, we identify a notable issue in existing prompt generation methods: the same input often yields significantly different and suboptimal prompts across different random seeds, a phenomenon we term Prompt Variability. To address this, we introduce negative learning into the prompt generation process and propose Dual-Path Stable Soft Prompt Generation (DPSPG), a transformer-based framework designed to improve both the stability and generalization of prompts. Specifically, DPSPG incorporates a complementary prompt generator to produce negative prompts, thereby reducing the risk of introducing misleading information. Both theoretical and empirical analyses demonstrate that negative learning leads to more robust and effective prompts by increasing the effective margin and reducing the upper bound of the gradient norm. Extensive experiments on five DG benchmark datasets show that DPSPG consistently outperforms state-of-the-art methods while maintaining prompt stability.
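The abstract describes three ingredients: a positive path trained with a standard objective, a complementary generator trained with negative learning (pushing the negative prompt's score for the true class down), and a stability mechanism targeting inter-seed variability. A minimal NumPy sketch of how such a combined objective could look is below; the specific loss forms, the seed-stability term, and the weights `alpha`/`beta` are illustrative assumptions, not the paper's actual DPSPG objective.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dual_path_loss(pos_logits, pos_logits_reseed, neg_logits, label,
                   alpha=0.5, beta=0.1):
    """Illustrative dual-path objective (assumed form, not from the paper):
    - positive path: cross-entropy on the positive prompt's logits
    - negative path: negative-learning loss that drives the negative
      prompt's probability for the true class toward zero
    - stability term: penalizes disagreement between two stochastic
      forward passes of the positive generator (a proxy for the
      inter-seed prompt variability the paper targets)
    """
    p_pos = softmax(pos_logits)
    p_pos2 = softmax(pos_logits_reseed)
    p_neg = softmax(neg_logits)
    ce = -np.log(p_pos[label] + 1e-12)         # positive cross-entropy
    nl = -np.log(1.0 - p_neg[label] + 1e-12)   # complementary (negative-learning) loss
    stab = np.sum((p_pos - p_pos2) ** 2)       # hypothetical seed-stability regularizer
    return ce + alpha * nl + beta * stab
```

With this form, the loss grows when the two "seeds" of the positive path disagree, and the negative-learning term is minimized exactly when the complementary generator assigns low probability to the true class, which is the intuition behind the margin-enlarging argument in the summary.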
Problem

Research questions and friction points this paper is trying to address.

Addressing prompt variability in domain generalization methods
Enhancing stability and generalization of dynamically generated prompts
Reducing misleading information in prompt generation via negative learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-Path Stable Soft Prompt Generation framework
Negative learning for robust prompt generation
Transformer-based complementary prompt generator