PromptBridge: Cross-Model Prompt Transfer for Large Language Models

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Frequent switching among large language models (LLMs) causes significant degradation in prompt performance—termed “model drift”—due to inconsistent model behaviors and latent representation shifts. Method: We propose PromptBridge, a training-free cross-model prompt transfer framework that learns the mapping between source and target model prompts via minimal calibration tasks. Its core innovation is the Model-Adaptive Reflective Prompt Evolution (MAP-RPE) mechanism, which systematically identifies model drift and automatically constructs optimal prompt pairs alongside a quantitative cross-model mapping function. Contribution/Results: PromptBridge enables plug-and-play prompt transfer in both single- and multi-agent settings without fine-tuning or re-optimization. Experiments across diverse downstream tasks demonstrate substantial accuracy recovery after model switching, while reducing prompt migration cost and engineering overhead by over 90%. This work provides the first systematic characterization of model drift and establishes a principled, efficient approach to robust cross-model prompt adaptation.

📝 Abstract
Large language models (LLMs) underpin applications in code generation, mathematical reasoning, and agent-based workflows. In practice, systems access LLMs via commercial APIs or open-source deployments, and the model landscape (e.g., GPT, Claude, Llama) evolves rapidly. This rapid evolution forces frequent model switches driven by capability, cost, deployment constraints, and privacy. Yet prompts are highly model-sensitive: reusing a prompt engineered for one model on another often yields substantially worse performance than a prompt optimized for the target model. We term this phenomenon Model Drifting. Through extensive empirical analysis across diverse LLM configurations, we show that model drifting is both common and severe. To address this challenge, we introduce PromptBridge, a training-free framework that preserves prompt effectiveness under model switches, enabling cross-model prompt transfer without costly per-task or per-model re-optimization. PromptBridge requires only a small set of alignment tasks for calibration. It first applies Model-Adaptive Reflective Prompt Evolution (MAP-RPE) to obtain task- and model-specific optimal prompts via iterative reflective refinement and quantitative evaluation. Using the resulting calibrated prompt pairs for the source and target models, PromptBridge learns a cross-model prompt mapping. At test time, i.e., for an unseen task, given a source-model prompt, this mapping directly produces an optimized prompt for the target model. Experiments in single-agent and multi-agent settings show that PromptBridge consistently improves downstream accuracy while reducing migration effort. The code will be available soon.
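The paper's code is not yet released. As a toy illustration of the calibrate-then-map interface the abstract describes (calibration prompt pairs in, a cross-model mapping function out), the sketch below learns a constant prompt-prefix edit from calibration pairs and applies it to an unseen prompt. The function names and the prefix-based mapping are illustrative assumptions, not the paper's MAP-RPE mechanism.

```python
# Toy sketch of PromptBridge's calibrate-then-map interface.
# The real mapping is obtained via MAP-RPE (iterative reflective
# refinement with quantitative evaluation); a fixed prompt-prefix
# delta stands in for it here. All names are illustrative.

def learn_prompt_mapping(calibration_pairs):
    """Given (source_prompt, target_prompt) pairs from calibration tasks,
    extract the shared edit (here: a fixed prefix) that turned each
    source-model prompt into its target-model counterpart."""
    deltas = {
        tgt[: len(tgt) - len(src)]
        for src, tgt in calibration_pairs
        if tgt.endswith(src)
    }
    # Use the delta only if it is consistent across all calibration pairs.
    prefix = deltas.pop() if len(deltas) == 1 else ""
    return lambda source_prompt: prefix + source_prompt

# Calibration pairs for a hypothetical source -> target model switch.
pairs = [
    ("Solve: 2+2", "Think step by step. Solve: 2+2"),
    ("Solve: 3*7", "Think step by step. Solve: 3*7"),
]
bridge = learn_prompt_mapping(pairs)
# Transfer a prompt for an unseen task without re-optimization.
print(bridge("Solve: 12/4"))  # -> Think step by step. Solve: 12/4
```

In the paper, the mapping is learned once per source/target model pair and then reused plug-and-play across tasks; the sketch mirrors that shape, with the learned prefix playing the role of the mapping's parameters.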
Problem

Research questions and friction points this paper is trying to address.

Addresses model drifting in LLM prompt reuse
Enables cross-model prompt transfer without re-optimization
Preserves prompt effectiveness across different model switches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free framework for cross-model prompt transfer
Model-Adaptive Reflective Prompt Evolution for optimal prompts
Learns cross-model prompt mapping using calibrated prompt pairs
Yaxuan Wang
PhD Student of Computer Science, University of California, Santa Cruz
machine learning
Quan Liu
Center for Advanced AI, Accenture
Zhenting Wang
Accenture; Rutgers University
Zichao Li
Center for Advanced AI, Accenture
Wei Wei
Center for Advanced AI, Accenture
Yang Liu
University of California, Santa Cruz
Yujia Bao
Massachusetts Institute of Technology
Machine Learning · Natural Language Processing