Prompt and Parameter Co-Optimization for Large Language Models

📅 2025-09-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing research typically treats prompt optimization and parameter fine-tuning as disjoint paradigms, overlooking their synergistic potential. To address this, we propose MetaTuner, a unified framework that jointly models prompt learning and parameter adaptation. MetaTuner employs a dual-network architecture with a shared bottom-layer encoder to enable simultaneous explicit prompt generation and implicit parameter updating. To mitigate optimization challenges arising from coupling discrete prompt tokens with continuous model parameters, we introduce a supervised regularization loss. Extensive experiments across multiple benchmark tasks demonstrate that MetaTuner consistently outperforms pure prompt engineering, standard fine-tuning, and existing hybrid approaches. Results validate the effectiveness, robustness, and generalizability of co-optimizing prompts and parameters, establishing a new paradigm for efficient and adaptive language model adaptation.

๐Ÿ“ Abstract
Prompt optimization and fine-tuning are two major approaches to improve the performance of Large Language Models (LLMs). They enhance the capabilities of LLMs from complementary perspectives: the former through explicit natural language, and the latter through implicit parameter updates. However, prior work has typically studied them in isolation, leaving their synergistic potential largely underexplored. To bridge this gap, in this paper, we introduce MetaTuner, a novel framework that jointly integrates prompt optimization and fine-tuning for LLM training. Specifically, we employ two neural networks to generate prompts and parameters, respectively, while allowing them to share a common bottom encoding layer to enable knowledge sharing. Guided by the final supervised signals, our framework is optimized to discover the optimal combinations between the prompts and parameters. Given that prompt learning involves discrete optimization while fine-tuning operates in a continuous parameter space, we design a supervised regularization loss to train our framework effectively. Extensive experiments across diverse benchmarks show that our method consistently outperforms the baselines.
Problem

Research questions and friction points this paper is trying to address.

Jointly optimizing prompts and parameters for large language models
Bridging isolated approaches of prompt optimization and fine-tuning
Discovering optimal combinations between discrete prompts and continuous parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly optimizes prompt generation and parameter fine-tuning
Uses shared encoding layer for knowledge transfer
Employs supervised regularization for discrete-continuous optimization
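The bullets above can be sketched as a toy NumPy example. This is a hypothetical illustration, not the paper's implementation: all names (`W_enc`, `forward`, `loss`), the dimensions, and the entropy-style regularizer are assumptions standing in for the shared bottom encoder, the two heads, and the supervised regularization loss described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the paper).
D_IN, D_HID, VOCAB, D_PARAM = 8, 16, 10, 4

W_enc = rng.normal(scale=0.1, size=(D_IN, D_HID))       # shared bottom encoder
W_prompt = rng.normal(scale=0.1, size=(D_HID, VOCAB))   # head 1: discrete prompt tokens
W_delta = rng.normal(scale=0.1, size=(D_HID, D_PARAM))  # head 2: continuous parameter update

def forward(x):
    """Shared encoding feeds both heads, enabling knowledge sharing."""
    h = np.tanh(x @ W_enc)
    logits = h @ W_prompt
    soft = np.exp(logits - logits.max())
    soft /= soft.sum()        # softmax relaxation of the discrete prompt choice
    delta = h @ W_delta       # implicit parameter update (continuous)
    return soft, delta

def loss(x, target_delta, reg_weight=0.1):
    """Supervised task loss plus a regularizer bridging discrete and continuous parts."""
    soft, delta = forward(x)
    task = np.mean((delta - target_delta) ** 2)       # supervised signal
    entropy = -np.sum(soft * np.log(soft + 1e-9))     # illustrative regularizer that
    return task + reg_weight * entropy                # sharpens the prompt distribution

x = rng.normal(size=D_IN)
print("joint loss:", loss(x, np.zeros(D_PARAM)))
```

Both heads receive gradients through the single scalar loss, which is the co-optimization idea in miniature: one supervised objective jointly shapes the prompt-selection distribution and the parameter update.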
Authors

Xiaohe Bo, Gaoling School of Artificial Intelligence, Renmin University of China (large language models)
Rui Li, Gaoling School of Artificial Intelligence, Renmin University of China
Zexu Sun, Renmin University of China (causal inference, reinforcement learning, large language models)
Quanyu Dai, Huawei Noah's Ark Lab
Zeyu Zhang, Gaoling School of Artificial Intelligence, Renmin University of China
Zihang Tian, Doctor at Gaoling School of AI (LLM-based agents)
Xu Chen, Gaoling School of Artificial Intelligence, Renmin University of China
Zhenhua Dong, Noah's Ark Lab, Huawei Technologies Co., Ltd. (recommender systems, causal inference, counterfactual learning, trustworthy AI, machine learning)