LatentPrompt: Optimizing Prompts in Latent Space

📅 2025-08-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Prompt engineering for large language models (LLMs) still relies heavily on manual heuristics and lacks systematic, automated optimization. Method: the paper proposes a model-agnostic prompt optimization framework that operates in a continuous latent semantic space. It maps discrete prompts into embeddings and applies gradient-free, black-box optimization guided by automated evaluation metrics, requiring no hand-crafted rules or task-specific design. Starting from a set of seed prompts, the framework generates, evaluates, and iteratively refines candidates end to end. Contribution/Results: on financial sentiment classification, a single optimization cycle improves accuracy by approximately 3 percent, demonstrating effectiveness, generality, and plug-and-play usability. The work shifts prompt optimization from discrete token space into a continuous latent space, suggesting a new direction for automated prompt engineering.

📝 Abstract
Recent advances have shown that optimizing prompts for Large Language Models (LLMs) can significantly improve task performance, yet many optimization techniques rely on heuristics or manual exploration. We present LatentPrompt, a model-agnostic framework for prompt optimization that leverages latent semantic space to automatically generate, evaluate, and refine candidate prompts without requiring hand-crafted rules. Beginning with a set of seed prompts, our method embeds them in a continuous latent space and systematically explores this space to identify prompts that maximize task-specific performance. In a proof-of-concept study on the Financial PhraseBank sentiment classification benchmark, LatentPrompt increased classification accuracy by approximately 3 percent after a single optimization cycle. The framework is broadly applicable, requiring only black-box access to an LLM and an automatic evaluation metric, making it suitable for diverse domains and tasks.
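The loop the abstract describes — embed seed prompts, explore the continuous space without gradients, map candidates back to text, and keep the best scorer — can be sketched as follows. Everything here is an illustrative assumption, not the paper's implementation: the toy hash-based encoder stands in for a real sentence embedder, the candidate pool plus nearest-neighbour lookup stands in for latent-to-text decoding, and the keyword-counting `score` stands in for a held-out task metric.

```python
import hashlib
import random


def embed(text, dim=16):
    # Toy deterministic embedding standing in for a real sentence
    # encoder (the paper does not specify which encoder is used).
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]


def dist(a, b):
    # Squared Euclidean distance between two latent vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))


def nearest_prompt(vec, pool):
    # Map a latent vector back to discrete text by nearest-neighbour
    # search over a fixed candidate pool (one simple way to "decode").
    return min(pool, key=lambda p: dist(embed(p), vec))


def optimize(seed, pool, score, rounds=20, sigma=0.5, rng=None):
    # Gradient-free, black-box search: perturb the current latent
    # vector, decode a candidate, keep it if the metric improves.
    rng = rng or random.Random(0)
    best, best_score = seed, score(seed)
    vec = embed(seed)
    for _ in range(rounds):
        cand_vec = [v + rng.gauss(0, sigma) for v in vec]
        cand = nearest_prompt(cand_vec, pool)
        s = score(cand)
        if s > best_score:
            best, best_score, vec = cand, s, embed(cand)
    return best, best_score


seed = "Classify the text."
pool = [
    "Classify the sentiment of this financial sentence.",
    "Label the statement as positive, negative, or neutral.",
    "Summarize the text.",
]
# Proxy metric standing in for held-out classification accuracy.
score = lambda p: ("sentiment" in p) + ("positive" in p)
best, best_score = optimize(seed, pool, score)
```

The hill-climbing acceptance rule is one arbitrary choice; any black-box optimizer (evolutionary search, Bayesian optimization) could drive the same embed-perturb-decode-evaluate loop.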
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompts in latent space for LLMs
Automating prompt generation without manual rules
Improving task performance via systematic prompt exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes prompts in latent semantic space
Automatically generates and refines candidate prompts
Requires only black-box LLM access
Mateusz Bystroński
Wrocław University of Science and Technology
Grzegorz Piotrowski
Wrocław University of Science and Technology
Nitesh V. Chawla
University of Notre Dame
Tomasz Kajdanowicz
Wrocław University of Science and Technology
Data Science · Machine Learning · Representation Learning