🤖 AI Summary
Lightweight large language models (LwLLMs) suffer from weak reasoning capabilities, and existing prompt optimization methods rely either on strong base models or manual intervention—making them ill-suited for LwLLMs’ resource-constrained settings.
Method: We propose the Direct Behavior Optimization Paradigm (DeBoP), the first gradient-free optimization framework that explicitly targets LwLLMs’ intrinsic behavioral patterns, eliminating reliance on the metacognitive capabilities of large models. DeBoP pioneers end-to-end automated prompt optimization for LwLLMs via Monte Carlo Tree Search (MCTS), integrating behavioral sequence modeling with a Chain-of-Thought–inspired quantitative representation of execution trajectories.
Contribution/Results: Evaluated on seven challenging reasoning tasks, DeBoP consistently outperforms state-of-the-art prompt optimization methods; in most cases, it surpasses GPT-3.5 in performance while reducing inference latency by approximately 60%.
📝 Abstract
Lightweight Large Language Models (LwLLMs) are reduced-parameter, optimized models designed to run efficiently on consumer-grade hardware, offering significant advantages in resource efficiency, cost-effectiveness, and data privacy. However, these models often struggle with limited inference and reasoning capabilities, which restricts their performance on complex tasks and limits their practical applicability. Moreover, existing prompt optimization methods typically rely on extensive manual effort or the meta-cognitive abilities of state-of-the-art LLMs, making them less effective for LwLLMs. To address these challenges, we introduce DeBoP, a new Direct Behavior Optimization Paradigm originating from the Chain-of-Thought (CoT) prompting technique. Unlike CoT prompting, DeBoP is an automatic optimization method that operates directly on the behavior of LwLLMs. In particular, DeBoP transforms the optimization of complex prompts into the optimization of discrete, quantifiable execution sequences using gradient-free Monte Carlo Tree Search. We evaluate DeBoP on seven challenging tasks where state-of-the-art LLMs excel but LwLLMs generally underperform. Experimental results demonstrate that DeBoP significantly outperforms recent prompt optimization methods on most tasks. Notably, DeBoP-optimized LwLLMs surpass GPT-3.5 on most tasks while reducing computational time by approximately 60% compared to other automatic prompt optimization methods.
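The core mechanism the abstract describes — a gradient-free MCTS over discrete, quantifiable execution sequences — can be sketched in miniature. The behavior vocabulary, sequence length, and reward function below are illustrative stand-ins only (the paper's actual behavior set and its task-accuracy reward are not specified here); the search loop itself is the standard select/expand/simulate/backpropagate cycle of MCTS:

```python
import math
import random

# Hypothetical discrete "behaviors" standing in for LwLLM execution steps;
# DeBoP's real behavior set and reward (task accuracy) differ.
BEHAVIORS = ["extract", "decompose", "compute", "verify", "summarize"]
MAX_DEPTH = 3  # length of the behavior sequence being optimized


def reward(sequence):
    """Toy stand-in for evaluating a behavior sequence on a task:
    score overlap with an arbitrary target ordering."""
    target = ["decompose", "compute", "verify"]
    return sum(a == b for a, b in zip(sequence, target)) / len(target)


class Node:
    def __init__(self, sequence):
        self.sequence = sequence   # behaviors chosen so far
        self.children = {}         # behavior -> Node
        self.visits = 0
        self.value = 0.0           # accumulated reward

    def ucb_child(self, c=1.4):
        # UCT: exploit high mean reward, explore rarely-visited children
        return max(
            self.children.values(),
            key=lambda n: n.value / (n.visits + 1e-9)
            + c * math.sqrt(math.log(self.visits + 1) / (n.visits + 1e-9)),
        )


def mcts(iterations=500, seed=0):
    random.seed(seed)
    root = Node([])
    for _ in range(iterations):
        node = root
        # Selection: descend while the node is fully expanded
        while len(node.sequence) < MAX_DEPTH and len(node.children) == len(BEHAVIORS):
            node = node.ucb_child()
        # Expansion: try one untried behavior
        if len(node.sequence) < MAX_DEPTH:
            b = random.choice([b for b in BEHAVIORS if b not in node.children])
            node.children[b] = Node(node.sequence + [b])
            node = node.children[b]
        # Simulation: random rollout to a full-length sequence
        rollout = node.sequence + random.choices(
            BEHAVIORS, k=MAX_DEPTH - len(node.sequence)
        )
        r = reward(rollout)
        # Backpropagation: update statistics along the selected path
        path, cur = [root], root
        for b in node.sequence:
            cur = cur.children[b]
            path.append(cur)
        for n in path:
            n.visits += 1
            n.value += r
    # Read out the best sequence greedily by visit count
    best, node = [], root
    while node.children:
        b, node = max(node.children.items(), key=lambda kv: kv[1].visits)
        best.append(b)
    return best
```

Because the search only needs a scalar reward per rolled-out sequence, no gradients through the LwLLM are required — which is what makes this style of optimization viable when the model offers no differentiable access.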