🤖 AI Summary
This work addresses the lack of theoretical grounding in large language model (LLM) prompt design by proposing a prompt optimization method grounded in inductive bias extraction and matching. It identifies implicit inductive biases—such as semantic preferences and structural tendencies—in unsupervised LLM outputs, constructs transferable bias representations, and explicitly incorporates them into prompt generation and iterative refinement. Optimization is guided by both LLM self-feedback and human evaluation via Likert-scale scoring, prioritizing semantic consistency. Experiments demonstrate that the method improves Likert scores by up to 19% on classification tasks and up to 27% on ranking tasks, significantly outperforming standard prompt engineering baselines. The core contribution lies in the first formal treatment of inductive bias as an extractable, matchable signal for prompt optimization—establishing a new paradigm for data-efficient, mechanism-driven prompt engineering.
📝 Abstract
Active research in prompt engineering makes it evident that LLMs are sensitive to small changes in prompt wording. A portion of this sensitivity can be ascribed to the inductive bias present in the LLM. By using an LLM's output as a portion of its prompt, we can more easily create satisfactory wording for prompts. This has the effect of creating a prompt that matches the inductive bias in the model. Empirically, we show that using this Inductive Bias Extraction and Matching strategy improves LLM Likert ratings used for classification by up to 19% and LLM Likert ratings used for ranking by up to 27%.
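The extract-and-match loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `llm` and `likert_score` functions below are hypothetical stand-ins (a real system would call an actual model and collect 1–5 Likert ratings from the LLM or a human evaluator), and the loop structure is an assumption based on the abstract's description of folding the model's own output back into the prompt.

```python
def llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call. It returns model-style
    # phrasing that we can fold back into the prompt, mimicking how
    # sampled output exposes the model's inductive bias.
    return f"Rate the item for relevance on a 1-5 scale. Context: {prompt[:40]}"

def likert_score(prompt: str) -> int:
    # Hypothetical stand-in scorer. A real system would ask the LLM
    # (or a human) for a 1-5 Likert rating of the prompt's outputs;
    # here we just reward prompts that reuse the model's own wording.
    return 4 if "Rate the item" in prompt else 3

def extract_and_match(seed_prompt: str, rounds: int = 3) -> tuple[str, int]:
    """Iteratively fold the model's own output back into the prompt,
    keeping the variant with the best Likert score."""
    best_prompt, best_score = seed_prompt, likert_score(seed_prompt)
    for _ in range(rounds):
        bias_sample = llm(best_prompt)               # extract: sample model phrasing
        candidate = f"{bias_sample}\n{seed_prompt}"  # match: reuse that wording
        score = likert_score(candidate)
        if score > best_score:
            best_prompt, best_score = candidate, score
    return best_prompt, best_score
```

With real model calls in place of the stand-ins, the loop keeps only prompt variants whose Likert ratings improve, which is the iterative-refinement behavior the abstract describes.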