🤖 AI Summary
This paper addresses the severe performance instability of large language models (LLMs) in zero-shot text classification, caused by prompt fragility. We propose Placeholding Parallel Prediction (P3), a novel inference-time method that departs from conventional single-step next-token prediction. Instead, P3 simultaneously predicts tokens at multiple positions—effectively simulating the full generation path—and performs path-level probability modeling and aggregation directly over internal logits, without fine-tuning or additional parameters. Empirically, P3 substantially enhances prompt robustness: across multiple benchmarks, it reduces inter-prompt performance standard deviation by up to 98%. Notably, it achieves zero-shot classification *without any handcrafted prompts*, attaining accuracy competitive with strong prompted baselines. The core innovation lies in abandoning the sequential token-generation assumption in favor of global, path-level modeling of the output distribution—establishing a more stable and generalizable paradigm for zero-shot classification.
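The path-level aggregation described above can be illustrated with a toy sketch. This is an assumption-laden illustration, not the paper's implementation: the function name `p3_score`, the toy logits, and the class-to-token mapping are all hypothetical. In the real method, the per-position logits would come from a single forward pass of an LLM over an input with placeholder tokens; here they are supplied directly.

```python
import math

def p3_score(position_logits, class_token_ids):
    """Score each class by aggregating token log-probabilities across
    multiple placeholder positions (a simplified, hypothetical sketch
    of path-level aggregation; not the paper's actual implementation).

    position_logits: one logit vector per placeholder position.
    class_token_ids: maps class name -> token ids filling the placeholders.
    Returns: class name -> aggregated path log-probability.
    """
    # Convert each position's logits to log-probabilities (log-softmax),
    # using the max-subtraction trick for numerical stability.
    log_probs = []
    for logits in position_logits:
        m = max(logits)
        z = m + math.log(sum(math.exp(x - m) for x in logits))
        log_probs.append([x - z for x in logits])

    # Path-level score: sum the log-probabilities of each class's
    # tokens at their respective placeholder positions.
    return {
        cls: sum(log_probs[pos][tok] for pos, tok in enumerate(tokens))
        for cls, tokens in class_token_ids.items()
    }

# Toy example: a 3-token vocabulary and two placeholder positions.
scores = p3_score(
    position_logits=[[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]],
    class_token_ids={"positive": [0, 1], "negative": [2, 2]},
)
prediction = max(scores, key=scores.get)
```

Because all class paths are scored from logits produced in one run, no autoregressive sampling over prompts is needed, which is the intuition behind the reported robustness gains.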
📝 Abstract
Zero-shot text classification typically relies on prompt engineering, but the inherent prompt brittleness of large language models undermines its reliability. Minor changes in a prompt can cause significant discrepancies in model performance. We attribute this prompt brittleness largely to the narrow focus on next-token probabilities in existing methods. To address this, we propose Placeholding Parallel Prediction (P3), a novel approach that predicts token probabilities across multiple positions and simulates comprehensive sampling of generation paths in a single run of a language model. Experiments show improved accuracy and up to a 98% reduction in the standard deviation across prompts, boosting robustness. Even without a prompt, P3 maintains comparable performance, reducing the need for prompt engineering.