🤖 AI Summary
To address the reliance on manually crafted examples, limited generalizability, and high inference cost of few-shot and zero-shot text classification, this paper proposes a principle-induction-based multi-agent prompting framework. Multiple LLM agents collaboratively extract, validate, and aggregate transferable classification principles from minimal or zero labeled data; a dedicated classifier agent then performs inference using these principles. The approach eliminates manual example and principle design, forming a closed-loop paradigm of “principle generation → aggregation → application.” Evaluated on datasets from diverse domains, it achieves macro-F1 improvements of 1.55–19.37 percentage points over zero-shot prompting and also outperforms strong baselines such as chain-of-thought and stepback prompting, while incurring lower inference overhead than standard few-shot prompting and demonstrating strong cross-task generalization.
📝 Abstract
We present PRINCIPLE-BASED PROMPTING, a simple but effective multi-agent prompting strategy for text classification. It first asks multiple LLM agents to independently generate candidate principles by analyzing demonstration samples (with or without labels), consolidates them into final principles via a finalizer agent, and then passes these principles to a classifier agent that performs the downstream classification task. Extensive experiments on binary and multi-class classification datasets with LLMs of different sizes show that our approach not only achieves substantial macro-F1 gains (1.55%–19.37%) over zero-shot prompting but also outperforms other strong baselines (CoT and stepback prompting). Principles generated by our approach help LLMs perform better on classification tasks than human-crafted principles on two private datasets. Our multi-agent PRINCIPLE-BASED PROMPTING approach also matches or exceeds demonstration-based few-shot prompting, yet with substantially lower inference costs. Ablation studies show that label information and the cooperative multi-agent LLM framework play an important role in generating high-quality principles that facilitate downstream classification.
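The three-stage pipeline the abstract describes (independent principle generation → consolidation by a finalizer agent → classification with the final principles) can be sketched as below. This is a minimal illustration, not the paper's implementation: `call_llm` is a hypothetical stand-in for any chat-completion API, and the prompt wording is assumed, not taken from the paper.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client.

    A deterministic stub is used here so the sketch runs offline.
    """
    return f"[LLM response to: {prompt[:40]}...]"


def generate_principles(samples: str, n_agents: int = 3) -> list[str]:
    """Stage 1: each agent independently proposes candidate principles
    from the demonstration samples (labels may or may not be present)."""
    prompt = (
        "Analyze these demonstration samples and propose general "
        f"classification principles:\n{samples}"
    )
    return [call_llm(prompt) for _ in range(n_agents)]


def finalize_principles(candidates: list[str]) -> str:
    """Stage 2: a finalizer agent consolidates the candidates into
    one final set of principles."""
    joined = "\n---\n".join(candidates)
    return call_llm(f"Merge these candidate principles into a final set:\n{joined}")


def classify(text: str, principles: str) -> str:
    """Stage 3: a classifier agent labels new input using the final
    principles instead of in-context demonstrations."""
    return call_llm(f"Principles:\n{principles}\n\nClassify this text:\n{text}")
```

At inference time only the short principle text is sent with each query, which is why this can be cheaper than few-shot prompting, where full demonstrations accompany every request.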