Meta Prompting for AI Systems

📅 2023-11-20
📈 Citations: 4
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from low prompting efficiency and poor generalization in complex reasoning and dynamic data interaction tasks. Method: We propose Meta Prompting, a prompting paradigm that systematically integrates type theory and category theory into prompt engineering, enabling compositional, formally verifiable prompt structures. It supports task decomposition, recursive self-prompting, and metaprogramming-style prompt evolution, all without any parameter fine-tuning. Contribution/Results: Our framework establishes the first formal theoretical foundation for prompting, significantly enhancing zero-shot higher-order reasoning and open-ended interactive capabilities. Experiments show that Qwen-72B achieves 46.3% accuracy on MATH and 83.5% on GSM8K, surpassing same-scale supervised fine-tuned models, while GPT-4 attains a 100% success rate on the Game of 24. These results demonstrate substantial zero-shot improvements across rigorous mathematical and procedural reasoning benchmarks.
📝 Abstract
We introduce Meta Prompting (MP), a prompting paradigm designed to enhance the utilization of large language models (LLMs) and AI systems in complex problem-solving and data interaction. Grounded in type theory and category theory, Meta Prompting prioritizes structural and syntactical considerations over traditional content-centric methods. In this work, we formally define Meta Prompting, delineate its distinctions from few-shot prompting, and demonstrate its effectiveness across various AI applications. In particular, we show that Meta Prompting can decompose intricate reasoning tasks into simpler sub-problems, thereby improving token efficiency and enabling fairer comparisons with conventional few-shot techniques. Furthermore, we extend this framework to prompting tasks themselves, allowing LLMs to recursively self-generate refined prompts in a metaprogramming-like manner. Empirical evaluations reveal that a Qwen-72B base language model equipped with Meta Prompting, without additional instruction tuning, achieves a PASS@1 accuracy of 46.3% on MATH problems (surpassing a supervised fine-tuned counterpart) and 83.5% accuracy on GSM8K, while GPT-4 with Meta Prompting attains a 100% success rate on Game of 24 tasks. The code is available at https://github.com/meta-prompting/meta-prompting.
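The abstract's core idea, prioritizing structure over content, can be sketched as a prompt template whose slots describe the *shape* of the solution rather than worked examples. This is a minimal illustrative sketch, not the paper's actual prompt; the template text and the `build_meta_prompt` helper are assumptions for demonstration.

```python
# A hypothetical structure-oriented "meta prompt": it fixes the solution's
# form (ordered, typed steps) instead of supplying content examples, as a
# few-shot prompt would. The wording below is illustrative only.
META_PROMPT = """\
Problem: {problem}

Solve the problem using the following structure:
1. Restate the problem in formal terms.
2. Decompose it into sub-problems, one per line.
3. Solve each sub-problem, citing any step it depends on.
4. Combine the sub-solutions into a final answer.

Final answer format: \\boxed{{answer}}
"""

def build_meta_prompt(problem: str) -> str:
    """Instantiate the structural template with a concrete problem."""
    return META_PROMPT.format(problem=problem)

if __name__ == "__main__":
    print(build_meta_prompt("Compute the sum of the first 100 positive integers."))
```

Because the template carries no worked examples, it stays short and task-agnostic, which is the token-efficiency argument the abstract makes against few-shot prompting.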
Problem

Research questions and friction points this paper is trying to address.

How can LLM utilization in complex problem-solving be enhanced without fine-tuning?
How can intricate reasoning tasks be decomposed into simpler sub-problems?
How can token efficiency and prompt refinement be improved?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta Prompting grounds prompt design in type theory and category theory, prioritizing structure over content
Decomposes complex reasoning tasks into simpler sub-problems
Enables recursive self-generation and refinement of prompts in a metaprogramming-like manner
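The last innovation, recursive self-generation of prompts, can be sketched as a loop in which the model is asked to rewrite its own task prompt before answering. This is a hedged sketch: `call_llm` is a hypothetical stand-in for any text-completion callable, and the refinement instruction is invented for illustration.

```python
from typing import Callable

# Illustrative instruction asking the model to improve a prompt; the
# paper's actual meta-prompts may differ.
REFINE_INSTRUCTION = (
    "Rewrite the following prompt so that it decomposes the task into "
    "explicit, ordered sub-problems. Return only the improved prompt.\n\n"
)

def self_refine_prompt(call_llm: Callable[[str], str],
                       prompt: str, rounds: int = 2) -> str:
    """Iteratively ask the model to improve its own prompt
    (metaprogramming-style prompt evolution)."""
    for _ in range(rounds):
        prompt = call_llm(REFINE_INSTRUCTION + prompt)
    return prompt

if __name__ == "__main__":
    # Toy stand-in model: echoes the prompt body and tags each round.
    fake_llm = lambda p: p.split("\n\n")[-1] + "\n[refined]"
    print(self_refine_prompt(fake_llm, "Solve: 2x + 3 = 11"))
```

The final refined prompt would then be sent to the model once more to actually solve the task; no model parameters are touched at any point, matching the paper's no-fine-tuning claim.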
👥 Authors
Yifan Zhang
IIIS, Tsinghua University; Shanghai Artificial Intelligence Laboratory; Shanghai Qizhi Institute
Yang Yuan
Tsinghua University
Machine Learning; Optimization
Andrew Chi-Chih Yao
Tsinghua University
Algorithms; Cryptography; Quantum Computing; Artificial Intelligence