BloomWise: Enhancing Problem-Solving capabilities of Large Language Models using Bloom's-Taxonomy-Inspired Prompts

📅 2024-10-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Large language models (LLMs) exhibit weak interpretability and insufficient higher-order cognitive capabilities in multi-step mathematical reasoning.
Method: The paper proposes a hierarchical prompting framework grounded in Bloom's Taxonomy of Educational Objectives, explicitly modeling the six cognitive levels—remembering, understanding, applying, analyzing, evaluating, and creating—as sequential, controllable prompting stages. It introduces an LLM self-assessment mechanism that dynamically identifies cognitive transition points, enabling adaptive evolution of reasoning paths. The framework integrates hierarchical prompt engineering, cognitive-stage modeling, and multi-stage reasoning-chain coordination.
Contribution/Results: The proposed approach significantly outperforms standard prompting, chain-of-thought (CoT), and related baselines across four mainstream mathematical reasoning benchmarks. Ablation studies confirm the efficacy of each cognitive module and their synergistic gains. The authors state that this is the first work to systematically incorporate Bloom's Taxonomy into prompt design, establishing a novel paradigm for enhancing both the mathematical reasoning capability and the interpretability of LLMs.

📝 Abstract
Despite the continuous progress of Large Language Models (LLMs) across various tasks, their performance on mathematical problems and reasoning tasks remains limited. This limitation can be attributed, among other factors, to the inherent difficulty of these problems and the fact that solutions often consist of multiple steps, potentially of varying nature, making it challenging for a single prompting technique to execute all required steps. To address this, we introduce BloomWise, a new prompting technique, inspired by Bloom's Taxonomy, aiming to improve LLMs' performance in solving such problems by encouraging them to approach the problem starting from simple, i.e., remembering, and progressing to higher cognitive skills, i.e., analyzing, until the correct solution is reached. The decision regarding the need to employ more sophisticated cognitive skills is based on self-evaluation performed by the LLM. Thus, we encourage the LLM to deploy the appropriate cognitive processes. In extensive experiments across 4 popular math reasoning datasets, we have demonstrated the effectiveness of our proposed approach. We also present extensive ablations, analyzing the strengths of each module within our system.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs' mathematical reasoning via cognitive prompts
Improving explainability of LLM solutions using Bloom's Taxonomy
Iterative cognitive-level progression for solution convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bloom's-Taxonomy-Inspired Prompts for LLMs
Iterative cognitive operation progression
Early convergence criterion for efficiency
Maria-Eleni Zoumpoulidi
School of Electrical and Computer Engineering, National Technical University of Athens, Greece
Georgios Paraskevopoulos
Associate Researcher, Institute for Speech and Language Processing, Athena RC
Multimodal Processing · Deep Learning · NLP · Domain adaptation
A. Potamianos
School of Electrical and Computer Engineering, National Technical University of Athens, Greece