EduBot -- Can LLMs Solve Personalized Learning and Programming Assignments?

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether large language models (LLMs) can handle personalized learning and multi-stage programming tasks involving recursive prompting and iterative error correction. The authors propose EduBot, a fine-tuning-free, recursion-aware prompting framework that unifies conceptual instruction, end-to-end code generation, and lightweight debugging, while supporting task-adaptive decomposition and dynamic difficulty progression. The method leverages pretrained LLMs through multi-turn prompt engineering, step-by-step task decomposition, and feedback-based refinement. Evaluation on a suite of 20 benchmark scenarios spanning algorithms, machine learning, and real-world problems shows that most tasks are completed within 20 minutes, with consistent compatibility and robustness across mainstream LLMs. The core contribution is empirical evidence that prompt engineering alone, without parameter adaptation, can realize a complex, personalized programming-learning loop.
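The task-adaptive decomposition with difficulty progression described above could be sketched as the loop below. This is a minimal illustration, not EduBot's actual implementation: `ask_llm`, the prompt templates, and the context-passing scheme are all hypothetical placeholders.

```python
from typing import Callable, List

def decompose_and_solve(ask_llm: Callable[[str], str], assignment: str) -> List[str]:
    """Split an assignment into sub-tasks ordered by difficulty,
    then solve them in sequence, carrying earlier answers as context."""
    # Ask the model to break the assignment into ordered sub-tasks.
    plan = ask_llm(f"List the sub-tasks, easiest first, one per line:\n{assignment}")
    subtasks = [line.strip() for line in plan.splitlines() if line.strip()]

    answers: List[str] = []
    context = ""
    for sub in subtasks:
        # Each sub-task sees what was already solved (difficulty progression).
        answers.append(ask_llm(f"{context}Solve this sub-task:\n{sub}"))
        context += f"Previously solved: {sub}\n"
    return answers
```

A real system would add retry logic and validation of the plan; the point here is only the decompose-then-progress control flow.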

📝 Abstract
The prevalence of Large Language Models (LLMs) is revolutionizing how code is written. General-purpose and code LLMs have shown impressive performance on standalone function generation and code-completion tasks with one-shot queries. However, their ability to solve comprehensive programming tasks involving recursive requests and bug fixes remains an open question. In this paper, we propose EduBot, an intelligent automated assistant system that combines conceptual knowledge teaching, end-to-end code development, personalized programming through recursive prompt-driven methods, and debugging with limited human intervention, all powered by LLMs. We show that EduBot can solve complicated programming tasks composed of sub-tasks of increasing difficulty, ranging from conceptual to coding questions, through a recursive, automated prompt-driven workflow without fine-tuning the LLMs themselves. To evaluate EduBot's performance, we design a benchmark suite of 20 scenarios covering algorithms, machine learning, and real-world problems. The results show that EduBot completes most scenarios in less than 20 minutes. Using this benchmark suite, we conduct a comparative study with different LLMs as the backbone to verify EduBot's compatibility and robustness across models of varying capability. We believe EduBot is an exploratory step toward realizing the potential of pre-trained LLMs in multi-step reasoning and code generation for solving personalized assignments that combine knowledge learning with code generation.
Problem

Research questions and friction points this paper addresses.

Can LLMs solve personalized learning and programming assignments recursively?
How effective is EduBot in multi-step coding tasks without fine-tuning?
Does EduBot perform robustly across different LLM backbones?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recursive prompt-driven personalized programming assistant
End-to-end code development with debugging
Multi-step reasoning without LLM fine-tuning
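The recursive prompt-driven loop listed above can be sketched as a generate-test-refine cycle. The sketch below is an assumption-laden illustration: `llm` and `run_checks` are hypothetical callables standing in for the backbone model and a test harness, not the paper's actual interfaces.

```python
from typing import Callable, Optional

def refine_until_correct(llm: Callable[[str], str],
                         run_checks: Callable[[str], Optional[str]],
                         task: str, max_rounds: int = 5) -> Optional[str]:
    """Generate code, run checks, and recursively feed failures
    back into the prompt until the checks pass or rounds run out."""
    prompt = f"Write Python code for this task:\n{task}"
    for _ in range(max_rounds):
        code = llm(prompt)
        error = run_checks(code)  # None means every check passed
        if error is None:
            return code
        # Debugging step: the failure message drives the next prompt.
        prompt = (f"This code failed:\n{code}\n"
                  f"Error:\n{error}\n"
                  f"Fix it and return corrected code for:\n{task}")
    return None  # unsolved within the round budget
```

No fine-tuning is involved: all adaptation happens in the prompt, which accumulates the failing code and its error message each round.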