Evaluating LLMs in the Context of a Functional Programming Course: A Comprehensive Study

📅 2026-02-15
🏛️ The Art, Science, and Engineering of Programming
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study evaluates the effectiveness of large language models (LLMs) in educational contexts involving OCaml, a low-resource functional programming language, with a focus on three core tasks: assignment generation, error repair, and conceptual explanation. To this end, we introduce the first multi-dimensional educational benchmark for OCaml—comprising λCodeGen, λRepair, and λExplain—and systematically assess nine state-of-the-art LLMs using a combination of compiler-driven automated scoring and human evaluation. Our results indicate that the top-performing models excel in error repair and basic question answering but exhibit substantially weaker performance in code generation compared to their capabilities in high-resource languages such as Python or Java. This work establishes the first education-oriented LLM evaluation framework for low-resource functional languages, revealing both the promise and current limitations of LLMs in specialized programming education.

📝 Abstract
Large Language Models (LLMs) are changing the way learners acquire knowledge outside the classroom. Previous studies have shown that LLMs are effective at generating answers to short and simple questions in introductory CS courses that use high-resource programming languages such as Java or Python. In this paper, we evaluate the effectiveness of LLMs in an educational setting for a low-resource programming language -- OCaml. In particular, we built three benchmarks to comprehensively evaluate 9 state-of-the-art LLMs: 1) $\lambda$CodeGen (a benchmark of natural-language homework programming problems); 2) $\lambda$Repair (a benchmark of programs with syntax, type, and logical errors drawn from actual student submissions); 3) $\lambda$Explain (a benchmark of natural-language questions about theoretical programming concepts). We grade each LLM's responses for correctness using the OCaml compiler and an autograder, and we go beyond common evaluation methodology by also assessing the quality of responses through manual grading. Our study shows that the top three LLMs are effective on all tasks within a typical functional programming course, although they solve far fewer homework problems in the low-resource setting than they do for introductory programming problems in Python and Java. The strength of LLMs lies in correcting syntax and type errors and in answering basic conceptual questions. While LLMs may not yet match dedicated language-specific tools in some areas, their convenience as a one-stop tool for multiple programming languages can outweigh the benefits of more specialized systems.
We hope our benchmarks can serve multiple purposes: to assess the evolving capabilities of LLMs, to help instructors raise awareness among students about the limitations of LLM-generated solutions, and to inform programming language researchers about opportunities to integrate domain-specific reasoning into LLMs and develop more powerful code synthesis and repair tools for low-resource languages.
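To make the repair task concrete, here is a hypothetical illustration (not drawn from the paper's benchmark) of the kind of $\lambda$Repair item the abstract describes: a student-style recursive function whose base case has the wrong type, together with a well-typed repair.

```ocaml
(* Hypothetical λRepair-style item (illustration only, not from
   the actual benchmark). A common student type error:

     let rec sum lst =
       match lst with
       | [] -> ""              (* base case is a string... *)
       | x :: rest -> x + sum rest

   The OCaml compiler rejects this because (+) expects int but
   [sum rest] has type string. The repair changes the base case
   so the function is well-typed as [int list -> int]. *)

let rec sum lst =
  match lst with
  | [] -> 0
  | x :: rest -> x + sum rest

let () =
  assert (sum [1; 2; 3] = 6);
  print_endline "ok"
```

Errors like this are caught mechanically by the compiler, which is why compiler-driven autograding is feasible for the syntax- and type-error portions of the benchmark, while logical errors and explanations still call for manual grading.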
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Functional Programming
OCaml
Low-resource Languages
Educational Evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

low-resource programming language
functional programming education
LLM evaluation benchmark
code repair
manual grading