Perish or Flourish? A Holistic Evaluation of Large Language Models for Code Generation in Functional Programming

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of systematic evaluation of large language models’ code generation capabilities in functional programming languages, where the correctness, idiomatic style, and maintainability of generated code remain unclear. The authors propose FPEval, the first multidimensional evaluation framework tailored for functional programming, leveraging the FPBench benchmark comprising 721 tasks and combining test-based verification with static analysis to comprehensively assess mainstream models (including GPT-3.5, GPT-4o, and GPT-5) across Haskell, OCaml, and Scala. The study reveals that model performance is significantly weaker in purely functional languages than in hybrid or imperative ones, often producing non-idiomatic code that deviates from functional paradigms. However, incorporating static analysis feedback effectively guides models to self-correct, thereby enhancing both correctness and overall code quality.
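To make the "non-idiomatic code that deviates from functional paradigms" finding concrete, here is a small Haskell illustration of our own (not drawn from FPBench): both definitions are functionally correct, but the first mirrors an imperative loop with a mutable accumulator, the kind of pattern a linter such as HLint would flag, while the second composes standard higher-order functions.

```haskell
-- Imperative pattern: manual recursion with an explicit accumulator,
-- the functional transcription of a while-loop with a running counter.
sumSquaresLoop :: [Int] -> Int
sumSquaresLoop xs = go xs 0
  where
    go []     acc = acc
    go (y:ys) acc = go ys (acc + y * y)

-- Idiomatic: the same computation as a composition of map and sum.
sumSquares :: [Int] -> Int
sumSquares = sum . map (^ 2)
```

A test-based check alone cannot distinguish the two, which is why the framework pairs test suites with static analysis to assess style and maintainability as a separate dimension.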

📝 Abstract
Functional programming provides strong foundations for developing reliable and secure software systems, yet its adoption remains limited due to a steep learning curve. Recent advances in Large Language Models (LLMs) for code generation present new opportunities to lower these barriers. However, extensive evaluations of LLMs largely focus on imperative programming languages, and their capabilities in functional programming (FP) languages remain underexplored. To address this gap, we introduce FPEval, a holistic evaluation framework built on FPBench, a new benchmark of 721 programming tasks across three difficulty levels in three mainstream FP languages: Haskell, OCaml, and Scala. FPEval provides comprehensive evaluation infrastructure that combines test-based validation, backed by comprehensive test suites, with static analysis tools, assessing both functional correctness and code style and maintainability. Using this framework, we evaluate state-of-the-art LLMs, including GPT-3.5, GPT-4o, and GPT-5, for code generation in functional programming languages, with Java as an imperative baseline. Our results demonstrate that LLM performance in functional programming improves substantially with model advancement; however, error rates remain significantly higher in purely functional languages (Haskell and OCaml) than in hybrid (Scala) or imperative (Java) languages. Moreover, LLMs frequently generate non-idiomatic functional code that follows imperative patterns, raising concerns about code style and long-term maintainability. Finally, we show that LLMs can partially self-repair both correctness and quality issues when provided with static analysis feedback and hand-crafted instructions for common types of issues.
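The abstract's closing point, that models can partially self-repair when given static analysis feedback, can be sketched as a simple retry loop. The sketch below is our own illustration, not FPEval's actual code: `generate`, `runTests`, and `lint` are hypothetical stand-ins for the LLM call, the benchmark's test suite, and the static analyser.

```haskell
-- Minimal sketch of a feedback-driven self-repair loop (our illustration,
-- assuming hypothetical generate/runTests/lint interfaces).
repairLoop
  :: Int                   -- remaining attempts
  -> ([String] -> String)  -- generate: feedback so far -> candidate program
  -> (String -> Bool)      -- runTests: does the candidate pass the test suite?
  -> (String -> [String])  -- lint: static-analysis warnings for the candidate
  -> [String]              -- accumulated feedback handed back to the model
  -> Maybe String          -- a candidate that passes tests and lints clean
repairLoop 0 _ _ _ _ = Nothing
repairLoop n generate runTests lint feedback =
  let candidate = generate feedback
      warnings  = lint candidate
  in if runTests candidate && null warnings
       then Just candidate
       else repairLoop (n - 1) generate runTests lint (feedback ++ warnings)
```

Each failed attempt appends the analyser's warnings to the prompt context for the next attempt, which is the mechanism the paper reports as improving both correctness and code quality.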
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Code Generation
Functional Programming
Evaluation Framework
Code Quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

functional programming
large language models
code generation evaluation
static analysis
self-repair