Multicalibration for LLM-based Code Generation

📅 2025-12-09
🤖 AI Summary
This work addresses the misalignment between confidence scores and actual correctness in code generation by large language models (LLMs). We propose the first multidimensional calibration framework tailored for code generation, introducing multicalibration—previously unexplored in the code domain—to enable conditional calibration across fine-grained attributes such as programming language, problem complexity, and generated code length. We implement four multicalibration methods on state-of-the-art models—including Qwen3 Coder, GPT-OSS, and DeepSeek-R1-Distill—across three function synthesis benchmarks. Experimental results show that our approach improves skill score by 1.03 over the uncalibrated baseline and outperforms conventional calibration methods by 0.37. To foster reproducibility and further research, we publicly release the first standardized code calibration dataset, comprising generated code samples, likelihood estimates, and ground-truth correctness labels.
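The summary reports calibration quality as a "skill score". The paper's exact metric is not specified on this page; a common choice for probabilistic correctness predictions is the Brier skill score, which measures improvement over always predicting the base correctness rate. A minimal sketch, assuming that definition (function names are illustrative, not from the paper):

```python
def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

def brier_skill_score(probs, labels):
    """Skill relative to always predicting the base rate (higher is better).

    1.0 means perfect confidence estimates; 0.0 means no better than the
    constant base-rate predictor; negative values mean worse than it.
    """
    base_rate = sum(labels) / len(labels)
    ref = brier_score([base_rate] * len(labels), labels)
    return 1.0 - brier_score(probs, labels) / ref

# Example: confident, well-aligned predictions score close to 1.
print(brier_skill_score([0.9, 0.1, 0.8, 0.2], [1, 0, 1, 0]))  # 0.9
```

Under this reading, the reported "+1.03 in skill score" gain over uncalibrated token likelihoods would mean the raw likelihoods scored below zero, i.e., worse than the base-rate predictor.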

📝 Abstract
As AI-based code generation becomes widespread, researchers are investigating the calibration of code LLMs, i.e., ensuring that their confidence scores faithfully represent the true likelihood of code correctness. To do so, we investigate multicalibration, which can capture additional factors about a coding problem, such as complexity, code length, or the programming language used. We study four multicalibration approaches on three function synthesis benchmarks, using latest-generation code LLMs (Qwen3 Coder, GPT-OSS, DeepSeek-R1-Distill). Our results demonstrate that multicalibration yields distinct improvements over both uncalibrated token likelihoods (+1.03 in skill score) and baseline calibrations (+0.37 in skill score). We study the influence of the aforementioned factors in ablations, and make our dataset (consisting of code generations, likelihoods, and correctness labels) available for future research on code LLM calibration.
Problem

Research questions and friction points this paper is trying to address.

Ensuring code LLMs' confidence scores reflect true correctness likelihood
Investigating multicalibration to capture coding problem factors like complexity
Improving calibration over uncalibrated token likelihoods and baseline methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multicalibration improves code LLM confidence scores
It incorporates problem complexity and code length
Approach tested on latest code generation models
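The core idea above is that calibration should hold conditionally on attributes like language or complexity, not just on average. The paper's four specific multicalibration methods are not detailed on this page; the sketch below shows only the generic idea via group-conditional histogram binning, where each raw confidence is replaced by the empirical correctness rate of its (group, probability-bin) cell. All names are illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict

def multicalibrate(probs, labels, groups, n_bins=10):
    """Fit a per-(group, probability-bin) correction table and return a
    predictor that maps (raw confidence, group) -> calibrated confidence."""
    cell_sum = defaultdict(float)  # sum of correctness labels per cell
    cell_cnt = defaultdict(int)    # number of samples per cell

    def cell(g, p):
        # Clamp so p == 1.0 falls into the top bin.
        return (g, min(int(p * n_bins), n_bins - 1))

    for p, y, g in zip(probs, labels, groups):
        c = cell(g, p)
        cell_sum[c] += y
        cell_cnt[c] += 1

    def predict(p, g):
        c = cell(g, p)
        if cell_cnt[c] == 0:  # unseen cell: fall back to the raw confidence
            return p
        return cell_sum[c] / cell_cnt[c]

    return predict

# Toy fit: a model equally confident on Python and Java samples,
# but only the Python generations are actually correct.
predict = multicalibrate(
    probs=[0.9, 0.9, 0.9, 0.9],
    labels=[1, 1, 0, 0],
    groups=["python", "python", "java", "java"],
)
print(predict(0.9, "python"))  # 1.0 — confidence raised for the reliable group
print(predict(0.9, "java"))    # 0.0 — confidence lowered for the unreliable one
```

A plain (group-agnostic) calibrator would map both groups to 0.5 here, which is why conditioning on attributes such as language, complexity, or code length can recover signal that average-case calibration destroys.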
Viola Campos
RheinMain University of Applied Sciences
Machine Learning · Software Engineering · LLM4SE
Robin Kuschnereit
RheinMain University of Applied Sciences, Wiesbaden, Germany
A. Ulges
RheinMain University of Applied Sciences, Wiesbaden, Germany