From Brittle to Robust: Improving LLM Annotations for SE Optimization

📅 2026-03-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the poor performance of large language models (LLMs) in high-dimensional multi-objective software engineering (SE) optimization tasks, a weakness often caused by annotation blind spots. To overcome this limitation, the authors propose SynthCore, an approach that combines non-interactive, multi-perspective few-shot LLM annotations through ensemble learning to improve both annotation quality and optimization performance. SynthCore avoids complex prompting strategies, offering a streamlined and efficient pipeline. Experiments across 49 real-world SE tasks show that SynthCore, relying solely on LLM-generated annotations, consistently outperforms state-of-the-art baselines such as Gaussian Process models and Tree of Parzen Estimators in high-dimensional multi-objective optimization.

📝 Abstract
Software analytics often builds from labeled data. Labeling can be slow, error-prone, and expensive. When human expertise is scarce, SE researchers sometimes ask large language models (LLMs) for the missing labels. While this has been successful in some domains, recent results show that LLM-based labeling has blind spots. Specifically, their labeling is not effective for higher-dimensional multi-objective problems. To address this problem, we propose a novel LLM prompting strategy called SynthCore. When one opinion fails, SynthCore combines multiple separate opinions generated by LLMs (with no knowledge of each other's answers) into an ensemble of few-shot learners. Simpler than other strategies (e.g., chain-of-thought, multi-agent debate), SynthCore aggregates results from multiple single-prompt sessions (with no crossover between them). SynthCore has been tested on 49 SE multi-objective optimization tasks, handling tasks as diverse as software project management, Makefile configuration, and hyperparameter optimization. SynthCore's ensemble found optimizations that are better than state-of-the-art alternative approaches (Gaussian Process models, Tree of Parzen Estimators, and active learners in both exploration and exploitation modes). Importantly, these optimizations were made using data labeled by LLMs, without any human opinions. From these experiments, we conclude that ensembles of few-shot learners can successfully annotate high-dimensional multi-objective tasks. Further, we speculate that other successful few-shot prompting results could be quickly and easily enhanced using SynthCore's ensemble approach. To support open science, all our data and scripts are available at https://github.com/lohithsowmiyan/lazy-llm/tree/clusters.
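The paper's implementation details are not reproduced on this page, but the core idea the abstract describes, aggregating labels from independent LLM prompt sessions (with no crossover between them) by vote, can be sketched as follows. The function name `aggregate_annotations` and the example session data are hypothetical illustrations, not taken from the paper's code:

```python
from collections import Counter

def aggregate_annotations(opinions):
    """Majority-vote label per item across independent annotator sessions.

    `opinions` is a list of label lists, one per LLM session, all the
    same length. Each session labels every item independently, mimicking
    the "no crossover between sessions" setup described in the abstract.
    """
    n_items = len(opinions[0])
    labels = []
    for i in range(n_items):
        # Count the votes for item i across all sessions
        votes = Counter(session[i] for session in opinions)
        labels.append(votes.most_common(1)[0][0])
    return labels

# Three independent "sessions" labeling four items
sessions = [
    ["good", "bad", "good", "bad"],
    ["good", "good", "good", "bad"],
    ["bad",  "bad", "good", "good"],
]
print(aggregate_annotations(sessions))  # -> ['good', 'bad', 'good', 'bad']
```

The key design point is that each session is queried in isolation, so the ensemble's robustness comes from aggregating genuinely independent opinions rather than from elaborate prompting within any one session.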
Problem

Research questions and friction points this paper is trying to address.

LLM annotation
multi-objective optimization
software engineering
high-dimensional labeling
data annotation
Innovation

Methods, ideas, or system contributions that make the work stand out.

SynthCore
LLM prompting
ensemble few-shot learning
multi-objective optimization
software analytics