How Low Can You Go? The Data-Light SE Challenge

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper challenges the prevailing "big data + high compute" optimization paradigm in software engineering (SE), exposing its inefficiency for many SE tasks. Method: We formalize labeling cost and identify strong label-efficiency boundaries across diverse SE tasks (e.g., configuration tuning, project health prediction, test optimization); propose a lightweight baseline algorithm family integrating diversity-aware sampling, minimalist Bayesian learners, and random probing; and define principled failure/success criteria. Contribution/Results: Empirically validated on multiple public SE benchmarks against SMAC, TPE, and DEHB as baselines, our approach achieves ≥90% of state-of-the-art optimizer performance using only 30–50 labeled samples, while incurring orders-of-magnitude lower annotation and computational overhead. The results demonstrate that lightweight methods are not merely feasible but highly competitive across dozens of SE optimization tasks, enabling rapid, low-cost engineering decisions without sacrificing efficacy.

📝 Abstract
Much of software engineering (SE) research assumes that progress depends on massive datasets and CPU-intensive optimizers. Yet has this assumption been rigorously tested? The counter-evidence presented in this paper suggests otherwise: across dozens of optimization problems from recent SE literature, including software configuration and performance tuning, cloud and systems optimization, project and process-level decision modeling, behavioral analytics, financial risk modeling, project health prediction, reinforcement learning tasks, sales forecasting, and software testing, even with just a few dozen labels, very simple methods (e.g., diversity sampling, a minimal Bayesian learner, or random probes) achieve near 90% of the best reported results. Further, these simple methods perform just as well as state-of-the-art optimizers such as SMAC, TPE, and DEHB. While some tasks demand better outcomes and hence more sampling, the results seen after a few dozen samples suffice for many engineering needs, particularly when the goal is rapid and cost-efficient guidance rather than slow and exhaustive optimization. Our results highlight that some SE tasks may be better served by lightweight approaches that demand fewer labels and far less computation. We hence propose the data-light challenge: when will a handful of labels suffice for SE tasks? To enable a large-scale investigation of this issue, we contribute (1) a mathematical formalization of labeling, (2) lightweight baseline algorithms, and (3) results on public-domain data showing the conditions under which lightweight methods excel or fail. For the purposes of open science, our scripts and data are online at https://github.com/KKGanguly/NEO.
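The abstract names diversity sampling as one of the simple methods that selects which few dozen configurations to label. The paper does not specify the exact variant, so the sketch below is one plausible reading, assuming greedy farthest-point sampling over numeric configuration vectors (all function and variable names here are illustrative, not taken from the paper's code):

```python
import random
import math

def farthest_point_sample(candidates, budget, seed=0):
    """Greedy diversity sampling: repeatedly pick the candidate whose
    minimum distance to everything already selected is largest, so the
    labeled set spreads across the configuration space."""
    rng = random.Random(seed)
    selected = [rng.choice(candidates)]
    while len(selected) < budget:
        def min_dist(c):
            return min(math.dist(c, s) for s in selected)
        selected.append(max(candidates, key=min_dist))
    return selected

# Toy configuration space: 200 random 2-D configurations.
# Only 30 of them would be sent for (expensive) labeling.
rng = random.Random(1)
space = [(rng.random(), rng.random()) for _ in range(200)]
picked = farthest_point_sample(space, budget=30)
print(len(picked))
```

In a data-light setting, only the points returned by `farthest_point_sample` would be evaluated by the expensive labeling oracle (a benchmark run, a test execution, etc.), which is where the claimed annotation savings come from.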
Problem

Research questions and friction points this paper is trying to address.

Challenges the need for massive datasets in software engineering research
Proposes lightweight methods requiring few labels for SE optimization tasks
Investigates conditions where simple approaches match advanced optimizers' performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight methods achieve near 90% of best results with a few dozen labels
Simple techniques like diversity sampling match advanced optimizers' performance
Proposes the data-light challenge with a formalization, baselines, and public results
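The "minimal Bayesian learner" bullet can be read as a sequential loop: spend a few random probes, then repeatedly fit a tiny best-vs-rest naive Bayes model on the labels gathered so far and acquire the candidate that most resembles the best group. This is a hedged sketch under that assumption, not the paper's actual algorithm; every identifier here is hypothetical:

```python
import random
import math
import statistics

def gauss_logpdf(x, mu, sigma):
    # Log-density of a 1-D Gaussian, with a floor on sigma for stability.
    sigma = max(sigma, 1e-6)
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)

def nb_score(point, group):
    # Naive-Bayes log-likelihood of `point` under per-dimension Gaussians
    # fitted to the points in `group`.
    score = 0.0
    for d in range(len(point)):
        vals = [p[d] for p in group]
        score += gauss_logpdf(point[d], statistics.fmean(vals),
                              statistics.pstdev(vals))
    return score

def data_light_optimize(candidates, evaluate, budget=30, warmup=10, seed=0):
    """Minimize `evaluate` over `candidates` using only `budget` labels:
    `warmup` random probes, then Bayesian best-vs-rest acquisition."""
    rng = random.Random(seed)
    pool = list(candidates)
    rng.shuffle(pool)
    labeled = [(p, evaluate(p)) for p in pool[:warmup]]  # random probes
    pool = pool[warmup:]
    while len(labeled) < budget and pool:
        labeled.sort(key=lambda t: t[1])
        k = max(2, int(math.sqrt(len(labeled))))
        best = [p for p, _ in labeled[:k]]
        rest = [p for p, _ in labeled[k:]]
        # Acquire the candidate most like "best" and least like "rest".
        nxt = max(pool, key=lambda c: nb_score(c, best) - nb_score(c, rest))
        pool.remove(nxt)
        labeled.append((nxt, evaluate(nxt)))
    return min(labeled, key=lambda t: t[1])

# Toy objective: minimize distance to (0.7, 0.3) over 300 random configs,
# touching only 30 of them with the (notionally expensive) evaluator.
rng = random.Random(2)
configs = [(rng.random(), rng.random()) for _ in range(300)]
best_cfg, best_val = data_light_optimize(configs,
                                         lambda c: math.dist(c, (0.7, 0.3)))
print(round(best_val, 3))
```

The design choice worth noting is that the model is deliberately minimal (independent per-dimension Gaussians, no surrogate tuning), matching the paper's thesis that a few dozen carefully spent labels often reach near-best performance without a heavyweight optimizer.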