The Matthew Effect of AI Programming Assistants: A Hidden Bias in Software Evolution

📅 2025-09-27
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This paper identifies a “Matthew effect” induced by AI programming assistants in software ecosystem evolution: dominant programming languages and frameworks, benefiting from higher training-data frequency, achieve superior code-generation success rates, thereby reinforcing their popularity, accelerating convergence toward a few tools, and suppressing technological diversity and innovation. Method: We conduct the first systematic empirical investigation of this phenomenon through large-scale code-generation experiments spanning algorithm implementation and framework selection, quantifying success-rate disparities across technology stacks using multiple large language models. Contribution/Results: We find that a one-standard-deviation increase in technology popularity correlates with a 12.3% average gain in generation success rate; AI assistance significantly exacerbates technological concentration (Gini coefficient increases by 0.18). The study establishes an AI-driven technological lock-in mechanism and introduces a novel analytical framework for assessing the long-term health impact of AI on software ecosystems.
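The summary's concentration claim is stated as a change in the Gini coefficient over technology usage shares. As a minimal sketch of how such a figure could be computed (the usage shares below are hypothetical, not the paper's data):

```python
def gini(shares):
    """Gini coefficient of non-negative usage shares.

    0 = perfectly even adoption across technologies,
    values near 1 = adoption concentrated in a few tools.
    """
    xs = sorted(shares)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard closed form over sorted values:
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n, with i = 1..n
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * total) - (n + 1) / n

# Hypothetical framework usage shares (percent) before and after
# widespread AI assistance; only for illustrating the metric.
before = [30, 25, 20, 15, 10]
after_ = [55, 20, 12, 8, 5]
delta = gini(after_) - gini(before)  # positive => more concentration
```

An increase in `delta` of the magnitude the paper reports (0.18) would indicate a substantial shift of adoption toward the already-dominant tools.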

๐Ÿ“ Abstract
AI-assisted programming is rapidly reshaping software development, with large language models (LLMs) enabling new paradigms such as vibe coding and agentic coding. While prior work has focused on prompt design and code generation quality, the broader impact of LLM-driven development on the iterative dynamics of software engineering remains underexplored. In this paper, we conduct large-scale experiments on thousands of algorithmic programming tasks and hundreds of framework selection tasks to systematically investigate how AI-assisted programming interacts with the software ecosystem. Our analysis reveals **a striking Matthew effect: the more popular a programming language or framework, the higher the success rate of LLM-generated code**. This phenomenon suggests that AI systems may reinforce existing popularity hierarchies, accelerating convergence around dominant tools while hindering diversity and innovation. We provide a quantitative characterization of this effect and discuss its implications for the future evolution of programming ecosystems.
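The quantitative characterization of the effect is a per-standard-deviation relationship between popularity and generation success rate. A simple way to express that relationship is an OLS slope of success rate on z-scored popularity; the sketch below uses only the standard library, and the data points are invented for illustration:

```python
import statistics

def std_slope(popularity, success_rate):
    """OLS slope of success rate on z-scored popularity.

    Interpretable as the expected change in success rate per
    one-standard-deviation increase in popularity. Assumes the
    popularity values are not all identical (sd > 0).
    """
    mu = statistics.mean(popularity)
    sd = statistics.pstdev(popularity)
    z = [(p - mu) / sd for p in popularity]
    my = statistics.mean(success_rate)
    num = sum(zi * (y - my) for zi, y in zip(z, success_rate))
    den = sum(zi * zi for zi in z)
    return num / den

# Illustrative only: popularity scores and observed success rates
# for five hypothetical technology stacks.
pop = [10, 40, 60, 150, 400]
succ = [0.55, 0.62, 0.68, 0.74, 0.86]
slope = std_slope(pop, succ)  # > 0 indicates a popularity advantage
```

Under this framing, the paper's reported figure would correspond to a slope of about 0.123 (a 12.3-point success-rate gain per standard deviation of popularity).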
Problem

Research questions and friction points this paper is trying to address.

Investigates AI programming assistants' bias towards popular languages and frameworks
Reveals Matthew effect where dominant tools get reinforced by AI systems
Examines how AI-assisted development impacts software ecosystem diversity and innovation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzed AI coding impact on software evolution
Revealed Matthew effect in LLM code generation
Quantified popularity bias in programming ecosystems
🔎 Similar Papers