🤖 AI Summary
This study investigates socioeconomic status (SES) disparities in the use of generative AI for college application essays and their implications for admissions fairness. Analyzing over 80,000 anonymized applications to a selective U.S. university from 2020 to 2024, the research employs a novel distribution-based large language model (LLM) usage detector, trained on both synthetic and historical essays, and uses fee-waiver status as a proxy for SES. The findings reveal that while low-SES applicants exhibit a faster rate of increase in LLM adoption, their use of AI-generated content is significantly associated with reduced admission likelihood. This work provides the first large-scale longitudinal evidence suggesting that AI-assisted writing may exacerbate educational inequities and undermine the validity of essay-based evaluation in selective admissions.
📝 Abstract
Large language models (LLMs) have become popular writing tools among students and may expand access to high-quality feedback for students with less access to traditional writing support. At the same time, LLMs may standardize student voice or invite overreliance. This study examines how adoption of LLM-assisted writing varies across socioeconomic groups and how it relates to outcomes in a high-stakes context: U.S. college admissions. We analyze a de-identified longitudinal dataset of applications to a selective university from 2020 to 2024 (N = 81,663). Using a distribution-based detector trained on synthetic and historical essays to estimate LLM use, we track how student writing changed as LLM use proliferated, how adoption differed by socioeconomic status (SES), and whether potential benefits translated equitably into admissions outcomes. Using fee-waiver status as a proxy for SES, we observe post-2023 convergence in surface-level linguistic features, with the largest changes among fee-waived and rejected applicants. Estimated LLM use rose sharply in 2024 across all groups, with disproportionately larger increases among lower-SES applicants, consistent with an access hypothesis in which LLMs substitute for scarce writing support. However, increased estimated LLM use was more strongly associated with declines in predicted admission probability for lower-SES applicants than for higher-SES applicants, even after controlling for academic credentials and stylometric features. These findings raise concerns about equity and the validity of essay-based evaluation in an era of AI-assisted writing, and they provide the first large-scale longitudinal evidence linking LLM adoption, linguistic change, and evaluative outcomes in college admissions.