🤖 AI Summary
This study investigates the impact of large language models (LLMs) on political discourse and the information ecosystem during the 2024 U.S. presidential election, and assesses the efficacy of existing electoral safeguards. Method: We conduct a longitudinal, high-frequency analysis, issuing over 12,000 structured, near-daily prompts, to track response dynamics, instruction sensitivity, and latent stances across 12 mainstream LLMs on election-related topics. Our approach combines demographic-guided prompting, third-party tool integration, and an automated query pipeline. Contribution/Results: We present the first multi-model, end-to-end, high-resolution longitudinal study conducted during a major election. Key findings include significant behavioral shifts across model versions, heightened sensitivity to demographic framing, discernible inferences about candidate attributes, implicit electoral leanings, and systematic response inconsistencies. All data and code are publicly released, establishing a benchmark resource for AI governance and election-security research.
📝 Abstract
The 2024 US presidential election is the first major contest to occur in the US since the popularization of large language models (LLMs). Building on lessons from earlier shifts in media (most notably social media's well-studied role in targeted messaging and political polarization), this moment raises urgent questions about how LLMs may shape the information ecosystem and influence political discourse. While platforms have announced some election safeguards, how well these safeguards work in practice remains unclear. Against this backdrop, we conduct a large-scale, longitudinal study of 12 models, queried using a structured survey with over 12,000 questions on a near-daily cadence from July through November 2024. Our design systematically varies content and format, yielding a rich dataset that enables analyses of the models' behavior over time (e.g., across model updates), sensitivity to steering, responsiveness to instructions, and election-related knowledge and "beliefs." In the latter half of our work, we perform four analyses of the dataset that (i) study the longitudinal variation of model behavior during election season, (ii) illustrate the sensitivity of election-related responses to demographic steering, (iii) interrogate the models' beliefs about candidates' attributes, and (iv) reveal the models' implicit predictions of the election outcome. To facilitate future evaluations of LLMs in electoral contexts, we detail our methodology, from question generation to the querying pipeline and third-party tooling. We also publicly release our dataset at https://huggingface.co/datasets/sarahcen/llm-election-data-2024
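To make the demographic-steering design concrete, the sketch below shows one plausible way to cross survey questions with demographic framings before dispatching them to models. This is a hypothetical illustration, not the authors' actual pipeline: the question texts, framing strings, and the `build_prompt_batch` helper are all invented here for exposition.

```python
from itertools import product

# Hypothetical example questions and demographic framings; the paper's
# real survey contains over 12,000 structured questions.
QUESTIONS = [
    "Which candidate's economic policy would most benefit voters like me?",
    "Who do you predict will win the 2024 US presidential election?",
]

DEMOGRAPHIC_FRAMES = [
    "",  # neutral baseline: no demographic framing
    "I am a 65-year-old retired teacher from rural Ohio. ",
    "I am a 24-year-old software engineer from San Francisco. ",
]

def build_prompt_batch(questions, frames):
    """Cross every question with every demographic framing, keeping
    metadata so responses can later be compared across frames and
    across near-daily query rounds."""
    batch = []
    for frame, question in product(frames, questions):
        batch.append({
            "frame": frame.strip() or "neutral",
            "question": question,
            "prompt": f"{frame}{question}",
        })
    return batch

batch = build_prompt_batch(QUESTIONS, DEMOGRAPHIC_FRAMES)
print(len(batch))  # 3 frames x 2 questions = 6 prompts
```

Keeping the frame and question as separate metadata fields, rather than only the concatenated prompt, is what lets a longitudinal analysis attribute response differences to the demographic framing rather than to the underlying question.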