Integrating Large Language Models and Evaluating Student Outcomes in an Introductory Computer Science Course

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
The pedagogical impact of large language models (LLMs) in CS1 instruction remains poorly understood. Method: This study redesigned an undergraduate introductory computer science course to deeply integrate code-generation LLMs as learning aids rather than substitutes, guided by three design principles: (1) open-ended projects fostering AI-augmented collaboration, (2) reconceptualized formative assessment emphasizing process-oriented competencies, and (3) scaffolded development of critical tool literacy. A mixed-methods empirical evaluation combined performance analytics, surveys, and qualitative interviews. Results: Final exam scores remained statistically consistent with historical baselines, while open-ended project quality improved significantly. Over 80% of students reported enhanced conceptual understanding and debugging efficiency with LLMs, though many expressed concerns about overreliance. This work provides an early empirical validation in CS1 of the "AI-augmented learning" paradigm, demonstrating its feasibility and pedagogical value for foundational computing education.

📝 Abstract
Generative AI (GenAI) models have broad implications for education in general, impacting the foundations of what we teach and how we assess. This is especially true in computing, where LLMs tuned for coding have demonstrated shockingly good performance on the types of assignments historically used in introductory CS (CS1) courses. As a result, CS1 courses will need to change what skills are taught and how they are assessed. Computing education researchers have begun to study student use of LLMs, but there remains much to be understood about the ways that these tools affect student outcomes. In this paper, we present the design and evaluation of a new CS1 course at a large research-intensive university that integrates the use of LLMs as a learning tool for students. We describe the design principles used to create our new CS1-LLM course, our new course objectives, and evaluation of student outcomes and perceptions throughout the course as measured by assessment scores and surveys. Our findings suggest that 1) student exam performance outcomes, including differences among demographic groups, are largely similar to historical outcomes for courses without integration of LLM tools, 2) large, open-ended projects may be particularly valuable in an LLM context, and 3) students predominantly found the LLM tools helpful, although some had concerns regarding over-reliance on the tools.
Problem

Research questions and friction points this paper is trying to address.

How can LLMs be integrated as learning tools, rather than shortcuts, in CS1 courses?
How do student outcomes and perceptions change when a CS1 course is designed around LLM use?
How do exam performance and open-ended project quality compare with historical baselines from courses without LLM tools?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Design of a new CS1-LLM course that integrates LLMs as learning tools for students
New course objectives built around AI-augmented programming and critical tool literacy
Mixed-methods evaluation of student outcomes and perceptions via assessment scores and surveys