DraftMarks: Enhancing Transparency in Human-AI Co-Writing Through Interactive Skeuomorphic Process Traces

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
As generative AI becomes increasingly integrated into writing practices, the opacity of human-AI collaboration impedes readers’ ability to assess the relative contributions of human authors and AI systems. Method: We propose “materialized process traces”—a visualization technique that encodes writer-AI interaction data (e.g., edit frequency, source attribution) into physical metaphors (e.g., eraser shavings, tape residues, smudges) embedded directly in the rendered text. Using algorithmic modeling of collaboration metrics and writing trajectories, these traces are integrated into an augmented reading interface to support process legibility. Contribution/Results: Empirical evaluation demonstrates that this approach significantly improves diverse readers’ comprehension and judgment of human effort distribution, interaction evolution, and creative pathways in AI-augmented writing. It provides a scalable technical framework and design paradigm for transparency assessment in AI co-authored artifacts.

📝 Abstract
As generative AI becomes part of everyday writing, questions of transparency and productive human effort are increasingly important. Educators, reviewers, and readers want to understand how AI shaped the process: Where was human effort focused? What role did AI play in the creation of the work? How did the interaction unfold? Existing approaches often reduce these dynamics to summary metrics or simplified provenance. We introduce DraftMarks, an augmented reading tool that surfaces the human-AI writing process through familiar physical metaphors. DraftMarks employs skeuomorphic encodings such as eraser crumbs to convey the intensity of revision, and masking tape or smudges to mark AI-generated content, simulating the process within the final written artifact. Using data from writer-AI interactions, DraftMarks computes various collaboration metrics and writing traces. Through a formative study, we identified computational logic suited to different readerships, and we evaluated DraftMarks for its effectiveness in helping readers assess AI co-authored writing.
Problem

Research questions and friction points this paper is trying to address.

Enhancing transparency in human-AI collaborative writing processes
Visualizing AI contributions and human effort through physical metaphors
Providing interactive tools to assess AI co-authored writing effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses skeuomorphic encodings for human-AI writing traces
Computes collaboration metrics from writer-AI interaction data
Provides interactive reading tool with physical metaphors
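To make the metric-to-metaphor pipeline concrete, here is a minimal sketch of how per-span interaction data could be mapped to skeuomorphic traces. The `Span` schema, the function names, and the intensity thresholds are all illustrative assumptions, not DraftMarks' actual algorithm or data model:

```python
from dataclasses import dataclass

@dataclass
class Span:
    """A contiguous region of the final text with interaction metadata.
    Fields are assumptions for illustration, not DraftMarks' schema."""
    text: str
    edit_count: int      # revisions the writer made to this span
    ai_generated: bool   # whether the span originated from the AI

def revision_intensity(span: Span, max_edits: int) -> float:
    """Normalize edit frequency to [0, 1]; could drive crumb density."""
    if max_edits == 0:
        return 0.0
    return min(span.edit_count / max_edits, 1.0)

def trace_for(span: Span, max_edits: int) -> str:
    """Choose a trace: tape marks AI text; eraser crumbs scale with
    how heavily the human revised the span (thresholds are arbitrary)."""
    if span.ai_generated:
        return "tape"
    intensity = revision_intensity(span, max_edits)
    if intensity > 0.66:
        return "heavy-crumbs"
    if intensity > 0.33:
        return "light-crumbs"
    return "none"

# Toy interaction log for three spans of a finished draft.
spans = [
    Span("Intro sentence.", edit_count=9, ai_generated=False),
    Span("AI-drafted claim.", edit_count=1, ai_generated=True),
    Span("Lightly edited close.", edit_count=2, ai_generated=False),
]
max_edits = max(s.edit_count for s in spans)
print([trace_for(s, max_edits) for s in spans])
# → ['heavy-crumbs', 'tape', 'none']
```

A rendering layer would then draw the chosen trace (crumbs, tape, smudges) directly over the corresponding span in the augmented reading view.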
Momin N. Siddiqui
Georgia Institute of Technology
Nikki Nasseri
University of California, Berkeley
Adam Coscia
Georgia Institute of Technology
Roy Pea
Stanford University
David Jacks Professor of Learning Sciences and Education, and Computer Science (Courtesy)
Learning sciences, educational technologies, collaborative learning, STEM learning
Hari Subramonyam
Stanford University