🤖 AI Summary
This work addresses the challenge of transforming unstructured feedback from simulated user agents in usability testing into actionable user experience insights. To this end, it introduces UXCascade, an interactive analysis tool built around a multi-level analytical framework that integrates user personas, task objectives, and usability issues to link agent reasoning traces with specific interface problems. The system supports exploratory analysis, from macro-level patterns to micro-level refinements, through structured overviews, reasoning trace visualization, annotation views, and interactive editing capabilities. A user study demonstrates that UXCascade integrates effectively into existing UX workflows, facilitating rapid iteration during early design stages and yielding high-value, actionable feedback.
📝 Abstract
Simulated user agents are increasingly used in usability testing to support fast, iterative UX workflows, generating rich data such as action logs and think-aloud reasoning. However, the unstructured nature of this output often obscures actionable insights. We present UXCascade, an interactive tool for extracting, aggregating, and presenting agent-generated usability feedback at scale. Our core contribution is a multi-level analysis workflow that (1) highlights patterns across persona traits, goals, and outcomes, (2) links agent reasoning to specific usability issues, and (3) supports actionable design improvements. UXCascade operationalizes this workflow by presenting agent goals, traits, and issues in a structured overview. Practitioners can explore detailed reasoning traces and annotated views, propose interface edits, and assess their impact across personas. This enables a top-down, exploration-driven analysis that moves from high-level patterns to concrete UX interventions. A user study with eight UX professionals demonstrates that UXCascade integrates into existing workflows, enabling iterative feedback during early-stage interface development.