🤖 AI Summary
This work addresses the challenge of non-intrusively reverse-engineering user UI interactions from screencast videos to enable automated script generation. The proposed method introduces the first end-to-end screencast-to-structured-action parsing framework, employing a multi-task joint learning model that simultaneously classifies 11 interaction command types, localizes UI controls, and generates natural-language spatial descriptions (i.e., “command–control–location” triples). Technically, it integrates action-oriented video understanding, structured semantic parsing, and multimodal visual modeling. Evaluated on 7,260 real-world video–action pairs across five diverse applications (Microsoft Word, Zoom, Firefox, Photoshop, and Windows 10 Settings), the approach significantly improves UI-level bug reproduction efficiency. The tool is open-sourced and empirically validated. Key contributions include: (1) the first end-to-end paradigm for structured UI action parsing; (2) a novel multi-task collaborative modeling mechanism; and (3) a lightweight, cross-application generalizable video understanding architecture.
📝 Abstract
UI automation is a useful technique for UI testing, bug reproduction, and robotic process automation. Recording user actions with an application assists rapid development of UI automation scripts, but existing recording techniques are intrusive, rely on OS or GUI framework accessibility support, or assume specific app implementations. Reverse engineering user actions from screencasts is non-intrusive, but a key reverse-engineering step is currently missing - recognizing human-understandable structured user actions ([command] [widget] [location]) from action screencasts. To fill the gap, we propose a deep learning-based computer vision model that can recognize 11 commands and 11 widgets, and generate location phrases from action screencasts, through joint learning and multi-task learning. We label a large dataset with 7260 video-action pairs, which record user interactions with Word, Zoom, Firefox, Photoshop, and Windows 10 Settings. Through extensive experiments, we confirm the effectiveness and generality of our model, and demonstrate the usefulness of a screencast-to-action-script tool built upon our model for bug reproduction.