🤖 AI Summary
Traditional cognitive assessments rely on highly controlled, brief tasks, limiting their ability to capture intra- and inter-individual variability. To address this, we introduce PixelDOPA, a Unity-based suite of immersive 3D microgames designed to systematically measure processing speed, rule shifting, inhibitory control, and working memory while synchronously recording behavioral responses, eye movements, and motion trajectories. Our approach innovates by incorporating process-oriented metrics (e.g., oculomotor response timing) and unsupervised trajectory modeling to uncover stable, strategy-specific behavioral patterns. In a clinical sample of 60 participants, PixelDOPA demonstrated strong construct validity and test–retest reliability comparable to the NIH Toolbox (Pearson's *r* = 0.50–0.93; ICC = 0.50–0.92), with superior ecological validity. Furthermore, behavioral trajectories enabled robust cross-session participant identification, enhancing sensitivity to individual differences and improving overall data quality.
📝 Abstract
Studies of human cognition often rely on brief, highly controlled tasks that emphasize group-level effects but poorly capture the rich variability within and between individuals. Here, we present PixelDOPA, a suite of minigames designed to overcome these limitations by embedding classic cognitive task paradigms in an immersive 3D virtual environment with continuous behavior logging. Four minigames probe overlapping constructs (processing speed, rule shifting, inhibitory control, and working memory), which we compare against established NIH Toolbox tasks. In a clinical sample of 60 participants tested outside a controlled laboratory setting, we found significant, large correlations (r = 0.50–0.93) between the PixelDOPA tasks and their NIH Toolbox counterparts, despite differences in stimuli and task structure. Process-informed metrics (e.g., gaze-based response times derived from continuous logging) substantially improved both task convergence and data quality. Test–retest analyses revealed high reliability (ICC = 0.50–0.92) for all minigames. Beyond endpoint metrics, movement and gaze trajectories revealed stable, idiosyncratic profiles of gameplay strategy, with unsupervised clustering distinguishing participants by their navigational and viewing behaviors. These trajectory-based features showed lower within-person than between-person variability, enabling player identification across repeated sessions. Game-based tasks can therefore retain the psychometric rigor of standard cognitive assessments while providing new insight into dynamic behaviors. By leveraging a highly engaging, fully customizable game engine, we show that comprehensive behavioral tracking boosts the power to detect individual differences, offering a path toward cognitive measures that are both robust and ecologically valid, even in less-than-ideal settings for data collection.
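The cross-session identification result rests on a simple property: when trajectory-derived features vary less within a person than between people, a nearest-neighbor match from one session's features to another's recovers identity. The sketch below illustrates that logic on synthetic data; the feature construction, noise levels, and matching rule are illustrative assumptions, not PixelDOPA's actual pipeline.

```python
# Hypothetical sketch of cross-session participant identification from
# trajectory-derived features. Each player has a stable "strategy profile";
# each session observes that profile plus noise. Feature dimensionality and
# noise scale are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_players, n_features = 12, 6

# Latent, person-specific strategy profiles (e.g., summary statistics of
# navigation and gaze trajectories).
profiles = rng.normal(size=(n_players, n_features))

# Two sessions: within-person noise is small relative to between-person spread.
session1 = profiles + 0.1 * rng.normal(size=profiles.shape)
session2 = profiles + 0.1 * rng.normal(size=profiles.shape)

def identify(gallery: np.ndarray, probes: np.ndarray) -> np.ndarray:
    """Assign each probe row to the index of its nearest gallery row
    (Euclidean distance)."""
    dists = np.linalg.norm(probes[:, None, :] - gallery[None, :, :], axis=-1)
    return dists.argmin(axis=1)

predicted = identify(session1, session2)
accuracy = (predicted == np.arange(n_players)).mean()
```

In this regime identification is nearly perfect; shrinking the gap between within- and between-person variability degrades it, which is why the reliability of the trajectory features matters as much as their discriminability.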