Multidimensional Bayesian Active Machine Learning of Working Memory Task Performance

📅 2025-09-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional cognitive experiments often rely on unidimensional staircase procedures, limiting their ability to characterize interactive effects between multiple cognitive load dimensions—such as spatial load (L) and feature-binding load (K). To address this, we propose a novel dual-axis adaptive experimental paradigm grounded in Bayesian active learning. Within a virtual reality working memory task, L and K are concurrently modulated; a Gaussian process classifier dynamically selects optimal trials based on posterior uncertainty, enabling efficient construction of individualized cognitive response surfaces. This represents the first implementation of two-variable active learning in cognitive experimentation, achieving rapid convergence with only ~30 trials. Validated in a young adult sample, the method demonstrates strong test–retest reliability (within-subject intraclass correlation coefficient = 0.755) and substantially improves both efficiency and individual specificity in modeling multidimensional cognitive load.

Technology Category

Application Category

📝 Abstract
While adaptive experimental design has outgrown one-dimensional, staircase-based adaptations, most cognitive experiments still control a single factor and summarize performance with a scalar. We show a validation of a Bayesian, two-axis, active- classification approach, carried out in an immersive virtual testing environment for a 5-by-5 working-memory reconstruction task. Two variables are controlled: spatial load L (number of occupied tiles) and feature-binding load K (number of distinct colors) of items. Stimulus acquisition is guided by posterior uncertainty of a nonparametric Gaussian Process (GP) probabilistic classifier, which outputs a surface over (L, K) rather than a single threshold or max span value. In a young adult population, we compare GP-driven Adaptive Mode (AM) with a traditional adaptive staircase Classic Mode (CM), which varies L only at K = 3. Parity between the methods is achieved for this cohort, with an intraclass coefficient of 0.755 at K = 3. Additionally, AM reveals individual differences in interactions between spatial load and feature binding. AM estimates converge more quickly than other sampling strategies, demonstrating that only about 30 samples are required for accurate fitting of the full model.
Problem

Research questions and friction points this paper is trying to address.

Develops multidimensional Bayesian active learning for cognitive tasks
Controls spatial and feature-binding loads in working memory experiments
Compares adaptive Gaussian Process method with traditional staircase approach
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian active classification with two controlled variables
Gaussian Process classifier outputs performance surface
Adaptive Mode converges faster than traditional methods
🔎 Similar Papers
No similar papers found.
D
Dom CP Marticorena
Department of Biomedical Engineering, Washington University, 1 Brookings Drive, St. Louis, MO 63130
C
Chris Wissmann
Department of Biomedical Engineering, Washington University, 1 Brookings Drive, St. Louis, MO 63130
Zeyu Lu
Zeyu Lu
Shanghai Jiao Tong University
AIGCLarge Language ModelDiffusion Model
D
Dennis L Barbour
Department of Biomedical Engineering, Washington University, 1 Brookings Drive, St. Louis, MO 63130; Department of Computer Science and Engineering, Washington University, 1 Brookings Drive, St. Louis, MO 63130