🤖 AI Summary
In human-computer interaction (HCI) research, multimodal physiological (e.g., EEG, ECG, GSR) and behavioral experiments face significant challenges, including high cost, cumbersome hardware debugging, and complex cross-device synchronization. To address these challenges, this paper introduces the first cross-platform software framework enabling real-time simulation, dynamic signal modification, and nanosecond-precision temporal alignment of heterogeneous, geographically distributed physiological and behavioral devices. The framework employs signal-flow abstraction, device-agnostic APIs, a high-accuracy timestamp synchronization engine, and an integrated visualization interface, supporting hardware-free experimental rehearsal, protocol validation, and robustness testing. Its remote broadcast-based simulation mechanism substantially reduces experimental setup time. Deployed across multiple HCI laboratories, it has accelerated pilot studies for eye-tracking–physiology hybrid paradigms. This work establishes a reusable, extensible, and standardized simulation infrastructure for multimodal user research.
📝 Abstract
Conducting user studies that involve physiological and behavioral measurements is time-consuming and expensive: beyond careful experiment design and device calibration, it also demands thorough software testing. We propose Thalamus, a software toolkit for collecting and simulating multimodal signals that helps experimenters prepare for unexpected situations before recruiting actual study participants, and even before installing or purchasing a specific device. Among other features, Thalamus allows the experimenter to modify, synchronize, and broadcast physiological signals (as if coming from various data streams) from different devices simultaneously, even when those devices are not located in the same place. Thalamus is cross-platform, cross-device, and simple to use, thus making it a valuable asset for HCI research.
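To make the simulate–synchronize workflow concrete, here is a minimal sketch in plain Python. It is not Thalamus's actual API (which the abstract does not specify); the function names `simulate_stream` and `merge_streams` and the sinusoidal "signals" are illustrative assumptions, showing only the general idea of generating timestamped samples from multiple virtual devices and aligning them on a shared clock.

```python
import heapq
import math

def simulate_stream(name, rate_hz, n_samples, t0=0.0, freq=1.0):
    """Yield (timestamp, device_name, value) tuples for a synthetic
    sinusoidal signal, standing in for a real physiological sensor."""
    dt = 1.0 / rate_hz
    for i in range(n_samples):
        t = t0 + i * dt
        yield (t, name, math.sin(2 * math.pi * freq * t))

def merge_streams(*streams):
    """Merge timestamped samples from several streams into a single
    time-ordered feed, as a stand-in for cross-device synchronization."""
    return list(heapq.merge(*streams, key=lambda sample: sample[0]))

# Example: a 4 Hz "GSR" stream and a 2 Hz "ECG" stream on a shared clock.
merged = merge_streams(
    simulate_stream("gsr", rate_hz=4, n_samples=8),
    simulate_stream("ecg", rate_hz=2, n_samples=4),
)
```

In a real deployment the merged feed would be broadcast over the network to remote experiment machines; here it is kept in memory so the sketch stays self-contained.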