🤖 AI Summary
This work addresses a core challenge of automated microscopy: the most valuable scientific information is often embedded in sequentially acquired spectral or functional response landscapes, which conventional imaging cannot reveal directly. To tackle this, the authors propose BEACON, a framework that, for the first time, brings novelty-driven active exploration to microscope-based discovery in the target space. BEACON uses deep kernel learning to model structure–response relationships on the fly and steers the experiment toward diverse, previously unseen response regions. The study also establishes a reproducible benchmarking protocol that explicitly disentangles exploration quality from optimization performance and introduces a metric for evaluating target-space coverage. In experiments, BEACON significantly outperforms classical acquisition strategies on offline datasets and has been successfully deployed on a scanning transmission electron microscope (STEM) for efficient real-time scientific discovery.
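The acquisition idea can be sketched in a few lines: fit a probabilistic surrogate to the structure–response pairs collected so far, draw a plausible response for each candidate measurement location from the posterior (Thompson sampling), and score candidates by how far that sampled response lies from the responses already observed. This is a minimal illustration under stated assumptions, not the authors' implementation: the paper uses deep kernel learning rather than the fixed RBF kernel below, real responses are typically multi-dimensional, and the function names (`gp_posterior`, `novelty_acquisition`) are hypothetical.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel between the row vectors of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(Xobs, yobs, Xcand, noise=1e-6):
    # Standard Gaussian-process regression posterior (mean and covariance).
    K = rbf(Xobs, Xobs) + noise * np.eye(len(Xobs))
    Ks = rbf(Xcand, Xobs)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ yobs
    cov = rbf(Xcand, Xcand) - Ks @ Kinv @ Ks.T
    return mu, cov

def novelty_acquisition(Xobs, yobs, Xcand, rng, k=3):
    # Thompson-sample a plausible scalar response at each candidate, then
    # score it by its mean distance to the k nearest responses seen so far;
    # the most "novel" candidate is measured next.
    mu, cov = gp_posterior(Xobs, yobs, Xcand)
    sample = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(mu)))
    dists = np.abs(sample[:, None] - yobs[None, :])
    dists.sort(axis=1)
    novelty = dists[:, : min(k, len(yobs))].mean(axis=1)
    return int(np.argmax(novelty))
```

In a closed loop, the selected candidate would be measured on the instrument, appended to `(Xobs, yobs)`, and the surrogate refit, so the notion of "novel" evolves as the experiment proceeds.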
📝 Abstract
Modern automated microscopy faces a fundamental discovery challenge: in many systems, the most important scientific information does not reside in the immediately visible image features but in the target space of sequentially acquired spectra or functional responses. This makes it essential to develop strategies that actively search for new behaviors rather than simply optimize known objectives. Here, we developed a deep-kernel-learning BEACON framework explicitly designed to guide discovery in the target space by learning structure–property relationships during the experiment and using that evolving model to seek out diverse response regimes. We first established the method through demonstration workflows built on pre-acquired ground-truth datasets, which enabled direct benchmarking against classical acquisition strategies and allowed us to define a set of monitoring functions for comparing exploration quality, target-space coverage, and surrogate-model behavior in a transparent and reproducible manner. This benchmarking framework provides a practical basis for evaluating discovery-driven algorithms, not just optimization performance. We then operationalized and deployed the workflow on a scanning transmission electron microscope (STEM), showing that the approach can transition from offline validation to real experimental implementation. To support adoption and extension by the broader community, the associated notebooks are available, allowing users to reproduce the workflows, test the benchmarks, and adapt the method to their own instruments and datasets.
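One simple way to monitor target-space coverage of the kind the abstract describes is bin occupancy: discretize the response space into equal-width cells and report the fraction of cells that contain at least one observed response. The function below is a hypothetical sketch of such a monitoring function, not the paper's definition; the bounds `lo`/`hi` and the bin count are assumptions supplied by the user.

```python
import numpy as np

def target_space_coverage(responses, lo, hi, n_bins=20):
    """Fraction of equal-width cells of the target space containing at
    least one observed response.

    responses: (n, d) array of observed responses.
    lo, hi: per-dimension bounds of the target space (scalars broadcast).
    """
    r = np.asarray(responses, float)
    lo = np.broadcast_to(np.asarray(lo, float), (r.shape[1],))
    hi = np.broadcast_to(np.asarray(hi, float), (r.shape[1],))
    # Map each response to a cell index, clipping values at the upper edge.
    frac = (r - lo) / (hi - lo)
    idx = np.clip((frac * n_bins).astype(int), 0, n_bins - 1)
    occupied = len({tuple(row) for row in idx})
    return occupied / n_bins ** r.shape[1]
```

Tracking this quantity per iteration, alongside surrogate-model diagnostics, gives the kind of exploration-quality monitoring curve that can be compared across acquisition strategies on the same ground-truth dataset.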