🤖 AI Summary
To address event-response latency in high-throughput microfluidic single-cell imaging caused by continuous frame acquisition, this work proposes the first end-to-end event-driven microscopy framework integrating deep learning–based autofocus, real-time segmentation evaluation, and interactive visualization dashboards. We systematically benchmark 11 deep learning segmentation models under single-cell imaging constraints, identifying Cellpose 3 as the most accurate (Panoptic Quality: 93.58%). We further introduce a lightweight distance-transform-based segmentation method that balances speed (121 ms inference) and accuracy (PQ: 93.02%). The autofocus module attains a mean absolute error of only 0.0226 µm with inference latency below 50 ms. Experimental evaluation reveals that all six deep learning foundation models fail to meet real-time requirements; our framework demonstrably enhances immediate capture and closed-loop analysis of stochastic biological events.
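Panoptic Quality (PQ), the metric quoted above, jointly scores instance matching and mask overlap. A minimal sketch of the standard definition (the helper below is illustrative, not code from the paper):

```python
def panoptic_quality(matched_ious, n_pred, n_gt):
    """PQ = (sum of IoUs over matched pairs) / (TP + 0.5*FP + 0.5*FN).

    matched_ious: IoU of each matched (prediction, ground-truth) pair;
    a match requires IoU > 0.5, which makes matches unique.
    """
    tp = len(matched_ious)            # true positives: matched instances
    fp = n_pred - tp                  # unmatched predicted instances
    fn = n_gt - tp                    # unmatched ground-truth instances
    if tp + fp + fn == 0:
        return 1.0                    # empty image, empty prediction
    return sum(matched_ious) / (tp + 0.5 * fp + 0.5 * fn)

# Two good matches plus one spurious prediction
print(panoptic_quality([0.9, 0.8], n_pred=3, n_gt=2))  # ≈ 0.68
```

The denominator penalizes both missed cells (FN) and spurious detections (FP) at half weight, so PQ drops even when every matched mask is pixel-perfect.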
📝 Abstract
Microfluidic Live-Cell Imaging yields rich data on microbial cell factories. However, continuous acquisition is challenging: high-throughput experiments often lack real-time insights, delaying responses to stochastic events. We introduce three components in the Experiment Automation Pipeline for Event-Driven Microscopy to Smart Microfluidic Single-Cell Analysis: a fast, accurate Deep Learning autofocusing method that predicts the focus offset, an evaluation of real-time segmentation methods, and a real-time data analysis dashboard. Our autofocusing achieves a Mean Absolute Error of 0.0226 µm with inference times below 50 ms. Among eleven Deep Learning segmentation methods, Cellpose 3 reached a Panoptic Quality of 93.58%, while a distance-based method was fastest (121 ms, Panoptic Quality 93.02%). All six Deep Learning Foundation Models were unsuitable for real-time segmentation.
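The distance-based segmentation method is not detailed in the abstract; a common distance-transform pipeline splits touching cells by seeding instances at the cores of the distance map. The sketch below is illustrative only (synthetic mask, SciPy-only implementation, not the paper's code):

```python
import numpy as np
from scipy import ndimage as ndi

# Toy binary mask: two touching circular "cells" (synthetic, for illustration)
yy, xx = np.ogrid[:40, :40]
mask = ((yy - 20) ** 2 + (xx - 13) ** 2 < 81) | ((yy - 20) ** 2 + (xx - 27) ** 2 < 81)

# 1) Distance transform: each foreground pixel -> distance to nearest background
dist = ndi.distance_transform_edt(mask)

# 2) Threshold the distance map to get one "core" per cell, then label the cores
cores = dist > 0.75 * dist.max()
markers, n_cells = ndi.label(cores)

# 3) Assign every foreground pixel to its nearest core (a cheap watershed substitute)
_, (iy, ix) = ndi.distance_transform_edt(markers == 0, return_indices=True)
labels = np.where(mask, markers[iy, ix], 0)

print(n_cells)  # the two touching cells are separated into two instances
```

Because steps 1–3 are plain array operations with no learned weights, such methods can run in low-millisecond latency, which is consistent with the 121 ms figure reported for the distance-based approach.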