🤖 AI Summary
AI systems have historically struggled to deeply engage in real-world physical experiments, remaining largely confined to computational design tasks.
Method: This work introduces the first end-to-end AI co-scientist system. It integrates multimodal large language model agents, real-time perception via smart glasses, extended reality (XR)-enabled human-AI interaction, and self-evolving algorithms to achieve visual synchronization with experimental environments, contextual understanding, and dynamic collaboration.
Contribution/Results: The system overcomes the longstanding limitation of AI as a passive computational tool by enabling real-time, embodied collaboration with human scientists in live laboratory settings. Evaluated on complex, high-impact tasks—including cancer immunotherapy target discovery and stem cell engineering—the system demonstrates significant improvements in experimental throughput, reproducibility, and scientific insight generation. It establishes a foundational framework for intelligent laboratories, advancing the paradigm toward human–AI symbiotic scientific discovery.
📝 Abstract
Modern science advances fastest when thought meets action. LabOS is the first AI co-scientist that unites computational reasoning with physical experimentation through multimodal perception, self-evolving agents, and Extended Reality (XR)-enabled human-AI collaboration. By connecting multimodal AI agents, smart glasses, and XR interfaces, LabOS allows AI to see what scientists see, understand experimental context, and assist in real-time execution. Across applications, from cancer immunotherapy target discovery to stem-cell engineering, LabOS shows that AI can move beyond computational design to physical participation, turning the laboratory into an intelligent, collaborative environment where human and machine discovery evolve together.