🤖 AI Summary
This work addresses the fundamental trade-off among form factor, weight, aesthetic appeal, and imaging quality in all-day-wearable lightweight smart glasses. We first derive the theoretical physical imaging limits for such devices and propose a novel distributed multi-view imaging paradigm: decomposing the conventional monolithic camera module into multiple miniaturized optical units, followed by high-fidelity image reconstruction via physics-informed multi-view fusion. Our methodology integrates theoretical limit analysis, distributed optical design, fusion algorithm development, and hardware-software co-verification. Two prototype systems are implemented, achieving over 60% reduction in camera module volume while maintaining imaging quality sufficient for everyday visual tasks. This significantly enhances wearability and social acceptability. The proposed architecture establishes a scalable foundation for next-generation lightweight smart glasses that simultaneously deliver high imaging fidelity and user-centric design.
📝 Abstract
In recent years, smart glasses technology has advanced rapidly, opening up entirely new areas for mobile computing. We expect that future smart glasses will need to be all-day wearable, adopting a small form factor to meet requirements on volume, weight, fashionability, and social acceptability, which puts significant constraints on the space of possible solutions. Additional challenges arise because smart glasses are worn in arbitrary environments while their wearer moves and performs everyday activities. In this paper, we systematically analyze the space of imaging from smart glasses and derive several fundamental limits that govern this imaging domain. We discuss the impact of these limits on achievable image quality and camera module size, comparing in particular to related devices such as mobile phones. We then propose a novel distributed imaging approach that allows us to minimize the size of the individual camera modules compared to a standard monolithic camera design. Finally, we demonstrate the properties of this novel approach in a series of experiments using synthetic data as well as images captured with two different prototype implementations.
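One intuition behind the distributed approach is that several small, lower-quality views can be fused into one higher-quality image. The sketch below is a hypothetical illustration, not the paper's actual reconstruction algorithm: it assumes perfectly registered views with independent Gaussian sensor noise, in which case simple averaging improves SNR by roughly the square root of the number of cameras. The camera count and noise level are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

scene = rng.uniform(0.0, 1.0, size=(64, 64))  # ground-truth radiance map
n_views, sigma = 4, 0.05                      # assumed camera count / read noise

# Each miniature camera observes the same (perfectly registered) scene,
# corrupted by independent Gaussian sensor noise.
views = [scene + rng.normal(0.0, sigma, scene.shape) for _ in range(n_views)]

# Naive multi-view fusion: pixel-wise average across all views.
fused = np.mean(views, axis=0)

err_single = np.std(views[0] - scene)  # noise level of one small camera
err_fused = np.std(fused - scene)      # noise level after fusion
print(err_single / err_fused)          # close to sqrt(n_views) = 2
```

In practice the views from distributed camera modules have different viewpoints and optics, so real fusion also requires registration and a physics-informed image formation model; this toy example only isolates the noise-averaging benefit.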