🤖 AI Summary
To address the degradation of measurement accuracy in vision-based metrology systems caused by image blur and detail loss, this paper proposes a lightweight single-image super-resolution (SISR) method. Methodologically, the authors introduce a semantic-guided reconstruction framework that leverages pretrained semantic priors to steer high-frequency recovery; design a global-local collaborative module with a hybrid attention mechanism for efficient multi-scale feature modeling; and adopt a lightweight convolutional architecture to minimize computational overhead. Extensive experiments on multiple benchmark datasets demonstrate competitive PSNR and SSIM, with a reduction of 12.81 G multi-adds compared to state-of-the-art lightweight methods. The proposed approach improves reconstruction fidelity, metrological accuracy, and real-time inference capability, making it well suited to resource-constrained visual measurement applications.
📝 Abstract
Single-Image Super-Resolution (SISR) plays a pivotal role in enhancing the accuracy and reliability of measurement systems, which are integral to various vision-based instrumentation and measurement applications. These systems often require clear and detailed images for precise object detection and recognition. However, images captured by visual measurement tools frequently suffer from degradation, including blurring and loss of detail, which can impede measurement accuracy. As a remedy, in this paper we propose a Semantic-Guided Global-Local Collaborative Network (SGGLC-Net) for lightweight SISR. SGGLC-Net leverages semantic priors extracted from a pre-trained model to guide the super-resolution process, effectively enhancing image detail quality. Specifically, we propose a Semantic Guidance Module that seamlessly integrates the semantic priors into the super-resolution network, enabling the network to more adeptly capture and exploit those priors and thereby enhance image details. To further explore both local and non-local interactions for improved detail rendition, we propose a Global-Local Collaborative Module, which comprises three Global and Local Detail Enhancement Modules together with a Hybrid Attention Mechanism that work in concert to efficiently learn more useful features. Extensive experiments show that SGGLC-Net achieves competitive PSNR and SSIM values across multiple benchmark datasets while reducing multi-adds by 12.81 G compared to state-of-the-art lightweight super-resolution approaches. These improvements underscore the potential of our approach to enhance the precision and effectiveness of visual measurement systems. Code is available at https://github.com/fanamber831/SGGLC-Net.
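The core idea of semantic guidance — injecting priors from a pretrained model into the SR feature stream — can be sketched as below. This is a minimal illustrative PyTorch module, not the paper's actual implementation: the class name `SemanticGuidanceModule`, the channel sizes, and the gating-plus-concatenation fusion are all assumptions made for the sketch; the real module in the SGGLC-Net repository may differ.

```python
import torch
import torch.nn as nn

class SemanticGuidanceModule(nn.Module):
    """Hypothetical sketch: fuse semantic priors (from a pretrained
    backbone) with SR image features via a channel gate + concat fusion."""
    def __init__(self, feat_ch: int = 32, sem_ch: int = 64):
        super().__init__()
        self.proj = nn.Conv2d(sem_ch, feat_ch, kernel_size=1)  # align channels
        self.gate = nn.Sequential(                # channel attention driven by priors
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(feat_ch, feat_ch, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * feat_ch, feat_ch, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, sem: torch.Tensor) -> torch.Tensor:
        sem = self.proj(sem)                      # project priors to feature space
        feat = feat * self.gate(sem)              # modulate features by semantic gate
        return self.fuse(torch.cat([feat, sem], dim=1))  # merge both streams

feat = torch.randn(1, 32, 48, 48)   # low-resolution image features
sem = torch.randn(1, 64, 48, 48)    # semantic priors from a pretrained model
out = SemanticGuidanceModule()(feat, sem)
print(tuple(out.shape))  # (1, 32, 48, 48)
```

The gate-then-concatenate pattern keeps the module lightweight (only 1x1 and 3x3 convolutions), which matches the paper's emphasis on low multi-adds for resource-constrained deployment.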