🤖 AI Summary
To address insufficient frame identification and synchronization robustness in screen-to-camera visible light communication (S2C-VLC) under mobile scenarios—where motion-induced image blur, cropping, and rotation degrade performance—this paper proposes a lightweight supervised CNN-based approach. The method introduces a novel auxiliary synchronization frame structure and incorporates temporal alignment priors to achieve high-accuracy frame boundary detection and real-time synchronization. Implemented in TensorFlow Keras, the model is trained end-to-end on a custom dynamic distortion dataset and contains only 1.2 million parameters. Experimental results demonstrate a frame identification accuracy of 98.74%, surpassing conventional methods by over 12 percentage points. The model maintains stable performance under severe conditions including high-speed translation, rotation, and motion blur. This significantly enhances the practicality and robustness of S2C-VLC systems for short-range mobile communication.
📝 Abstract
This paper proposes a novel, robust, and lightweight supervised Convolutional Neural Network (CNN)-based technique for frame identification and synchronization, designed to enhance short-link communication performance in a screen-to-camera (S2C) based visible light communication (VLC) system. Developed in Python with the TensorFlow Keras framework, the proposed CNN model was trained through three real-time experimental investigations conducted in Jupyter Notebook. These experiments used a dataset created from scratch to capture real-time challenges in S2C communication, including blurred, cropped, and rotated images in mobility scenarios. Overhead frames were introduced for synchronization, leading to enhanced system performance. The experimental results demonstrate that the proposed model achieves an overall accuracy of approximately 98.74%, confirming its effectiveness in identifying and synchronizing frames in S2C VLC systems.
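The paper does not include its implementation, but the described setup (a lightweight supervised Keras CNN that classifies captured screen frames, with dedicated overhead frames marking synchronization points) can be sketched roughly as follows. The architecture, layer sizes, input resolution, and the function name `build_frame_classifier` are all illustrative assumptions, not the authors' model; the only constraints taken from the text are the TensorFlow Keras framework and a parameter budget small enough to stay lightweight (about 1.2 million parameters).

```python
# Hypothetical sketch, NOT the authors' released code: a small Keras CNN that
# labels a captured frame as a data frame or a synchronization ("overhead")
# frame. Layer choices and input size are assumptions for illustration only.
import tensorflow as tf

def build_frame_classifier(input_shape=(64, 64, 3), num_classes=2):
    """Build a lightweight CNN frame classifier (illustrative architecture)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        # Stacked conv blocks extract features robust to blur/crop/rotation,
        # assuming such distortions are represented in the training set.
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
        # Global pooling keeps the head small and the model lightweight.
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_frame_classifier()
```

In use, the receiver would run each captured frame through the classifier; frames predicted as overhead frames mark synchronization boundaries, and the frames between them are decoded as payload. This particular sketch comes in well under the stated 1.2-million-parameter budget, which is the point of using global average pooling instead of flattening before the dense head.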