A Multi-Year Urban Streetlight Imagery Dataset for Visual Monitoring and Spatio-Temporal Drift Detection

📅 2025-12-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses model degradation in long-term deployments of vision systems for smart cities. We introduce a large-scale, real-world urban dataset comprising over 526,000 streetlamp images captured hourly from 22 fixed viewpoints in Bristol between 2021 and 2025, with fine-grained spatiotemporal metadata and rich diversity across seasons, weather conditions, and illumination levels. To model visual drift, we propose a self-supervised drift representation framework based on a CNN-VAE architecture, trained per camera node and per day/night split, together with a dual-metric quantification method: relative centroid drift and normalized reconstruction error. Our MLOps strategy improves model sensitivity to drift and long-term stability over a four-year operational horizon. The dataset is publicly released on Zenodo (JPEG/CSV format) and supports downstream tasks including streetlamp status monitoring, weather inference, and scene understanding.

📝 Abstract
We present a large-scale, longitudinal visual dataset of urban streetlights captured by 22 fixed-angle cameras deployed across Bristol, U.K., from 2021 to 2025. The dataset contains over 526,000 images, collected hourly under diverse lighting, weather, and seasonal conditions. Each image is accompanied by rich metadata, including timestamps, GPS coordinates, and device identifiers. This unique real-world dataset enables detailed investigation of visual drift, anomaly detection, and MLOps strategies in smart city deployments. To promote secondary analysis, we additionally provide a self-supervised framework based on convolutional variational autoencoders (CNN-VAEs). Models are trained separately for each camera node and for day/night image sets. We define two per-sample drift metrics: relative centroid drift, capturing latent space deviation from a baseline quarter, and relative reconstruction error, measuring normalized image-domain degradation. This dataset provides a realistic, fine-grained benchmark for evaluating long-term model stability, drift-aware learning, and deployment-ready vision systems. The images and structured metadata are publicly released in JPEG and CSV formats, supporting reproducibility and downstream applications such as streetlight monitoring, weather inference, and urban scene understanding. The dataset can be found at https://doi.org/10.5281/zenodo.17781192 and https://doi.org/10.5281/zenodo.17859120.
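The abstract names two drift metrics but does not give their formulas. A minimal sketch of one plausible reading, assuming latent vectors from a trained CNN-VAE and a baseline quarter as reference (function names, the normalization by baseline spread, and the epsilon guard are illustrative assumptions, not the paper's definitions):

```python
import numpy as np

def relative_centroid_drift(z_current, z_baseline, eps=1e-8):
    """Distance between the latent centroid of the current window and the
    baseline quarter's centroid, normalized by the baseline's mean spread.
    Values near 0 suggest no drift; larger values suggest stronger drift."""
    c_base = z_baseline.mean(axis=0)
    c_cur = z_current.mean(axis=0)
    spread = np.linalg.norm(z_baseline - c_base, axis=1).mean()
    return np.linalg.norm(c_cur - c_base) / (spread + eps)

def relative_reconstruction_error(x, x_hat, baseline_mse, eps=1e-8):
    """Per-sample reconstruction MSE normalized by the baseline quarter's
    mean reconstruction error, giving a unitless image-domain drift score."""
    mse = np.mean((x - x_hat) ** 2)
    return mse / (baseline_mse + eps)
```

Under this reading, both metrics are per-sample (or per-window) scores relative to a fixed baseline quarter, so they remain comparable across camera nodes and across the four-year horizon.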
Problem

Research questions and friction points this paper is trying to address.

Monitoring urban streetlight visual changes over multiple years.
Detecting spatio-temporal drift in smart city camera deployments.
Evaluating long-term model stability for vision systems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fixed-angle cameras capture longitudinal urban streetlight imagery
Self-supervised CNN-VAE framework models per-camera day/night drift
Drift metrics measure latent space deviation and reconstruction error
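The per-node, day/night modeling above implies partitioning the metadata before training. A sketch of that grouping step, assuming CSV rows with `device_id` and ISO-8601 `timestamp` columns and a fixed-hour day/night boundary (the released schema and the actual day/night criterion may differ):

```python
from collections import defaultdict
from datetime import datetime

def partition_metadata(rows, day_start=7, day_end=19):
    """Group metadata rows by (camera node, day/night bucket) so that a
    separate CNN-VAE can be trained on each group. `day_start`/`day_end`
    are assumed hour thresholds, not values from the paper."""
    groups = defaultdict(list)
    for row in rows:
        ts = datetime.fromisoformat(row["timestamp"])
        bucket = "day" if day_start <= ts.hour < day_end else "night"
        groups[(row["device_id"], bucket)].append(row)
    return groups
```

In practice a daylight criterion based on local sunrise/sunset would track the strong seasonal variation in U.K. illumination better than fixed hours.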
Peizheng Li
Bristol Research and Innovation Laboratory, Toshiba Europe Ltd., 30 Queen Square, Bristol, BS1 4ND, United Kingdom
Ioannis Mavromatis
Lead 5G/Future Networks Technologist, Digital Catapult
5G/6G · IoT · Cloud-native · Machine Learning · Sustainability
Ajith Sahadevan
Bristol Research and Innovation Laboratory, Toshiba Europe Ltd., 30 Queen Square, Bristol, BS1 4ND, United Kingdom
Tim Farnham
Bristol Research and Innovation Laboratory, Toshiba Europe Ltd., 30 Queen Square, Bristol, BS1 4ND, United Kingdom
Adnan Aijaz
Toshiba Europe Ltd.
B5G/6G · Industrial Systems · Time-sensitive Networking · Open RAN · Quantum Internet
Aftab Khan
Distributed AI Programme Lead, Toshiba Europe Ltd.
Machine Learning · Distributed AI · Federated Learning · MLOps · FedOps