Adaptive Illumination Control for Robot Perception

📅 2026-02-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the performance degradation of visual SLAM in low-light and high-dynamic-range environments, where conventional image enhancement cannot overcome the inherent quality limits of the captured images. To tackle this, the authors propose Lightning, a novel framework that, for the first time, integrates a physically consistent Co-Located Illumination Decomposition (CLID) relighting model with an offline Optimal Intensity Schedule (OIS) strategy. Building on this, they distill the offline schedule into a generalizable, real-time Illumination Control Policy (ILC) via behavior cloning. The resulting system enables closed-loop adaptive illumination, significantly improving the robustness of SLAM trajectories while reducing energy consumption, and operates efficiently online on a real mobile robot platform.
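The CLID idea in the summary above can be pictured with a simple stand-in: split an observed frame into an ambient term and a light-contribution field, then re-synthesize the same scene under a different onboard light intensity. The linear image-formation model and all names below are illustrative assumptions, not the authors' learned model:

```python
import numpy as np

def relight(image, light_field, current_intensity, new_intensity):
    """Synthesize the scene under a new onboard-light intensity.

    Assumes a toy linear model: image = ambient + intensity * light_field.
    The actual CLID model is learned and physically consistent; this is
    only an illustrative sketch of the decomposition-and-resynthesis step.
    """
    ambient = image - current_intensity * light_field
    relit = ambient + new_intensity * light_field
    # Bright regions can saturate, one of the failure modes the paper notes.
    return np.clip(relit, 0.0, 1.0)

# Tiny example: a 2x2 grayscale frame and its estimated light contribution.
frame = np.array([[0.2, 0.5], [0.4, 0.9]])
light = np.array([[0.1, 0.3], [0.2, 0.6]])
brighter = relight(frame, light, current_intensity=1.0, new_intensity=2.0)
```

Sweeping `new_intensity` over the controller's discrete levels is what yields the dense multi-intensity candidates the paper uses for offline scheduling, without re-running trajectories.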

Technology Category

Application Category

📝 Abstract
Robot perception under low light or high dynamic range is usually improved downstream, via more robust feature extraction, image enhancement, or closed-loop exposure control. However, all of these approaches are limited by the images captured under these conditions. An alternative approach is to utilize a programmable onboard light that adds to ambient illumination and improves captured images. However, it is not straightforward to predict its impact on image formation. Illumination interacts nonlinearly with depth, surface reflectance, and scene geometry. It can both reveal structure and induce failure modes such as specular highlights and saturation. We introduce Lightning, a closed-loop illumination-control framework for visual SLAM that combines relighting, offline optimization, and imitation learning. This is performed in three stages. First, we train a Co-Located Illumination Decomposition (CLID) relighting model that decomposes a robot observation into an ambient component and a light-contribution field. CLID enables physically consistent synthesis of the same scene under alternative light intensities and thereby creates dense multi-intensity training data without requiring us to repeatedly re-run trajectories. Second, using these synthesized candidates, we formulate an offline Optimal Intensity Schedule (OIS) problem that selects illumination levels over a sequence, trading off SLAM-relevant image utility against power consumption and temporal smoothness. Third, we distill this ideal solution into a real-time controller through behavior cloning, producing an Illumination Control Policy (ILC) that generalizes beyond the initial training distribution and runs online on a mobile robot to command discrete light-intensity levels. Across our evaluation, Lightning substantially improves SLAM trajectory robustness while reducing unnecessary illumination power.
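The OIS problem in the abstract has the shape of a shortest-path problem over discrete intensity levels: each timestep pays a (negated) image-utility reward, a power cost that grows with intensity, and a smoothness penalty for switching levels. A minimal dynamic-programming sketch under that reading follows; the utility values, weights, and function names are made up for illustration, and only the trade-off structure comes from the paper:

```python
def optimal_intensity_schedule(utility, power_w=0.1, smooth_w=0.05):
    """utility[t][k]: SLAM-relevant image utility at time t, intensity level k.
    Returns the level sequence minimizing
    sum_t (-utility[t][k_t] + power_w * k_t) + sum_t smooth_w * |k_t - k_{t-1}|
    via dynamic programming over the K discrete levels."""
    T, K = len(utility), len(utility[0])
    INF = float("inf")
    cost = [-utility[0][k] + power_w * k for k in range(K)]
    back = []
    for t in range(1, T):
        new_cost, ptr = [INF] * K, [0] * K
        for k in range(K):
            for j in range(K):  # best previous level to transition from
                c = cost[j] + smooth_w * abs(k - j)
                if c < new_cost[k]:
                    new_cost[k], ptr[k] = c, j
            new_cost[k] += -utility[t][k] + power_w * k
        cost, back = new_cost, back + [ptr]
    # Trace the optimal schedule backward from the cheapest final level.
    k = min(range(K), key=cost.__getitem__)
    schedule = [k]
    for ptr in reversed(back):
        k = ptr[k]
        schedule.append(k)
    return schedule[::-1]
```

Because the candidates come from CLID-synthesized images, this schedule can be computed entirely offline; it then serves as the expert that the behavior-cloned ILC imitates online.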
Problem

Research questions and friction points this paper is trying to address.

robot perception
low light
illumination control
visual SLAM
high dynamic range
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive illumination control
visual SLAM
relighting
imitation learning
optimal intensity scheduling
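The imitation-learning contribution listed above amounts to supervised learning: pair each observation with the intensity the offline schedule chose, then fit a fast policy that reproduces those choices online. The nearest-neighbor "policy" and the scalar brightness feature below are purely illustrative stand-ins for the paper's learned ILC:

```python
def clone_policy(features, expert_levels):
    """Behavior cloning in miniature: memorize (feature, expert action)
    pairs and act via nearest neighbor. The real ILC is a learned policy
    distilled from the offline OIS solution; this only illustrates the
    expert-imitation structure."""
    data = list(zip(features, expert_levels))

    def policy(obs):
        return min(data, key=lambda pair: abs(pair[0] - obs))[1]

    return policy

# Hypothetical training pairs: mean image brightness -> OIS-chosen level.
brightness = [0.1, 0.3, 0.6, 0.9]
levels = [3, 2, 1, 0]  # darker scenes get stronger illumination
ilc = clone_policy(brightness, levels)
```

At runtime the cloned policy needs only the current observation, which is what lets it run closed-loop on the robot without solving the offline schedule again.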