Reproducible Evaluation of Camera Auto-Exposure Methods in the Field: Platform, Benchmark and Lessons Learned

📅 2025-06-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing auto-exposure (AE) algorithm evaluation is constrained by fixed sensor parameters and dependence on dynamic illumination, rendering online testing irreproducible. This paper introduces the first offline simulation benchmark framework capable of generating images at arbitrary exposure times. It leverages the newly constructed BorealHDR multi-exposure stereo dataset—including an extended version—and high-precision pose estimates (lidar-inertial fusion plus GNSS) to synthesize realistic images with RMSE below 1.78%. Under unified, controllable conditions, the authors conduct reproducible offline evaluations of eight AE methods, finding that classical approaches still achieve state-of-the-art performance. They publicly release their backpack sensor platform design, lessons from more than 25 km of real-world deployment, and the complete code and datasets. This significantly enhances comparability, reproducibility, and accessibility in AE algorithm research.

📝 Abstract
Standard datasets often present limitations, particularly due to the fixed nature of input data sensors, which makes it difficult to compare methods that actively adjust sensor parameters to suit environmental conditions. This is the case with Automatic-Exposure (AE) methods, which rely on environmental factors to influence the image acquisition process. As a result, AE methods have traditionally been benchmarked in an online manner, rendering experiments non-reproducible. Building on our prior work, we propose a methodology that utilizes an emulator capable of generating images at any exposure time. This approach leverages BorealHDR, a unique multi-exposure stereo dataset, along with its new extension, in which data was acquired along a repeated trajectory at different times of the day to assess the impact of changing illumination. In total, BorealHDR covers 13.4 km over 59 trajectories in challenging lighting conditions. The dataset also includes lidar-inertial-odometry-based maps with pose estimation for each image frame, as well as Global Navigation Satellite System (GNSS) data for comparison. We demonstrate that by using images acquired at various exposure times, we can emulate realistic images with a Root-Mean-Square Error (RMSE) below 1.78% compared to ground truth images. Using this offline approach, we benchmarked eight AE methods, concluding that the classical AE method remains the field's best performer. To further support reproducibility, we provide in-depth details on the development of our backpack acquisition platform, including hardware, electrical components, and performance specifications. Additionally, we share valuable lessons learned from deploying the backpack over more than 25 km across various environments. Our code and dataset are available online at this link: https://github.com/norlab-ulaval/TFR24_BorealHDR
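The core idea behind the emulator can be sketched as follows. Assuming a linear radiometric response, a pixel value is proportional to exposure time until saturation, so an image at an arbitrary exposure time can be approximated by rescaling the closest bracketed capture and clipping to the sensor's range; the emulation quality is then measured as RMSE against a ground-truth capture. This is a minimal illustrative sketch, not the paper's implementation—the function names, the nearest-bracket selection rule, and the linear-response assumption are all mine:

```python
import numpy as np

def emulate_exposure(brackets, target_t):
    """Emulate an image at exposure time target_t from bracketed captures.

    brackets: list of (exposure_time, image) pairs, images as float arrays
    in [0, 1] assumed to be linear in scene radiance (no gamma/tone curve).
    """
    # Pick the bracket whose exposure-time ratio to the target is closest
    # to 1 on a log scale, so the scaling factor stays small.
    t_src, img_src = min(brackets, key=lambda b: abs(np.log(target_t / b[0])))
    # Scale linearly with the exposure-time ratio; clip to model saturation.
    return np.clip(img_src * (target_t / t_src), 0.0, 1.0)

def rmse_percent(emulated, reference):
    """RMSE between two images, as a percentage of full scale."""
    return 100.0 * np.sqrt(np.mean((emulated - reference) ** 2))

# Synthetic example: a radiance ramp captured at 1, 2, and 4 ms,
# emulated at 3 ms and compared against the ideal 3 ms image.
radiance = np.linspace(0.0, 0.2, 5)
brackets = [(t, np.clip(radiance * t, 0.0, 1.0)) for t in (1.0, 2.0, 4.0)]
emulated = emulate_exposure(brackets, 3.0)
reference = np.clip(radiance * 3.0, 0.0, 1.0)
error = rmse_percent(emulated, reference)
```

In this unsaturated synthetic case the rescaling is exact, so the error is zero; on real sensors, noise, saturation, and a nonlinear response curve make the error nonzero, which is what the paper's sub-1.78% RMSE figure quantifies.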
Problem

Research questions and friction points this paper is trying to address.

Standard datasets limit AE method comparison due to fixed sensor data
Proposes emulator for reproducible AE evaluation using multi-exposure dataset
Benchmarks eight AE methods offline, finding classical method best
Innovation

Methods, ideas, or system contributions that make the work stand out.

Emulator generates images at any exposure time
Leverages BorealHDR multi-exposure stereo dataset
Emulated images match ground truth with RMSE below 1.78%, enabling offline benchmarking