Vision At Night: Exploring Biologically Inspired Preprocessing For Improved Robustness Via Color And Contrast Transformations

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Semantic segmentation performance degrades significantly under out-of-distribution adverse conditions—such as nighttime, fog, and snow—due to degraded visual cues. To address this, we propose a biologically inspired, lightweight input preprocessing method that mimics the human retina's contrast-enhancement and color-antagonism mechanisms, via Difference-of-Gaussians filtering applied to RGB, grayscale, and opponent-color channels. Our approach is plug-and-play, model-agnostic, and computationally efficient: it requires no architectural modifications and no retraining. Evaluated on Cityscapes, ACDC, and Dark Zurich, the method preserves accuracy on in-distribution data (e.g., clear daytime scenes) while substantially improving robustness and generalization stability under challenging conditions, including low illumination, fog, and snow. This establishes an effective and reliable preprocessing paradigm for semantic segmentation in open-world environments.
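The core operations described above, Difference-of-Gaussians (DoG) band-pass filtering and an opponent-color decomposition, can be sketched as below. This is a minimal illustration of the general technique, not the authors' implementation: the sigma values, blending strength, and opponent-channel weights are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_enhance(image, sigma_center=1.0, sigma_surround=2.0, strength=0.5):
    """Per-channel Difference-of-Gaussians contrast enhancement.

    Sketch of the retina-inspired center-surround idea: a narrow Gaussian
    (center) minus a wider Gaussian (surround) yields a band-pass response
    that emphasizes local contrast. Parameter values here are assumptions,
    not the paper's settings.
    """
    image = image.astype(np.float64)
    # Smooth spatially only (sigma 0 along the channel axis).
    center = gaussian_filter(image, sigma=(sigma_center, sigma_center, 0))
    surround = gaussian_filter(image, sigma=(sigma_surround, sigma_surround, 0))
    dog = center - surround              # band-pass: local contrast signal
    enhanced = image + strength * dog    # blend contrast back into the input
    return np.clip(enhanced, 0.0, 255.0)

def to_opponent(rgb):
    """Simple opponent-color transform (one common convention; the paper
    may use different weights): intensity, red-green, blue-yellow."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r + g + b) / 3.0            # intensity / grayscale channel
    o2 = (r - g) / 2.0                # red-green opponency
    o3 = (b - (r + g) / 2.0) / 2.0    # blue-yellow opponency
    return np.stack([o1, o2, o3], axis=-1)
```

Because the enhancement happens purely at the input, the output array can be fed to any pretrained segmentation model unchanged, which is what makes the approach plug-and-play.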

📝 Abstract
Inspired by the human visual system's mechanisms for contrast enhancement and color-opponency, we explore biologically motivated input preprocessing for robust semantic segmentation. By applying Difference-of-Gaussians (DoG) filtering to RGB, grayscale, and opponent-color channels, we enhance local contrast without modifying model architecture or training. Evaluations on Cityscapes, ACDC, and Dark Zurich show that such preprocessing maintains in-distribution performance while improving robustness to adverse conditions like night, fog, and snow. As this processing is model-agnostic and lightweight, it holds potential for integration into imaging pipelines, enabling imaging systems to deliver task-ready, robust inputs for downstream vision models in safety-critical environments.
Problem

Research questions and friction points this paper is trying to address.

Enhancing semantic segmentation robustness through biologically inspired preprocessing
Applying color and contrast transformations to improve adverse condition performance
Developing model-agnostic preprocessing for safety-critical vision systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Biologically inspired preprocessing for robust segmentation
DoG filtering on color channels enhances local contrast
Model-agnostic method maintains performance in adverse conditions
Lorena Stracke
Media Systems, University of Siegen, Germany
Lia Nimmermann
Media Systems, University of Siegen, Germany
Shashank Agnihotri
University of Mannheim
Deep Learning, Computer Vision, Robustness, Generalization
Margret Keuper
University of Mannheim
Machine Learning & Computer Vision
Volker Blanz
Media Systems, University of Siegen, Germany