OASIC: Occlusion-Agnostic and Severity-Informed Classification

📅 2026-04-05
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Severe occlusion leads to significant loss and corruption of target information, substantially degrading classification performance. This work proposes an occlusion-agnostic and severity-adaptive classification method that enhances robustness during training through multi-level random masking and, at test time, dynamically masks disruptive regions based on visual anomaly detection, estimates occlusion severity, and adaptively selects the model optimized for that severity. To the best of our knowledge, this is the first approach to jointly achieve occlusion-type invariance and severity-aware optimization. The proposed method improves AUC_occ by +18.5 points over standard training on occluded images and by +23.7 points over fine-tuning on unoccluded images.
πŸ“ Abstract
Severe occlusions of objects pose a major challenge for computer vision. We show that two root causes are (1) the loss of visible information and (2) the distracting patterns caused by the occluders. Our approach addresses both causes at the same time. First, the distracting patterns are removed at test time via masking of the occluding patterns. This masking is independent of the type of occlusion, because it treats the occlusion as a visual anomaly w.r.t. the object of interest. Second, to compensate for the loss of visual detail, we follow standard practice by masking random parts of the object during training, for various degrees of occlusion. We discover that (a) it is possible to estimate the degree of occlusion (i.e. severity) at test time, and (b) a model optimized for a specific degree of occlusion also performs best on a similar degree at test time. Combining these two insights brings us to a severity-informed classification model called OASIC: Occlusion-Agnostic Severity-Informed Classification. We estimate the severity of occlusion for a test image, mask the occluder, and select the model that is optimized for that degree of occlusion. This strategy performs better than any single model optimized for any smaller or broader range of occlusion severities. Experiments show that combining gray masking with adaptive model selection improves $\text{AUC}_\text{occ}$ by +18.5 over standard training on occluded images and +23.7 over fine-tuning on unoccluded images.
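The test-time pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a per-pixel anomaly map is already available from some anomaly detector, estimates severity as the flagged-pixel fraction, gray-masks those pixels, and picks the classifier trained at the closest occlusion level. All function names and the `models` dictionary layout are hypothetical.

```python
import numpy as np

def estimate_severity(anomaly_map, threshold=0.5):
    """Estimate occlusion severity as the fraction of pixels flagged anomalous."""
    return float((anomaly_map > threshold).mean())

def gray_mask(image, anomaly_map, threshold=0.5, gray=0.5):
    """Replace anomalous (occluding) pixels with a neutral gray value."""
    masked = image.copy()
    masked[anomaly_map > threshold] = gray
    return masked

def select_model(models, severity):
    """Pick the classifier whose training occlusion level is closest to
    the estimated severity. `models` maps occlusion fraction -> classifier."""
    level = min(models, key=lambda lv: abs(lv - severity))
    return models[level]

def oasic_predict(image, anomaly_map, models, threshold=0.5):
    """Full OASIC-style inference: estimate severity, mask, classify."""
    severity = estimate_severity(anomaly_map, threshold)
    masked = gray_mask(image, anomaly_map, threshold)
    model = select_model(models, severity)
    return model(masked), severity
```

For example, with classifiers trained at occlusion levels 0.0, 0.3, and 0.6, an image whose anomaly map flags half its pixels yields severity 0.5, so the 0.6-level model is selected and run on the gray-masked input.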
Problem

Research questions and friction points this paper is trying to address.

occlusion
object classification
visual anomalies
severity estimation
computer vision
Innovation

Methods, ideas, or system contributions that make the work stand out.

occlusion-agnostic
severity-informed classification
visual anomaly masking
adaptive model selection
occlusion severity estimation