AI Summary
This work addresses Semantic Coverage Imbalance (SCI) in visual datasets, a phenomenon in which models under-learn rare yet meaningful semantic concepts, leading to fairness deficiencies at the semantic level. The study formally defines and quantifies SCI for the first time and introduces SemCovNet, a novel framework that dynamically integrates semantic concepts with visual features through a Semantic Descriptor Map (SDM), a Descriptor Attention Modulation (DAM) module, and a Descriptor-Visual Alignment (DVA) loss that explicitly corrects semantic coverage bias. To enable measurable evaluation of semantic fairness, the authors propose the Coverage Disparity Index (CDI). Experiments demonstrate that SemCovNet significantly reduces CDI across multiple datasets, thereby enhancing the model's ability to learn underrepresented concepts while improving reliability and interpretability.
Abstract
Modern vision models increasingly rely on rich semantic representations that extend beyond class labels to include descriptive concepts and contextual attributes. However, existing datasets exhibit Semantic Coverage Imbalance (SCI), a previously overlooked bias arising from long-tailed semantic representations. Unlike class imbalance, SCI occurs at the semantic level, affecting how models learn and reason about rare yet meaningful semantics. To mitigate SCI, we propose the Semantic Coverage-Aware Network (SemCovNet), a novel model that explicitly learns to correct semantic coverage disparities. SemCovNet integrates a Semantic Descriptor Map (SDM) for learning semantic representations, a Descriptor Attention Modulation (DAM) module that dynamically weights visual and concept features, and a Descriptor-Visual Alignment (DVA) loss that aligns visual features with descriptor semantics. We quantify semantic fairness using the Coverage Disparity Index (CDI), which measures the alignment between semantic coverage and model error. Extensive experiments across multiple datasets demonstrate that SemCovNet enhances model reliability and substantially reduces CDI, achieving fairer and more equitable performance. This work establishes SCI as a measurable and correctable bias, providing a foundation for advancing semantic fairness and interpretable vision learning.
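The abstract defines the CDI only informally, as a measure of alignment between a concept's coverage in the data and the model's error on that concept. The paper's exact formula is not given here, so the sketch below is purely illustrative: it treats "alignment" as the (negated) Pearson correlation between per-concept coverage and per-concept error, so that a score near 1 flags strong coverage imbalance (rare concepts suffer high error) and a score near 0 indicates fairer behavior. The function name and scaling are assumptions, not the authors' definition.

```python
import numpy as np

def coverage_disparity_index(coverage, error):
    """Illustrative coverage-vs-error disparity score (not the paper's formula).

    coverage: per-concept frequency in the training data (fractions).
    error:    per-concept model error rate on a held-out set.

    Rare concepts receiving high error produces a strong negative
    correlation; we negate it so larger values mean more disparity.
    """
    coverage = np.asarray(coverage, dtype=float)
    error = np.asarray(error, dtype=float)
    # Pearson correlation between coverage and error across concepts.
    r = np.corrcoef(coverage, error)[0, 1]
    return -r

# Toy example: the rarest concepts have the highest error rates,
# so the disparity score is large (close to 1).
cov = [0.50, 0.30, 0.15, 0.05]
err = [0.05, 0.10, 0.25, 0.60]
print(round(coverage_disparity_index(cov, err), 3))
```

Under this reading, a method that "reduces CDI" flattens the dependence of error on coverage, which is exactly the fairness property the abstract claims for SemCovNet.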