🤖 AI Summary
This work addresses the long-tailed distribution problem in scene graph generation (SGG), where existing debiasing methods often inadvertently degrade a model's spatial reasoning capabilities. To mitigate this issue, the authors propose Salience-SGG, a novel framework that introduces semantics-agnostic salience labels for the first time and incorporates an Iterative Salience Decoder (ISD). The ISD enhances spatial awareness by selectively reinforcing triplets with prominent spatial structures during training, thereby achieving effective debiasing without compromising spatial understanding. The proposed method achieves state-of-the-art performance on the Visual Genome, Open Images V6, and GQA-200 benchmarks and significantly improves the spatial localization accuracy of existing Unbiased-SGG approaches.
📝 Abstract
Scene Graph Generation (SGG) suffers from a long-tailed distribution, where a few predicate classes dominate while many others are underrepresented, leading to biased models that underperform on rare relations. Unbiased-SGG methods address this issue with debiasing strategies, but often at the cost of spatial understanding, resulting in an over-reliance on semantic priors. We introduce Salience-SGG, a novel framework featuring an Iterative Salience Decoder (ISD) that emphasizes triplets with salient spatial structures. To support this, we propose semantics-agnostic salience labels to guide the ISD. Evaluations on Visual Genome, Open Images V6, and GQA-200 show that Salience-SGG achieves state-of-the-art performance and improves the spatial understanding of existing Unbiased-SGG methods, as demonstrated by the Pairwise Localization Average Precision metric.
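To make the core idea concrete, the abstract's notion of "emphasizing triplets with salient spatial structures" can be illustrated as a salience-weighted training loss. This is a minimal sketch, not the paper's actual ISD: the function name, the per-triplet salience labels in [0, 1], and the `1 + salience` weighting scheme are all assumptions for illustration.

```python
import numpy as np

def salience_weighted_loss(logits, targets, salience):
    """Hypothetical sketch of salience-guided reweighting.

    logits:   (N, C) array of predicate scores, one row per triplet
    targets:  (N,) ground-truth predicate class indices
    salience: (N,) semantics-agnostic salience labels in [0, 1]
    """
    # Numerically stable softmax cross-entropy per triplet.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    per_triplet = -log_probs[np.arange(len(targets)), targets]
    # Upweight spatially salient triplets; the +1 keeps every
    # triplet contributing even when its salience is zero.
    weights = 1.0 + salience
    return float((weights * per_triplet).mean())
```

With all salience labels at zero the loss reduces to plain mean cross-entropy, so the weighting only redistributes gradient emphasis toward spatially prominent triplets rather than discarding the rest.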