ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation

📅 2025-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision Transformers (ViTs) are data-hungry and exhibit foreground-background biases that limit their robustness and generalization in large-scale image classification. To address this, the paper proposes ForAug, a data augmentation scheme that uses pretrained foundation models to separate foreground objects from their backgrounds and recombine them under fine-grained control, moving inductive biases that are usually baked into the network architecture into the training data itself. Applying ForAug to ImageNet yields the ForNet dataset; training on ForNet improves the accuracy of ViTs and other architectures by up to 4.5 percentage points on ImageNet and by up to 7.3 points on downstream tasks. The paper further introduces metrics for background robustness, foreground focus, center bias, and size bias, and shows that training on ForNet substantially reduces these biases compared to training on ImageNet. Code and the dataset are publicly released.

📝 Abstract
Transformers, particularly Vision Transformers (ViTs), have achieved state-of-the-art performance in large-scale image classification. However, they often require large amounts of data and can exhibit biases that limit their robustness and generalizability. This paper introduces ForAug, a novel data augmentation scheme that addresses these challenges and explicitly includes inductive biases, which commonly are part of the neural network architecture, into the training data. ForAug is constructed by using pretrained foundation models to separate and recombine foreground objects with different backgrounds, enabling fine-grained control over image composition during training. It thus increases the data diversity and effective number of training samples. We demonstrate that training on ForNet, the application of ForAug to ImageNet, significantly improves the accuracy of ViTs and other architectures by up to 4.5 percentage points (p.p.) on ImageNet and 7.3 p.p. on downstream tasks. Importantly, ForAug enables novel ways of analyzing model behavior and quantifying biases. Namely, we introduce metrics for background robustness, foreground focus, center bias, and size bias and show that training on ForNet substantially reduces these biases compared to training on ImageNet. In summary, ForAug provides a valuable tool for analyzing and mitigating biases, enabling the development of more robust and reliable computer vision models. Our code and dataset are publicly available at https://github.com/tobna/ForAug.
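The core recombination step described in the abstract (compositing a segmented foreground onto a different background) can be sketched with a simple alpha blend. This is a minimal illustration, not the paper's implementation: `recombine` is a hypothetical helper, and it assumes foreground masks have already been produced by a pretrained segmentation model; the actual ForAug pipeline additionally controls placement and size of the foreground.

```python
import numpy as np

def recombine(foreground, mask, background):
    """Composite a segmented foreground onto a new background.

    foreground, background: HxWx3 uint8 arrays of the same shape.
    mask: HxW float array in [0, 1], where 1 marks foreground pixels;
    a soft mask gives smooth blending at object edges.
    """
    alpha = mask[..., None].astype(np.float32)  # broadcast over channels
    out = alpha * foreground + (1.0 - alpha) * background
    return out.astype(np.uint8)

# Toy example: a 4x4 "image" with a 2x2 foreground patch (values 200)
# pasted onto a uniform background (values 50).
fg = np.full((4, 4, 3), 200, dtype=np.uint8)
bg = np.full((4, 4, 3), 50, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.float32)
mask[1:3, 1:3] = 1.0
aug = recombine(fg, mask, bg)
```

Sampling a fresh background per training image in this way multiplies the number of distinct foreground-background combinations, which is the source of the increased effective dataset size the abstract mentions.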
Problem

Research questions and friction points this paper is trying to address.

Mitigates biases in Vision Transformers to enhance robustness and generalizability.
Introduces ForAug, a data augmentation method to increase training data diversity.
Develops metrics to analyze and reduce biases in computer vision models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

ForAug recombines foregrounds and backgrounds
Uses pretrained models for image composition control
Introduces metrics to quantify and reduce biases