TextureSAM: Towards a Texture Aware Foundation Model for Segmentation

📅 2025-05-22
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
The Segment Anything Model (SAM) excels at general-purpose segmentation but carries a strong shape bias inherited from its large-scale semantic segmentation training data, limiting its performance in texture-dominated domains, such as medical imaging, remote sensing, and material analysis, where object boundaries are defined primarily by textural variation rather than shape. To address this limitation, the authors propose TextureSAM, a texture-aware foundation model for segmentation. TextureSAM is obtained by fine-tuning SAM with texture augmentation: images from a texture-altered version of ADE20K are incrementally modified to emphasize texture features, steering the model toward texture-defined regions and mitigating its inherent shape bias. Extensive experiments show that TextureSAM consistently outperforms SAM-2 on texture-dominated benchmarks, by +0.2 mIoU on natural images and +0.18 mIoU on synthetic images. The code and the texture-augmented dataset will be publicly released.
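The gains above are reported in mean intersection-over-union (mIoU). As a reference, the snippet below gives the standard per-class IoU averaged over the classes present in the evaluation; this is the conventional metric definition, not code released with the paper.

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Standard mIoU: per-class intersection over union, averaged
    over classes that actually occur in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                 # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```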

📝 Abstract
The Segment Anything Model (SAM) has achieved remarkable success in object segmentation tasks across diverse datasets. However, the model is predominantly trained on large-scale semantic segmentation datasets, which introduces a bias toward object shape rather than texture cues in the image. This limitation is critical in domains such as medical imaging, material classification, and remote sensing, where texture changes define object boundaries. In this study, we investigate SAM's bias toward semantics over textures and introduce a new texture-aware foundation model, TextureSAM, which performs superior segmentation in texture-dominant scenarios. To achieve this, we employ a novel fine-tuning approach that incorporates texture augmentation techniques, incrementally modifying training images to emphasize texture features. By leveraging a novel texture-altered version of the ADE20K dataset, we guide TextureSAM to prioritize texture-defined regions, thereby mitigating the inherent shape bias present in the original SAM model. Our extensive experiments demonstrate that TextureSAM significantly outperforms SAM-2 on both natural (+0.2 mIoU) and synthetic (+0.18 mIoU) texture-based segmentation datasets. The code and texture-augmented dataset will be publicly available.
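The fine-tuning recipe above hinges on texture alteration of ADE20K images, which the page does not show in code. Below is a minimal sketch of one plausible form such a step could take, blending a randomly chosen texture into each annotated region at a controllable strength; the function name `texture_alter`, the texture bank, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def texture_alter(image: np.ndarray,
                  seg_mask: np.ndarray,
                  textures: list[np.ndarray],
                  strength: float,
                  rng: np.random.Generator) -> np.ndarray:
    """Blend a random texture into each annotated region of one sample.

    image:    H x W x 3 uint8 image (e.g., from ADE20K)
    seg_mask: H x W integer label map (0 = unlabeled background)
    strength: blend factor in [0, 1]; 0 keeps the original appearance,
              1 replaces the region's appearance entirely with texture.
    """
    out = image.astype(np.float32)
    h, w = image.shape[:2]
    for label in np.unique(seg_mask):
        if label == 0:
            continue
        tex = textures[rng.integers(len(textures))]
        # Tile the texture patch to image size, then blend it inside the region.
        reps = (h // tex.shape[0] + 1, w // tex.shape[1] + 1, 1)
        tiled = np.tile(tex, reps)[:h, :w].astype(np.float32)
        region = seg_mask == label
        out[region] = (1.0 - strength) * out[region] + strength * tiled[region]
    return np.clip(out, 0, 255).astype(np.uint8)
```

Ramping `strength` from 0 toward 1 shifts region appearance smoothly from the natural image to a purely texture-defined one, matching the abstract's description of incrementally modifying training images.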
Problem

Research questions and friction points this paper is trying to address.

SAM models favor shape over texture cues
Texture changes define boundaries in key domains
TextureSAM improves segmentation in texture-dominant scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

TextureSAM introduces a texture-aware segmentation foundation model
Employs texture augmentation to incrementally modify training images
Uses a texture-altered ADE20K dataset to reduce shape bias (see the sketch after this list)
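The incremental modification in the second bullet implies a schedule over fine-tuning. As a purely illustrative sketch, assuming the hypothetical `texture_alter` transform shown after the abstract, the alteration strength could ramp linearly across epochs so that early epochs stay close to natural ADE20K images while later epochs emphasize texture cues; the commented training loop uses placeholder names (`sam_model`, `ade20k_loader`), not SAM's real API.

```python
import numpy as np

def strength_schedule(epoch: int, total_epochs: int,
                      max_strength: float = 1.0) -> float:
    """Linearly ramp texture-alteration strength from 0 to max_strength."""
    return max_strength * min(1.0, epoch / max(1, total_epochs - 1))

# Hypothetical fine-tuning loop (placeholder API, not the paper's code):
# rng = np.random.default_rng(0)
# for epoch in range(total_epochs):
#     lam = strength_schedule(epoch, total_epochs)
#     for image, mask in ade20k_loader:
#         image = texture_alter(image, mask, textures, lam, rng)
#         loss = sam_model.training_step(image, mask)
#         loss.backward(); optimizer.step(); optimizer.zero_grad()
```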
👥 Authors
Inbal Cohen
Tel Aviv University, Israel
Boaz Meivar
Tel Aviv University, Israel
Peihan Tu
University of Maryland, College Park, USA
Shai Avidan
Tel Aviv University, Israel
Gal Oren
Visiting Scholar, Stanford | Assistant Professor of CS, Technion
Scientific Computing · Artificial Intelligence · HPC