🤖 AI Summary
In fine-grained zero-shot learning (FG-ZSL), conventional models implicitly couple semantic attributes (such as color, shape, and texture) into a single visual embedding, impairing discriminability. To address this, we propose the Attribute-Centric Representation (ACR) framework, which intrinsically disentangles multiple attributes during feature extraction, achieving explicit attribute separation at the representation-learning level for the first time. To capture both local part semantics and global attribute specificity, we design a two-level dynamic routing mechanism: (i) a patch-level Mixture of Patch Experts (MoPE) for local part-aware modeling, and (ii) an attribute-level Mixture of Attribute Experts (MoAE) for sparse attribute mapping. Additionally, we introduce part-aware feature projection and an MoE-enhanced Vision Transformer (ViT) architecture. Extensive experiments on the CUB, AwA2, and SUN benchmarks establish new state-of-the-art performance, significantly outperforming post-hoc disentanglement methods and demonstrating that intrinsic attribute disentanglement is critical to fine-grained generalization.
📝 Abstract
Recognizing unseen fine-grained categories demands a model that can distinguish subtle visual differences. This is typically achieved by transferring visual-attribute relationships from seen classes to unseen classes. The core challenge is attribute entanglement: conventional models collapse distinct attributes such as color, shape, and texture into a single visual embedding, causing interference that masks these critical distinctions. The post-hoc solutions of previous work are insufficient, as they operate on representations that are already mixed. We propose a zero-shot learning framework that learns Attribute-Centric Representations (ACR), tackling this problem by imposing attribute disentanglement during representation learning. ACR is built from two mixture-of-experts components: a Mixture of Patch Experts (MoPE) and a Mixture of Attribute Experts (MoAE). First, MoPE is inserted into the transformer with a dual-level routing mechanism that conditionally dispatches image patches to specialized experts, ensuring that coherent attribute families are processed by dedicated experts. Then, the MoAE head projects these expert-refined features into sparse, part-aware attribute maps for robust zero-shot classification. On the zero-shot learning benchmarks CUB, AwA2, and SUN, our ACR achieves consistent state-of-the-art results.
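The abstract gives no implementation details, but the two mechanisms it names have a standard shape: sparse top-k dispatch of patch tokens to experts (MoPE-style), followed by a head that produces per-patch attribute responses and keeps only the strongest ones (MoAE-style). The following is a minimal NumPy sketch under those assumptions; every weight matrix, the `tanh` expert, and the quantile-based sparsification rule are hypothetical stand-ins, not the paper's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
P, D, E, A, k = 16, 32, 4, 8, 2          # patches, dim, experts, attributes, top-k

patches = rng.normal(size=(P, D))        # patch tokens from a ViT block (toy data)
W_route = rng.normal(size=(D, E))        # hypothetical router weights
experts = rng.normal(size=(E, D, D)) * 0.1  # one simplified FFN weight per expert

# Patch-level routing (MoPE-style): each patch is dispatched to its top-k experts,
# weighted by its gate scores, so experts see coherent subsets of patches.
gate = softmax(patches @ W_route)        # (P, E) gating distribution per patch
topk = np.argsort(gate, axis=1)[:, -k:]  # indices of the k largest gates per patch

refined = np.zeros_like(patches)
for p in range(P):
    for e in topk[p]:
        refined[p] += gate[p, e] * np.tanh(patches[p] @ experts[e])

# Attribute-level head (MoAE-style): score every patch against every attribute,
# then zero all but the strongest responses so each attribute map stays sparse
# and part-aware; pool over patches for image-level attribute logits.
W_attr = rng.normal(size=(D, A))         # hypothetical attribute projection
attr_map = refined @ W_attr              # (P, A) per-patch attribute responses
thresh = np.quantile(attr_map, 0.75, axis=0)        # per-attribute cutoff
sparse_map = np.where(attr_map >= thresh, attr_map, 0.0)
attr_logits = sparse_map.max(axis=0)     # (A,) image-level attribute scores
```

Zero-shot classification would then compare `attr_logits` against class attribute vectors; the 0.75 quantile keeps the top quarter of patches per attribute, a placeholder for whatever sparsity mechanism the paper actually uses.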