Convolutional Set Transformer

πŸ“… 2025-09-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing set-modeling approaches (e.g., Deep Sets, Set Transformer) accept only vector inputs, so features must first be extracted from image collections with a CNN, which decouples feature learning from relational modeling. To address this, the authors propose the Convolutional Set Transformer (CST), the first architecture to deeply integrate convolutional operations with the Set Transformer, enabling direct processing of raw 3D image tensors. CST jointly performs hierarchical feature extraction and set-level contextual modeling in an end-to-end trainable framework. It supports gradient-based interpretability (e.g., Grad-CAM) and exhibits strong cross-task transferability. Empirically, CST significantly outperforms cascaded baselines on image-set classification and anomaly detection. The authors release CST-15, a pretrained backbone, establishing a new approach to understanding image collections that share high-level semantics despite substantial visual heterogeneity.
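The core idea, interleaving per-image convolutional feature extraction with attention across set members so each image's representation is conditioned on its companions, can be caricatured in a few lines. This is a hedged toy sketch, not the authors' architecture: `conv2d`, `set_attention`, and `cst_block` are illustrative names, images are single-channel arrays, and attention here operates on spatially pooled descriptors rather than full feature maps.

```python
import numpy as np

def conv2d(x, w):
    # Naive "valid" 2D convolution of one single-channel image with one kernel.
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def set_attention(feats):
    # Softmax self-attention across set members; each row is one image's descriptor.
    scores = feats @ feats.T / np.sqrt(feats.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ feats

def cst_block(images, kernels):
    # Per-image conv features, spatially pooled into one descriptor per image ...
    feats = np.stack([[conv2d(img, k).mean() for k in kernels] for img in images])
    # ... then mixed across the set, so each descriptor reflects its companions.
    return set_attention(feats)

rng = np.random.default_rng(0)
images = [rng.standard_normal((8, 8)) for _ in range(4)]   # a set of 4 "images"
kernels = [rng.standard_normal((3, 3)) for _ in range(5)]  # 5 filters (random here)
out = cst_block(images, kernels)
print(out.shape)  # (4, 5): one context-aware descriptor per image
```

Stacking several such stages, with context injected back into spatial feature maps rather than pooled vectors, is closer to what the paper describes; the sketch only shows why convolution and set mixing can live in one block.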


πŸ“ Abstract
We introduce the Convolutional Set Transformer (CST), a novel neural architecture designed to process image sets of arbitrary cardinality that are visually heterogeneous yet share high-level semantics - such as a common category, scene, or concept. Existing set-input networks, e.g., Deep Sets and Set Transformer, are limited to vector inputs and cannot directly handle 3D image tensors. As a result, they must be cascaded with a feature extractor, typically a CNN, which encodes images into embeddings before the set-input network can model inter-image relationships. In contrast, CST operates directly on 3D image tensors, performing feature extraction and contextual modeling simultaneously, thereby enabling synergies between the two processes. This design yields superior performance in tasks such as Set Classification and Set Anomaly Detection and further provides native compatibility with CNN explainability methods such as Grad-CAM, unlike competing approaches that remain opaque. Finally, we show that CSTs can be pre-trained on large-scale datasets and subsequently adapted to new domains and tasks through standard Transfer Learning schemes. To support further research, we release CST-15, a CST backbone pre-trained on ImageNet (https://github.com/chinefed/convolutional-set-transformer).
Problem

Research questions and friction points this paper is trying to address.

Existing set-input networks accept only vector inputs and cannot operate on raw image tensors
Cascading a separate CNN feature extractor before set modeling decouples feature learning from relational modeling
Image sets that are visually heterogeneous yet semantically related require joint feature extraction and contextual modeling
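The cascaded baseline these friction points describe can be sketched as a two-stage pipeline: embed each image independently, then pool. This is a hedged illustration assuming Deep Sets-style sum pooling; `cnn_embed` is a hypothetical stand-in for a real pretrained CNN backbone.

```python
import numpy as np

def cnn_embed(image):
    # Stand-in for a pretrained CNN: embeds one image *independently* of the set.
    # (Hypothetical toy features; a real baseline would use e.g. an ImageNet backbone.)
    return np.array([image.mean(), image.std(), np.abs(image).max()])

def deep_sets_pool(images):
    # Deep Sets-style pipeline: embed each image separately, then sum-pool.
    # Feature extraction never sees the other set members, which is exactly
    # the decoupling the bullets above describe.
    embeddings = np.stack([cnn_embed(img) for img in images])
    return embeddings.sum(axis=0)

rng = np.random.default_rng(1)
image_set = [rng.standard_normal((8, 8)) for _ in range(3)]
pooled = deep_sets_pool(image_set)
print(pooled.shape)  # (3,): one permutation-invariant set descriptor
```

Sum pooling makes the output invariant to the order of the set, but each embedding is computed blind to its companions; CST's claim is that letting the two stages interact improves both.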
Innovation

Methods, ideas, or system contributions that make the work stand out.

Processes image sets of arbitrary cardinality directly
Simultaneously performs feature extraction and contextual modeling
Enables native compatibility with CNN explainability methods
πŸ”Ž Similar Papers
No similar papers found.