Multi-Scale Fusion for Object Representation

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing VAE-based object-centric learning (OCL) methods suffer from scale imbalance due to single-scale reconstruction, severely limiting their capacity to model objects of varying sizes—particularly small or large ones. To address this, we propose the first OCL framework integrating an image pyramid with explicit inter-scale and intra-scale fusion mechanisms, enabling scale-adaptive slot optimization and overcoming the inherent scale bias in conventional VAE reconstruction. Our method leverages multi-scale feature extraction and hierarchical information aggregation to significantly enhance each slot’s perception and reconstruction capability across object scales. Extensive experiments on standard OCL benchmarks demonstrate consistent and substantial improvements over state-of-the-art VAE-based and diffusion-based baselines. The source code is publicly available.

📝 Abstract
Representing images or videos as object-level feature vectors, rather than pixel-level feature maps, facilitates advanced visual tasks. Object-Centric Learning (OCL) primarily achieves this by reconstructing the input under the guidance of Variational Autoencoder (VAE) intermediate representations to drive so-called *slots* to aggregate as much object information as possible. However, existing VAE guidance does not explicitly address that objects can vary in pixel size, while models typically excel at specific pattern scales. We propose *Multi-Scale Fusion* (MSF) to enhance VAE guidance for OCL training. To ensure objects of all sizes fall within the VAE's comfort zone, we adopt the *image pyramid*, which produces intermediate representations at multiple scales; to foster scale invariance/variance in object super-pixels, we devise *inter-*/*intra-scale fusion*, which augments low-quality object super-pixels at one scale with corresponding high-quality super-pixels from another scale. On standard OCL benchmarks, our technique improves mainstream methods, including state-of-the-art diffusion-based ones. The source code is available at https://github.com/Genera1Z/MultiScaleFusion.
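The image pyramid the abstract refers to is a standard construction: the input is repeatedly downsampled so the VAE sees each object at several resolutions. A minimal sketch, assuming simple 2x2 average pooling as the downsampling filter (the paper's actual pyramid construction and encoder are not specified here):

```python
import numpy as np

def image_pyramid(img: np.ndarray, num_scales: int = 3) -> list[np.ndarray]:
    """Build an image pyramid by repeated 2x average pooling.

    `img` is (H, W, C) with H and W divisible by 2**(num_scales - 1).
    Illustrative only; any standard downsampling filter would serve.
    """
    levels = [img]
    for _ in range(num_scales - 1):
        h, w, c = levels[-1].shape
        # 2x2 average pooling: group pixels into 2x2 blocks and mean them
        pooled = levels[-1].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
        levels.append(pooled)
    return levels

pyr = image_pyramid(np.zeros((64, 64, 3)), num_scales=3)
print([lvl.shape for lvl in pyr])  # [(64, 64, 3), (32, 32, 3), (16, 16, 3)]
```

Each pyramid level is then encoded into an intermediate VAE representation, so an object that is too small at the native resolution still occupies a favorable pattern scale at some level.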
Problem

Research questions and friction points this paper is trying to address.

Enhances object representation in images
Addresses object size variability in models
Improves scale-invariance in object super-pixels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Scale Fusion enhances OCL
Image pyramid for varied object sizes
Inter/intra-scale fusion improves super-pixels
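The inter-scale fusion idea, augmenting low-quality super-pixels at one scale with high-quality ones from another, can be sketched as a per-cell selection between two scales. This is a hypothetical illustration: the per-cell `quality` scores and nearest-neighbor upsampling are assumptions, not the paper's actual fusion operator.

```python
import numpy as np

def inter_scale_fuse(feat_fine, feat_coarse, q_fine, q_coarse):
    """Hypothetical inter-scale fusion: for each spatial cell, keep the
    super-pixel feature from whichever scale scores higher quality.

    feat_fine:   (H, W, D) super-pixel features at the fine scale
    feat_coarse: (H/2, W/2, D) super-pixel features at the coarse scale
    q_fine:      (H, W) per-cell quality scores at the fine scale
    q_coarse:    (H/2, W/2) per-cell quality scores at the coarse scale
    """
    # Nearest-neighbor upsample coarse features/scores to the fine grid
    up_feat = feat_coarse.repeat(2, axis=0).repeat(2, axis=1)
    up_q = q_coarse.repeat(2, axis=0).repeat(2, axis=1)
    # True where the coarse scale is the higher-quality source
    mask = (up_q > q_fine)[..., None]
    return np.where(mask, up_feat, feat_fine)

fine = np.zeros((4, 4, 2))
coarse = np.ones((2, 2, 2))
fused = inter_scale_fuse(fine, coarse,
                         q_fine=np.zeros((4, 4)), q_coarse=np.ones((2, 2)))
print(fused.shape)  # (4, 4, 2); here every cell takes the coarse feature
```

Intra-scale fusion would operate analogously among super-pixels within one scale; the selective replacement shown here is only meant to convey the augment-the-weak-with-the-strong principle.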