🤖 AI Summary
This work addresses a limitation of conventional image segmentation—its heavy reliance on semantic categories, which hinders applicability to physical interaction tasks. We propose a category-agnostic segmentation paradigm grounded in Spelke objects: perceptual units defined by causal motion constraints and the capacity for coordinated movement. To this end, we introduce SpelkeBench, the first benchmark dataset explicitly designed for Spelke-object segmentation. We further present SpelkeNet, a novel method that leverages a visual world model to predict distributions over future motion, integrates motion-affordance and expected-displacement maps, and performs statistical counterfactual probing via virtual pokes to identify segments with correlated motion statistics. Experiments demonstrate that SpelkeNet significantly outperforms supervised baselines—including SAM—on SpelkeBench, and consistently improves performance across diverse models on the 3DEditBench physical manipulation benchmark. Our approach establishes a new perception–action coupling paradigm for embodied intelligence.
📝 Abstract
Segments in computer vision are often defined by semantic considerations and are highly dependent on category-specific conventions. In contrast, developmental psychology suggests that humans perceive the world in terms of Spelke objects--groupings of physical things that reliably move together when acted on by physical forces. Spelke objects are thus defined by category-agnostic causal motion relationships, which potentially better support tasks like manipulation and planning. In this paper, we first benchmark the Spelke object concept, introducing the SpelkeBench dataset, which contains a wide variety of well-defined Spelke segments in natural images. Next, to extract Spelke segments from images algorithmically, we build SpelkeNet, a class of visual world models trained to predict distributions over future motions. SpelkeNet supports estimation of two key concepts for Spelke object discovery: (1) the motion affordance map, identifying regions likely to move under a poke, and (2) the expected-displacement map, capturing how the rest of the scene will move. These concepts are used for "statistical counterfactual probing", where diverse "virtual pokes" are applied to regions of high motion affordance, and the resultant expected-displacement maps are used to define Spelke segments as statistical aggregates of correlated motion. We find that SpelkeNet outperforms supervised baselines like Segment Anything (SAM) on SpelkeBench. Finally, we show that the Spelke concept is practically useful for downstream applications, yielding superior performance on the 3DEditBench benchmark for physical object manipulation when used with a variety of off-the-shelf object manipulation models.
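The probing loop the abstract describes can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: `world_model` is a hypothetical stand-in for SpelkeNet's expected-displacement prediction, and all function and variable names here are assumptions for illustration only.

```python
import numpy as np

def spelke_segment(affordance, world_model, n_pokes=8, thresh=0.5, rng=None):
    """Toy statistical counterfactual probing (illustrative sketch).

    affordance : (H, W) map of how likely each pixel is to move if poked.
    world_model: fn((y, x)) -> (H, W, 2) expected-displacement map for that poke.
    Returns a boolean (H, W) mask of pixels whose motion is consistently
    correlated with the poked point across many virtual pokes.
    """
    rng = rng or np.random.default_rng(0)
    H, W = affordance.shape
    # Sample poke locations proportional to the motion-affordance map.
    p = affordance.ravel() / affordance.sum()
    idx = rng.choice(H * W, size=n_pokes, p=p)
    pokes = np.stack(np.unravel_index(idx, (H, W)), axis=1)

    votes = np.zeros((H, W))
    for poke in pokes:
        disp = world_model(tuple(poke))                 # (H, W, 2) displacements
        mag = np.linalg.norm(disp, axis=-1)
        ref = mag[poke[0], poke[1]] + 1e-8
        # Pixels that move comparably to the poked pixel vote "same object".
        votes += (mag > 0.5 * ref).astype(float)
    # Aggregate: keep pixels that co-moved in a majority of pokes.
    return (votes / n_pokes) > thresh

def toy_world(poke):
    """Hypothetical scene: a rigid 8x8 'object' in the top-left of a 16x16 image."""
    disp = np.zeros((16, 16, 2))
    obj = np.zeros((16, 16), dtype=bool)
    obj[:8, :8] = True
    if obj[poke]:                # poking anywhere on the object moves all of it
        disp[obj] = [1.0, 0.0]
    return disp

# Affordance is high only on the movable object, so pokes land on it.
affordance = np.zeros((16, 16))
affordance[:8, :8] = 1.0
mask = spelke_segment(affordance, toy_world)
```

In this toy setup the recovered mask is exactly the rigid object, since every pixel of it co-moves under every poke; in the paper's setting the displacement maps come from a learned world model and the aggregation is over its predicted motion statistics.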