🤖 AI Summary
This work addresses the challenge of establishing object-level cross-view correspondence between egocentric and exocentric perspectives. To tackle extreme viewpoint disparities, severe occlusions, and small-object localization, we propose an end-to-end framework built upon SAM 2. Our method introduces a dual-memory bank system and a novel Memory-View Mixture-of-Experts (MoE) module that adaptively routes and fuses multi-view semantic features, assigning expert weights jointly along the channel and spatial dimensions. Additionally, we incorporate a long-term memory compression strategy that eliminates redundancy while preserving discriminative cross-frame information. Evaluated on the EgoExo4D benchmark, our approach achieves state-of-the-art performance, significantly outperforming both the SAM 2 baseline and existing methods, and demonstrates strong generalization and robust long-video sequence modeling.
📝 Abstract
Establishing object-level correspondence between egocentric and exocentric views is essential for intelligent assistants to deliver precise and intuitive visual guidance. However, this task faces numerous challenges, including extreme viewpoint variations, occlusions, and the presence of small objects. Existing approaches usually borrow solutions from video object segmentation models, but still suffer from the aforementioned challenges. Recently, the Segment Anything Model 2 (SAM 2) has shown strong generalization capabilities and excellent performance in video object segmentation. Yet, when simply applied to the ego-exo correspondence (EEC) task, SAM 2 encounters severe difficulties due to ineffective ego-exo feature fusion and limited long-term memory capacity, especially for long videos. Addressing these problems, we propose a novel EEC framework based on SAM 2 with long-term memories, featuring a dual-memory architecture and an adaptive feature-routing module inspired by Mixture-of-Experts (MoE). Compared to SAM 2, our approach introduces (i) a Memory-View MoE module with a dual-branch routing mechanism that adaptively assigns contribution weights to each expert feature along both the channel and spatial dimensions, and (ii) a dual-memory bank system with a simple yet effective compression strategy that retains critical long-term information while eliminating redundancy. In extensive experiments on the challenging EgoExo4D benchmark, our method, dubbed LM-EEC, achieves new state-of-the-art results and significantly outperforms existing methods and the SAM 2 baseline, showcasing its strong generalization across diverse scenarios. Our code and model are available at https://github.com/juneyeeHu/LM-EEC.
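To make the dual-branch routing idea concrete, the sketch below shows one plausible reading of a Memory-View MoE fusion step: each expert feature map (e.g. ego, exo, and memory features) receives a per-channel gate and a per-location gate, and the jointly weighted experts are summed. This is a minimal illustrative sketch, not the paper's implementation; the module name `ChannelSpatialRouter`, the gating layers, and all shapes are assumptions.

```python
import torch
import torch.nn as nn


class ChannelSpatialRouter(nn.Module):
    """Hypothetical dual-branch MoE router (illustration only).

    Each of E expert features (B, C, H, W) is weighted along the
    channel dimension (sigmoid gate from a pooled descriptor) and
    along the spatial dimension (softmax over experts per pixel),
    then the weighted experts are summed into one fused map.
    """

    def __init__(self, channels: int, num_experts: int):
        super().__init__()
        self.channels = channels
        self.num_experts = num_experts
        # Channel branch: globally pooled descriptor -> per-expert channel weights.
        self.channel_gate = nn.Sequential(
            nn.Linear(channels * num_experts, channels * num_experts),
            nn.Sigmoid(),
        )
        # Spatial branch: 1x1 conv -> one weight map per expert.
        self.spatial_gate = nn.Conv2d(channels * num_experts, num_experts, kernel_size=1)

    def forward(self, experts: list[torch.Tensor]) -> torch.Tensor:
        # experts: list of E tensors, each (B, C, H, W).
        B, C, H, W = experts[0].shape
        stacked = torch.stack(experts, dim=1)   # (B, E, C, H, W)
        flat = stacked.flatten(1, 2)            # (B, E*C, H, W)

        # Channel weights from a global average descriptor: (B, E, C, 1, 1).
        pooled = flat.mean(dim=(2, 3))          # (B, E*C)
        ch_w = self.channel_gate(pooled).view(B, self.num_experts, C, 1, 1)

        # Spatial weights, normalized over experts at each location: (B, E, 1, H, W).
        sp_w = self.spatial_gate(flat).softmax(dim=1).unsqueeze(2)

        # Joint channel-spatial weighting, then fuse across experts: (B, C, H, W).
        return (stacked * ch_w * sp_w).sum(dim=1)
```

A usage example with three hypothetical experts (ego, exo, memory) of shape `(2, 8, 4, 4)` yields a fused map of the same shape, since weighting and summation preserve the per-expert feature layout.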