🤖 AI Summary
To address weak multi-task collaboration, poor concept reusability, and limited cross-task transfer in first-person video understanding, this paper proposes Hier-EgoPack, a unified framework that models tasks as portable, composable "concept backpacks." Its core innovations include hierarchical task disentanglement, contrastive concept alignment, and meta-perspective distillation, enabling joint modeling of action recognition, object interaction understanding, and future event prediction. The framework integrates self-supervised pretraining with task-aware adapters to support dynamic concept-level knowledge reuse and incremental expansion across heterogeneous tasks. Evaluated on the EPIC-Kitchens and Ego4D benchmarks, Hier-EgoPack achieves a 7.2% average accuracy gain across tasks and accelerates cold-start training for novel tasks by 3.1×, significantly improving multi-task generalization and knowledge transferability.
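The summary above describes the architecture only at a high level: a shared backbone, per-task "perspectives," and reuse of their knowledge when a new task is added. The sketch below is a minimal, hypothetical PyTorch illustration of that idea under stated assumptions, not the authors' implementation: the class names (`SharedBackbone`, `TaskPerspective`, `NovelTaskHead`), the dimensions, and the choice of cross-attention over a pooled set of frozen task prototypes are all illustrative assumptions.

```python
# Hypothetical sketch of a "backpack of task perspectives": each known task
# contributes prototype embeddings, and a novel task cross-attends over the
# collected prototypes to reuse their knowledge. Not the paper's actual code.
import torch
import torch.nn as nn


class SharedBackbone(nn.Module):
    """Maps per-segment video features to a shared temporal embedding."""

    def __init__(self, in_dim: int = 1024, hid_dim: int = 256):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # (B, T, in_dim)
        return self.proj(x)                              # (B, T, hid_dim)


class TaskPerspective(nn.Module):
    """Task-specific head whose class prototypes are later 'packed'."""

    def __init__(self, hid_dim: int, num_classes: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, hid_dim) * 0.02)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Classify each segment by similarity to the task prototypes.
        return feats @ self.prototypes.t()                # (B, T, num_classes)


class NovelTaskHead(nn.Module):
    """Head for a new task that queries the backpack of frozen prototypes."""

    def __init__(self, hid_dim: int, num_classes: int, num_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(hid_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(hid_dim, num_classes)

    def forward(self, feats: torch.Tensor, backpack: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, D); backpack: (P, D) prototypes gathered from past tasks.
        kv = backpack.unsqueeze(0).repeat(feats.size(0), 1, 1)
        ctx, _ = self.cross_attn(feats, kv, kv)           # borrow task knowledge
        return self.classifier(feats + ctx)


# Usage: train perspectives on known tasks, freeze their prototypes, then let
# a novel task attend over the packed collection (class counts are made up).
backbone = SharedBackbone()
known_tasks = {
    "action_recognition": TaskPerspective(256, 97),
    "object_interaction": TaskPerspective(256, 24),
}
backpack = torch.cat([t.prototypes.detach() for t in known_tasks.values()], dim=0)

novel_head = NovelTaskHead(hid_dim=256, num_classes=10)
video = torch.randn(2, 16, 1024)                          # (batch, segments, features)
logits = novel_head(backbone(video), backpack)            # (2, 16, 10)
```

In this toy version, freezing the packed prototypes keeps previously learned perspectives intact while the novel head only learns how to query them, which is one simple way to realize the "carry and reuse when needed" behavior described above.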
📝 Abstract
Our comprehension of video streams depicting human activities is naturally multifaceted: in just a few moments, we can grasp what is happening, identify the relevance and interactions of objects in the scene, and forecast what will happen next, all at once. To endow autonomous systems with such holistic perception, it is essential to learn how to correlate concepts, abstract knowledge across diverse tasks, and leverage task synergies when learning novel skills. In this paper, we introduce Hier-EgoPack, a unified framework able to create a collection of task perspectives that can be carried across downstream tasks and used as a potential source of additional insights: a backpack of skills that a robot can carry around and use when needed.