🤖 AI Summary
In Human-Object Interaction (HOI) tasks, detection and generation have long been modeled separately, which impedes holistic interaction understanding and limits generalization. To address this, we propose the first unified HOI paradigm, built on three components: (1) a shared token space that enables bidirectional semantic mapping between detection and generation; (2) a symmetric interaction-aware attention module that explicitly models mutual human-object dependencies; and (3) a unified semi-supervised learning framework that jointly optimizes both tasks under limited annotations. The approach deeply fuses visual and semantic representations without task-specific heads or dedicated decoders. Experiments show substantial gains: +4.9 mAP on long-tailed HOI detection and +42.0 Recall@100 on open-vocabulary interaction generation, significantly surpassing current state-of-the-art methods.
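The symmetric interaction-aware attention can be pictured as two cross-attention streams in which human tokens attend to object tokens and vice versa, so dependencies are modeled in both directions. No reference code is given here, so the sketch below is only a minimal PyTorch illustration of that idea under our own assumptions; the module and parameter names (`SymmetricInteractionAttention`, `dim`, `num_heads`) are hypothetical and not taken from UniHOI.

```python
import torch
import torch.nn as nn

class SymmetricInteractionAttention(nn.Module):
    """Minimal sketch (not UniHOI's implementation): human and object tokens
    cross-attend to each other symmetrically, modeling mutual dependencies."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # One cross-attention per direction: human->object and object->human.
        self.h2o = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.o2h = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_h = nn.LayerNorm(dim)
        self.norm_o = nn.LayerNorm(dim)

    def forward(self, human_tokens: torch.Tensor, object_tokens: torch.Tensor):
        # human_tokens: (B, N_h, dim), object_tokens: (B, N_o, dim)
        # Human queries gather object context ("which objects am I acting on?").
        h_ctx, _ = self.h2o(query=human_tokens, key=object_tokens, value=object_tokens)
        # Object queries gather human context ("which humans act on me?").
        o_ctx, _ = self.o2h(query=object_tokens, key=human_tokens, value=human_tokens)
        # Residual updates preserve the original token identities.
        return self.norm_h(human_tokens + h_ctx), self.norm_o(object_tokens + o_ctx)

if __name__ == "__main__":
    attn = SymmetricInteractionAttention()
    h = torch.randn(2, 4, 256)   # 4 human tokens per image
    o = torch.randn(2, 6, 256)   # 6 object tokens per image
    h_out, o_out = attn(h, o)
    print(h_out.shape, o_out.shape)  # (2, 4, 256) and (2, 6, 256)
```

Keeping the two directions as separate attention layers (rather than one self-attention over concatenated tokens) is one simple way to make the human-to-object and object-to-human updates explicit and symmetric.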
📝 Abstract
In the field of human-object interaction (HOI), detection and generation are dual tasks that have traditionally been addressed separately, hindering the development of comprehensive interaction understanding. To address this, we propose UniHOI, which jointly models HOI detection and generation via a unified token space, effectively promoting knowledge sharing and enhancing generalization. Specifically, we introduce a symmetric interaction-aware attention module and a unified semi-supervised learning paradigm that together enable effective bidirectional mapping between images and interaction semantics even under limited annotations. Extensive experiments demonstrate that UniHOI achieves state-of-the-art performance in both HOI detection and generation, improving accuracy by 4.9% on long-tailed HOI detection and interaction metrics by 42.0% on open-vocabulary generation tasks.
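One way to read the unified semi-supervised paradigm is as a joint objective that sums supervised detection and generation losses on annotated images with an unsupervised term that ties the two branches together in the shared token space. The abstract does not spell out the losses or weights, so the sketch below is a hypothetical illustration only: the function name `joint_semi_supervised_loss`, the coefficients `lambda_gen` and `lambda_unsup`, and the MSE consistency term are all assumptions, not UniHOI's actual objective.

```python
import torch
import torch.nn.functional as F

def joint_semi_supervised_loss(det_loss: torch.Tensor,
                               gen_loss: torch.Tensor,
                               token_consistency: torch.Tensor,
                               lambda_gen: float = 1.0,
                               lambda_unsup: float = 0.5) -> torch.Tensor:
    """Hypothetical joint objective: supervised detection + generation losses
    plus an unsupervised consistency term in the shared token space."""
    return det_loss + lambda_gen * gen_loss + lambda_unsup * token_consistency

# Illustrative consistency term on an unlabeled image: compare the tokens
# produced by the detection path with those produced by the generation path.
det_tokens = torch.randn(2, 8, 256)                              # detection-branch tokens
gen_tokens = det_tokens + 0.1 * torch.randn_like(det_tokens)     # generation-branch tokens
consistency = F.mse_loss(gen_tokens, det_tokens)

det_loss = torch.tensor(1.2)   # placeholder supervised detection loss
gen_loss = torch.tensor(0.8)   # placeholder supervised generation loss
print(joint_semi_supervised_loss(det_loss, gen_loss, consistency).item())
```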