Dynamic Scoring with Enhanced Semantics for Training-Free Human-Object Interaction Detection

📅 2025-07-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing HOI detection methods rely heavily on large-scale, manually annotated datasets, which limits their generalization to novel domains and rare interactions. To address this, the paper proposes DYSCO, a training-free framework that leverages vision-language models to enhance semantic interaction representations, builds a multimodal registry of textual and visual cues, and introduces transferable interaction signatures to improve verb alignment. An adaptive multi-head attention mechanism dynamically weights the contributions of text and vision features. Key contributions: (1) training-free, zero-shot HOI modeling; (2) a dynamic scoring mechanism grounded in semantic priors; and (3) a lightweight, plug-and-play design enabling cross-scene generalization. Experiments on HICO-DET and V-COCO show that DYSCO surpasses training-free state-of-the-art methods and is competitive with training-based approaches, with especially strong gains on rare interactions.

📝 Abstract
Human-Object Interaction (HOI) detection aims to identify humans and objects within images and interpret their interactions. Existing HOI methods rely heavily on large datasets with manual annotations to learn interactions from visual cues. These annotations are labor-intensive to create, prone to inconsistency, and limit scalability to new domains and rare interactions. We argue that recent advances in Vision-Language Models (VLMs) offer untapped potential, particularly in enhancing interaction representation. While prior work has begun to exploit this potential, including through training-free methods, key gaps remain. Consequently, we propose a novel training-free HOI detection framework for Dynamic Scoring with enhanced semantics (DYSCO) that effectively utilizes textual and visual interaction representations within a multimodal registry, enabling robust and nuanced interaction understanding. This registry incorporates a small set of visual cues and uses innovative interaction signatures to improve the semantic alignment of verbs, facilitating effective generalization to rare interactions. Additionally, we propose a unique multi-head attention mechanism that adaptively weights the contributions of the visual and textual features. Experimental results demonstrate that our DYSCO surpasses training-free state-of-the-art models and is competitive with training-based approaches, particularly excelling in rare interactions. Code is available at https://github.com/francescotonini/dysco.
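To make the training-free scoring idea concrete, here is a minimal sketch of classifying an interaction by comparing a query embedding against a multimodal registry of per-verb prototypes. The registry structure, the `score_interactions` function, and the `alpha` mixing weight are illustrative assumptions, not the paper's actual implementation (DYSCO's registry and signatures are richer than this).

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two 1-D embeddings.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def score_interactions(query, registry, alpha=0.5):
    """Score a query embedding against each verb's text and vision
    prototypes in a multimodal registry (hypothetical structure:
    {verb: {"text": vec, "vision": vec}}). `alpha` blends the two
    modalities; DYSCO instead weights them adaptively."""
    scores = {}
    for verb, protos in registry.items():
        t_sim = cosine(query, protos["text"])
        v_sim = cosine(query, protos["vision"])
        scores[verb] = alpha * t_sim + (1 - alpha) * v_sim
    return scores
```

In a real pipeline the prototypes would come from a VLM's text and image encoders; no gradient step is needed, which is what makes the approach training-free.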
Problem

Research questions and friction points this paper is trying to address.

Detect human-object interactions without manual annotations
Enhance semantic alignment for rare interactions
Adaptively combine visual and textual features dynamically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free HOI detection with enhanced semantics
Multimodal registry for visual and textual representations
Adaptive multi-head attention for feature weighting
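The third bullet can be sketched as follows: per head, attention weights derived from the query decide how much the text versus the vision feature contributes to the fused representation. This is a minimal illustration of query-conditioned multi-head weighting, not DYSCO's exact attention layer; the function name and head layout are assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def adaptive_fusion(query, text_feat, vision_feat, num_heads=2):
    """Fuse text and vision features of dimension d with per-head
    attention weights computed from the query (illustrative sketch)."""
    d = query.shape[0]
    assert d % num_heads == 0, "dimension must split evenly across heads"
    hd = d // num_heads
    fused = np.zeros(d)
    for h in range(num_heads):
        s = slice(h * hd, (h + 1) * hd)
        q, t, v = query[s], text_feat[s], vision_feat[s]
        # Scaled dot-product scores against the two modality features,
        # turned into a convex combination via softmax.
        w = softmax(np.array([q @ t, q @ v]) / np.sqrt(hd))
        fused[s] = w[0] * t + w[1] * v
    return fused
```

Because the weights are recomputed per query and per head, the balance between modalities shifts with the input, which is the "adaptive" part of the mechanism.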