🤖 AI Summary
This work addresses a critical limitation in current human-object interaction (HOI) detection evaluation metrics—such as mean average precision (mAP)—which treat interaction categories as discrete labels and rely solely on exact lexical matching. Such an approach is ill-suited for open-vocabulary settings and fails to credit semantically correct predictions that differ only in phrasing. To overcome this, the paper introduces SHOE, a semantic similarity–based evaluation framework for open-vocabulary HOI assessment. SHOE decomposes each interaction into its verb and object components and leverages multiple large language models to compute their respective semantic similarities, which are then fused into a unified HOI score. Evaluated on benchmarks such as HICO-DET, SHOE achieves 85.73% agreement with human judgments, substantially outperforming existing metrics and aligning more closely with human understanding of human-object interactions.
📝 Abstract
Open-vocabulary human-object interaction (HOI) detection is a step towards building scalable systems that generalize to unseen interactions in real-world scenarios and support grounded multimodal systems that reason about human-object relationships. However, standard evaluation metrics, such as mean Average Precision (mAP), treat HOI classes as discrete categorical labels and fail to credit semantically valid but lexically different predictions (e.g., "lean on couch" vs. "sit on couch"), limiting their applicability for evaluating open-vocabulary predictions that go beyond any predefined set of HOI labels. We introduce SHOE (Semantic HOI Open-Vocabulary Evaluation), a new evaluation framework that incorporates semantic similarity between predicted and ground-truth HOI labels. SHOE decomposes each HOI prediction into its verb and object components, estimates their semantic similarity by averaging scores from multiple large language models (LLMs), and combines them into a similarity score to evaluate alignment beyond exact string match. This enables a flexible and scalable evaluation of both existing HOI detection methods and open-ended generative models using standard benchmarks such as HICO-DET. Experimental results show that SHOE scores align more closely with human judgments than existing metrics, including LLM-based and embedding-based baselines, achieving an agreement of 85.73% with the average human ratings. Our work underscores the need for semantically grounded HOI evaluation that better mirrors human understanding of interactions. We will release our evaluation metric to the public to facilitate future research.
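The scoring pipeline described above (decompose into verb and object, average similarity judgments from multiple LLMs, fuse into one HOI score) can be sketched as follows. Note that the paper's actual LLM prompts and fusion rule are not given here, so the `judges` interface and the mean-based fusion below are illustrative assumptions, with a trivial exact-match judge standing in for a real LLM:

```python
def shoe_score(pred, gt, judges, fuse=lambda v, o: (v + o) / 2):
    """Sketch of a SHOE-style score for one predicted HOI label.

    pred, gt : (verb, object) string pairs, e.g. ("lean on", "couch")
    judges   : callables (text_a, text_b) -> similarity in [0, 1]; in the
               paper these are multiple LLMs whose scores are averaged
    fuse     : combines verb and object similarities into a single HOI
               score (the paper's exact fusion rule is an assumption here)
    """
    def avg_sim(a, b):
        # Average the similarity judgments across all judges.
        return sum(judge(a, b) for judge in judges) / len(judges)

    verb_sim = avg_sim(pred[0], gt[0])   # verb-component similarity
    obj_sim = avg_sim(pred[1], gt[1])    # object-component similarity
    return fuse(verb_sim, obj_sim)


# Toy judge: exact match -> 1.0, else 0.0 (a real judge would query an LLM,
# which could credit "lean on" vs. "sit on" with a partial score).
exact = lambda a, b: 1.0 if a == b else 0.0

score = shoe_score(("lean on", "couch"), ("sit on", "couch"), judges=[exact])
# verb differs, object matches -> mean fusion gives 0.5 with this toy judge
```

Unlike exact string matching, which would assign this prediction a score of 0, a decomposed score preserves the credit for the correctly identified object, and an LLM judge would additionally recover partial credit for the semantically close verb.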