Talk in Pieces, See in Whole: Disentangling and Hierarchical Aggregating Representations for Language-based Object Detection

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models exhibit weak fine-grained alignment and high false-positive rates on complex queries containing descriptive attributes and relative clauses, primarily because the text encoder cannot disentangle target objects from their modifiers. To address this, we propose a framework driven by sentence-level structural priors that disentangles language representations and hierarchically aggregates them. Our method introduces, for the first time, a three-component disentanglement mechanism (object, attribute, and relation) and a hierarchical aggregation network that yields text representations with a strong inductive bias and compositional modeling capability. We further design hierarchical synthetic caption data, a disentanglement-aware subspace loss, and hierarchy-guided aggregation objectives. On the OmniLabel benchmark, our approach achieves a 24% improvement in mean Average Precision (mAP) and markedly suppresses false detections under complex queries.

📝 Abstract
While vision-language models (VLMs) have made significant progress in multimodal perception (e.g., open-vocabulary object detection) with simple language queries, state-of-the-art VLMs still show limited ability to perceive complex queries involving descriptive attributes and relational clauses. Our in-depth analysis shows that these limitations mainly stem from the text encoders in VLMs: they behave like bags of words and fail to separate target objects from their descriptive attributes and relations in complex queries, resulting in frequent false positives. To address this, we propose restructuring linguistic representations according to the hierarchical relations within sentences for language-based object detection. A key insight is the necessity of disentangling textual tokens into core components (objects, attributes, and relations; "talk in pieces") and subsequently aggregating them into hierarchically structured sentence-level representations ("see in whole"). Building on this principle, we introduce the TaSe framework with three main contributions: (1) a hierarchical synthetic captioning dataset spanning three tiers, from category names to descriptive sentences; (2) Talk in Pieces, a three-component disentanglement module, guided by a novel disentanglement loss, that transforms text embeddings into subspace compositions; and (3) See in Whole, which learns to aggregate the disentangled components into hierarchically structured embeddings under the guidance of the proposed hierarchical objectives. TaSe strengthens the inductive bias toward hierarchical linguistic structure, yielding fine-grained multimodal representations for language-based object detection. Experimental results on the OmniLabel benchmark show a 24% performance improvement, demonstrating the importance of linguistic compositionality.
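The "talk in pieces, see in whole" pipeline can be sketched in a few lines. The following is a minimal illustration only, not the paper's implementation: the random projection matrices, the cosine-based subspace penalty, and the fixed aggregation weights are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 64, 16  # token embedding dim and subspace dim (illustrative sizes)

# Hypothetical linear projections into object / attribute / relation subspaces.
W = {name: rng.standard_normal((k, d)) / np.sqrt(d)
     for name in ("object", "attribute", "relation")}

def disentangle(tokens):
    """'Talk in Pieces': project token embeddings into the three subspaces."""
    return {name: tokens @ Wc.T for name, Wc in W.items()}

def subspace_loss(parts):
    """Toy disentanglement penalty: mean absolute cosine similarity between
    pooled component vectors (small when the subspaces are near-orthogonal)."""
    pooled = [p.mean(axis=0) for p in parts.values()]
    total, pairs = 0.0, 0
    for i in range(len(pooled)):
        for j in range(i + 1, len(pooled)):
            a, b = pooled[i], pooled[j]
            total += abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
            pairs += 1
    return total / pairs

def aggregate(parts, weights=(0.5, 0.3, 0.2)):
    """'See in Whole': recombine pooled components into one sentence-level
    embedding (the object-first weighting here is an assumption)."""
    pooled = [p.mean(axis=0) for p in parts.values()]
    return sum(w * v for w, v in zip(weights, pooled))

tokens = rng.standard_normal((7, d))  # 7 token embeddings for one query
parts = disentangle(tokens)
sentence = aggregate(parts)
print(sentence.shape)  # (16,)
```

In the actual framework the projections and aggregation are learned, and the disentanglement and hierarchical objectives supervise them jointly; this sketch only shows the data flow.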
Problem

Research questions and friction points this paper is trying to address.

VLMs struggle with complex queries involving attributes and relations
Text encoders fail to separate objects from descriptive components
Need hierarchical linguistic representations for object detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangling text embeddings into object, attribute, relation components
Hierarchically aggregating components into structured sentence representations
Using synthetic captioning dataset and novel loss for fine-grained detection
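The three-tier synthetic caption hierarchy (category name, attributed phrase, full descriptive sentence) can be illustrated with a small data-structure sketch. The tier names and string templates below are assumptions for the example, not the authors' generation pipeline.

```python
from dataclasses import dataclass

@dataclass
class CaptionTiers:
    category: str     # tier 1: bare category name, e.g. "dog"
    attributed: str   # tier 2: attribute + category, e.g. "brown dog"
    descriptive: str  # tier 3: full sentence including a relation

def build_tiers(category: str, attribute: str, relation: str) -> CaptionTiers:
    """Compose one hierarchical caption triple from its pieces."""
    return CaptionTiers(
        category=category,
        attributed=f"{attribute} {category}",
        descriptive=f"the {attribute} {category} {relation}",
    )

tiers = build_tiers("dog", "brown", "sitting next to a bench")
print(tiers.descriptive)  # the brown dog sitting next to a bench
```

Training on all three tiers of the same instance is what lets the model align a query at whichever level of descriptive detail it arrives.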
Sojung An
Korea University
Kwanyong Park
Assistant Professor, University of Seoul
Computer vision, Machine learning
Yong Jae Lee
University of Wisconsin-Madison
Donghyun Kim
Korea University