🤖 AI Summary
Open-vocabulary learning suffers from distribution estimation bias because seen and unseen classes coexist in open environments: existing methods model only seen-class data and cannot quantify the estimation error induced by the missing unseen classes. This work theoretically establishes, for the first time, that generating unseen-class data yields a computable upper bound on the distribution estimation error. Building on this insight, we propose a class- and domain-wise generation pipeline guided by a hierarchical semantic tree and domain information inferred from seen-class data. Integrated with distribution alignment and posterior probability maximization, the approach enables bounded-error distribution modeling. Extensive experiments on 11 benchmark datasets demonstrate performance gains of up to 14%, significantly improving open-set generalization and the reliability of distribution estimation.
📝 Abstract
Open-vocabulary learning requires modeling the data distribution in open environments, which consists of both seen-class and unseen-class data.
Existing methods estimate the distribution in open environments using seen-class data, where the absence of unseen classes makes the estimation error inherently unidentifiable.
Intuitively, learning beyond the seen classes is crucial for distribution estimation to bound the estimation error.
We theoretically demonstrate that the distribution can be effectively estimated by generating unseen-class data, through which the estimation error is upper-bounded.
Building on this theoretical insight, we propose a novel open-vocabulary learning method, which generates unseen-class data for estimating the distribution in open environments. The method consists of a class-domain-wise data generation pipeline and a distribution alignment algorithm. The data generation pipeline generates unseen-class data under the guidance of a hierarchical semantic tree and domain information inferred from the seen-class data, facilitating accurate distribution estimation. With the generated data, the distribution alignment algorithm estimates and maximizes the posterior probability to enhance generalization in open-vocabulary learning. Extensive experiments on 11 datasets demonstrate that our method outperforms baseline approaches by up to 14%, highlighting its effectiveness and superiority.
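To make the core idea concrete, here is a minimal toy sketch (not the paper's actual pipeline) of generating unseen-class data guided by a semantic hierarchy and then estimating a posterior over all classes. The semantic tree, class names, 1-D prototypes, and unit-variance Gaussian likelihoods are all illustrative assumptions chosen for brevity:

```python
import math
import random

random.seed(0)

# Hypothetical semantic tree: "wolf" is unseen; its siblings "dog"
# and "fox" are seen classes under the same parent node.
tree = {"canine": ["dog", "fox", "wolf"]}
seen_protos = {"dog": 0.0, "fox": 2.0}  # illustrative 1-D class prototypes

# Class-level generation: synthesize the unseen-class prototype as the
# mean of its seen siblings in the tree, then sample features around it.
siblings = [c for c in tree["canine"] if c in seen_protos]
wolf_proto = sum(seen_protos[c] for c in siblings) / len(siblings)
generated = [random.gauss(wolf_proto, 0.5) for _ in range(200)]

# Posterior over {dog, fox, wolf} assuming unit-variance Gaussian
# likelihoods and a uniform class prior.
def posterior(x, protos):
    lik = {c: math.exp(-0.5 * (x - m) ** 2) for c, m in protos.items()}
    z = sum(lik.values())
    return {c: v / z for c, v in lik.items()}

protos = dict(seen_protos, wolf=wolf_proto)
p = posterior(1.0, protos)  # query a point between the two seen classes
```

With the generated prototype included, the posterior at a point midway between the seen classes assigns its highest mass to the unseen class, which a seen-class-only model could never do; this is the intuition behind bounding the estimation error via generation.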