🤖 AI Summary
This work addresses object-state classification as a zero-shot learning (ZSL) problem, proposing the first object-agnostic state classification framework: one that infers an object's state directly, without recognizing or relying on its category identity. Methodologically, a state semantic knowledge graph (KG) models cross-object state relationships, and visual features are jointly embedded with the KG representations in an end-to-end zero-shot transfer architecture. Experiments demonstrate that object category information is not necessary for accurate state prediction: the proposed paradigm improves zero-shot generalization and substantially outperforms existing object-attribute classification methods across multiple benchmark datasets. The results validate both the effectiveness and the robustness of cross-object state reasoning, laying a foundation for category-agnostic state understanding in ZSL.
📝 Abstract
We investigate the problem of Object State Classification (OSC) as a zero-shot learning problem. Specifically, we propose the first Object-agnostic State Classification (OaSC) method, which infers the state of a given object without relying on knowledge or estimation of the object class. To that end, we capitalize on Knowledge Graphs (KGs) for structuring and organizing knowledge; in combination with visual information, this enables the inference of the states of objects in object/state pairs that were not encountered in the method's training set. A series of experiments investigates the performance of the proposed method in various settings, against several hypotheses, and in comparison with state-of-the-art approaches for object attribute classification. The experimental results demonstrate that knowledge of an object's class is not decisive for predicting its state. Moreover, the proposed OaSC method outperforms existing methods on all datasets and benchmarks by a large margin.
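The abstract describes matching visual features against KG-derived state representations so that unseen object/state pairs can still be classified. The following is a minimal illustrative sketch of that idea, not the paper's actual OaSC architecture: it assumes a toy state graph, GCN-style neighborhood averaging to produce state embeddings, and cosine-similarity matching against a visual feature. All state names, graph edges, and vectors here are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical toy state graph over three states; edges link
# semantically related states (these relations are illustrative).
names = ["open", "closed", "sliced"]
adj = np.array([[0., 1., 0.],   # open -- closed
                [1., 0., 1.],   # closed -- sliced
                [0., 1., 0.]])
sem = np.eye(3)                 # stand-in initial semantic vectors


def propagate(adj, feats, hops=2):
    """GCN-style smoothing: average each node with its neighbors."""
    a = adj + np.eye(adj.shape[0])          # add self-loops
    a = a / a.sum(axis=1, keepdims=True)    # row-normalize
    for _ in range(hops):
        feats = a @ feats
    return feats


def predict_state(visual, state_embs, names):
    """Return the state whose KG embedding best matches the visual feature."""
    v = visual / np.linalg.norm(visual)
    s = state_embs / np.linalg.norm(state_embs, axis=1, keepdims=True)
    return names[int(np.argmax(s @ v))]


state_embs = propagate(adj, sem)
# A visual feature aligned with the "open" semantic axis.
prediction = predict_state(np.array([1.0, 0.0, 0.0]), state_embs, names)
```

Because the state embeddings come from the graph rather than from any object category, the same matching step applies unchanged to object/state combinations never seen in training, which is the core of the zero-shot transfer the abstract describes.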