🤖 AI Summary
Existing zero-shot multimodal information extraction methods struggle to handle real-world scenarios where seen and unseen categories coexist, and Euclidean space fails to effectively capture the hierarchical semantic relationships between instances and category prototypes. To address these limitations, this work introduces hyperbolic space into generalized zero-shot multimodal information extraction for the first time, proposing a multimodal generative representation learning framework. By integrating a variational information bottleneck with an autoencoder, the model explicitly captures multi-level semantic structures. Furthermore, a semantic similarity distribution alignment loss is designed to bridge the semantic gap between seen and unseen categories. Experimental results demonstrate that the proposed method significantly outperforms current baselines on two benchmark datasets, confirming its superior generalization capability under the generalized zero-shot setting.
📝 Abstract
Multimodal information extraction (MIE) constitutes a set of essential tasks aimed at extracting structural information from Web texts by integrating images, to facilitate the construction of Web-based semantic knowledge. To handle the expanding category set, which includes newly emerging entity types and relations on websites, prior research proposed the zero-shot MIE (ZS-MIE) task, which aims to extract unseen structural knowledge from textual and visual modalities. However, existing ZS-MIE models are limited to recognizing samples that fall within the unseen category set, and they struggle to deal with real-world scenarios that encompass both seen and unseen categories. The shortcomings of existing methods stem from two main aspects. On one hand, these methods construct representations of samples and categories in Euclidean space, failing to capture the hierarchical semantic relationships between the two modalities within a sample and their corresponding category prototypes. On the other hand, there is a notable gap between the semantic similarity distributions of the seen and unseen category sets, which impairs the generalization capability of ZS-MIE models. To overcome these limitations, we delve into the generalized zero-shot MIE (GZS-MIE) task and propose the hyperbolic multimodal generative representation learning framework (HMGRL). The variational information bottleneck and autoencoder networks are reconstructed in hyperbolic space to model the multi-level hierarchical semantic correlations among samples and prototypes. Furthermore, the proposed model is trained with unseen samples generated by the decoder, and we introduce a semantic similarity distribution alignment loss to enhance the model's generalization performance. Experimental evaluations on two benchmark datasets underscore the superiority of HMGRL over existing baseline methods.
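To make the hyperbolic-space intuition concrete, the sketch below computes the standard geodesic distance on the Poincaré ball, the model of hyperbolic geometry commonly used for embedding hierarchies. This is a generic illustration, not the paper's implementation; the prototype and instance vectors are hypothetical. The key property it exhibits is that distances grow rapidly toward the ball's boundary, giving the tree-like capacity that Euclidean space lacks for modeling instance–prototype hierarchies.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball
    (constant curvature -1): d(u, v) = arccosh(1 + 2||u-v||^2 /
    ((1 - ||u||^2)(1 - ||v||^2)))."""
    sq_diff = np.sum((u - v) ** 2)
    nu = np.sum(u ** 2)
    nv = np.sum(v ** 2)
    denom = max((1.0 - nu) * (1.0 - nv), eps)  # guard against boundary blow-up
    return np.arccosh(1.0 + 2.0 * sq_diff / denom)

# Hypothetical embeddings: a category prototype near the origin ("general")
# and instances pushed toward the boundary ("specific").
prototype = np.array([0.0, 0.0])
instance = np.array([0.5, 0.0])
deep_instance = np.array([0.95, 0.0])
```

Although the Euclidean gaps prototype→instance and instance→deep_instance are similar (0.5 vs 0.45), the hyperbolic distance of the second pair is much larger, because `deep_instance` sits near the boundary; this distortion is what lets a fixed-dimensional ball encode many hierarchy levels.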