🤖 AI Summary
To address the high computational cost and poor interpretability of Transformer models in environmental claim detection, this paper proposes a lightweight graph neural network (GNN) approach. Specifically, sentences are modeled as dependency parse graphs, with Word2Vec embeddings as node features and dependency relations encoded as edge features. Crucially, we introduce hyperbolic graph neural networks (HGNNs) to this task for the first time, leveraging the intrinsic hierarchical representational capacity of hyperbolic space to capture linguistic semantic hierarchies. Our method matches or surpasses state-of-the-art Transformer performance while using 30× fewer parameters, significantly improving inference efficiency and decision transparency. Experimental results demonstrate HGNNs' distinct advantages in graph-structured modeling and interpretability, establishing a novel paradigm for environmental text analysis under low-resource and high-trustworthiness constraints.
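The hierarchical bias of hyperbolic space comes from its distance metric, which grows rapidly toward the boundary of the ball. A minimal stdlib sketch of the Poincaré-ball distance often used in hyperbolic networks (illustrative only, not the authors' implementation):

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit Poincare ball.

    d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    """
    diff2 = sum((a - b) ** 2 for a, b in zip(u, v))  # squared Euclidean gap
    nu2 = sum(a * a for a in u)                      # ||u||^2
    nv2 = sum(b * b for b in v)                      # ||v||^2
    return math.acosh(1 + 2 * diff2 / ((1 - nu2) * (1 - nv2)))

# Distance from the origin to a point at Euclidean norm 0.5:
# equals 2 * artanh(0.5) = ln(3) ~ 1.0986.
print(poincare_distance([0.0, 0.0], [0.5, 0.0]))
```

Points near the ball's boundary are exponentially far from each other, which is why tree-like (hierarchical) structures embed with low distortion.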
📝 Abstract
Transformer-based models dominate NLP tasks such as sentiment analysis, machine translation, and claim verification. However, their massive computational demands and lack of interpretability pose challenges for real-world applications requiring efficiency and transparency. In this work, we explore Graph Neural Networks (GNNs) and Hyperbolic Graph Neural Networks (HGNNs) as lightweight yet effective alternatives for Environmental Claim Detection, reframing it as a graph classification problem. We construct dependency parse graphs to explicitly model syntactic structure, using simple word embeddings (word2vec) as node features and dependency relations as edge features. Our results demonstrate that these graph-based models achieve comparable or superior performance to state-of-the-art Transformers while using 30× fewer parameters, highlighting the potential of structured, interpretable, and computationally efficient graph-based approaches.
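The graph construction described above can be sketched in a few lines. Here a hardcoded dependency parse stands in for a real parser (e.g. spaCy), and a toy embedding stands in for word2vec; all names are illustrative assumptions, not the authors' code:

```python
# Each entry: (token, head index, dependency relation); head == -1 marks the root.
# A real pipeline would obtain this from a dependency parser such as spaCy.
parse = [
    ("We",        1, "nsubj"),
    ("reduce",   -1, "ROOT"),
    ("carbon",    3, "compound"),
    ("emissions", 1, "dobj"),
]

def build_graph(parse, embed):
    """Turn one parsed sentence into (node features, typed edges).

    Nodes are tokens with embedding vectors as features; edges run
    head -> dependent and carry the dependency relation as an edge feature.
    """
    nodes = [embed(tok) for tok, _, _ in parse]       # word2vec stand-in
    edges = [(head, i, rel)
             for i, (_, head, rel) in enumerate(parse)
             if head != -1]                           # the root has no incoming arc
    return nodes, edges

# Toy deterministic "embedding": a one-dimensional vector of token length.
nodes, edges = build_graph(parse, embed=lambda tok: [float(len(tok))])
print(len(nodes), edges)
```

A graph classifier (GNN or HGNN) would then consume these per-sentence graphs, pooling node representations into a single claim/non-claim prediction.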