🤖 AI Summary
Problem: Text classification in complex semantic contexts suffers from insufficient multi-scale feature fusion and weak modeling of latent logical dependencies. Method: This paper proposes a large language model (LLM)-driven, multi-scale, graph-based collaborative modeling framework. First, an LLM performs fine-grained contextual encoding. Second, a semantic feature pyramid is constructed to fuse representations across word-, phrase-, and sentence-level granularities. Third, the fused features are mapped onto a heterogeneous text graph, where a graph neural network (GNN) captures cross-segment semantic associations and logical dependencies. Contribution/Results: The framework unifies deep semantic representation learning with structured relational reasoning. Extensive experiments on multiple benchmark datasets demonstrate significant improvements over state-of-the-art methods in accuracy (ACC), F1-score, AUC, and precision, validating both the effectiveness and the robustness of the approach.
📝 Abstract
This study investigates a hybrid text classification method that integrates deep feature extraction from large language models, multi-scale fusion through feature pyramids, and structured modeling with graph neural networks to enhance performance in complex semantic contexts. First, the large language model captures contextual dependencies and deep semantic representations of the input text, providing a rich feature foundation for subsequent modeling. Then, building on these multi-level representations, the feature pyramid mechanism integrates semantic features at different scales, balancing global information and local details to construct hierarchical semantic expressions. Next, the fused features are transformed into graph representations, and graph neural networks capture latent semantic relations and logical dependencies in the text, enabling comprehensive modeling of complex interactions among semantic units. Finally, the readout and classification modules generate the category predictions. The proposed method demonstrates significant advantages in robustness evaluations, outperforming existing models on ACC, F1-score, AUC, and precision, which verifies the effectiveness and stability of the framework. This study not only constructs an integrated framework that balances global and local information as well as semantics and structure, but also offers a new perspective on multi-scale feature fusion and structured semantic modeling for text classification tasks.
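The four-stage pipeline described above (contextual encoding, pyramid fusion, graph propagation, readout) can be sketched in a minimal NumPy toy example. Everything here is an assumption for illustration: random token embeddings stand in for LLM outputs, the pooling windows, graph wiring, and dimensions are all hypothetical, and a single GCN-style averaging step replaces a trained GNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- not taken from the paper.
n_tokens, d, n_classes = 12, 16, 3

# 1) "LLM" contextual encoding: random embeddings stand in for real outputs.
tokens = rng.normal(size=(n_tokens, d))

def pool(x, window):
    """Average-pool rows of x over non-overlapping windows of the given size."""
    n = (len(x) // window) * window
    return x[:n].reshape(-1, window, x.shape[1]).mean(axis=1)

# 2) Semantic feature pyramid: word-, phrase-, and sentence-level scales.
word_level = tokens                               # scale 1: words
phrase_level = pool(tokens, 3)                    # scale 2: 3-token "phrases"
sent_level = tokens.mean(axis=0, keepdims=True)   # scale 3: whole sentence

# Fuse the scales into one node set for the text graph.
nodes = np.vstack([word_level, phrase_level, sent_level])
n = len(nodes)

# 3) Toy heterogeneous text graph: self-loops, a word-order chain,
#    and a sentence node connected to every other node.
A = np.eye(n)
for i in range(n_tokens - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0   # adjacent-word edges
A[:, -1] = A[-1, :] = 1.0             # sentence node links all nodes

# One GCN-style propagation step: H = ReLU(D^-1 A X W).
W = rng.normal(scale=0.1, size=(d, d))
D_inv = 1.0 / A.sum(axis=1, keepdims=True)
H = np.maximum(D_inv * (A @ nodes) @ W, 0.0)

# 4) Readout + classification: mean-pool nodes, linear head, softmax.
W_out = rng.normal(scale=0.1, size=(d, n_classes))
logits = H.mean(axis=0) @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape, float(probs.sum()))
```

In a real system, the random embeddings would come from an LLM encoder, the adjacency would encode learned or syntactic relations, and the propagation and readout weights would be trained end to end; the sketch only shows how the stages compose.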