🤖 AI Summary
To address the challenges of sparse graph structures, strong noise interference, and the inability of static modeling to capture dynamic relational evolution in social media personality detection, this paper proposes an LLM-driven self-supervised dynamic graph optimization framework. The method leverages large language models to enhance semantic understanding, enabling adaptive node and edge addition/deletion. It employs end-to-end multi-task training via joint graph reconstruction, link prediction, and contrastive learning. Unlike conventional static graph approaches, our framework explicitly models the temporal evolution of personality traits across user interactions. Experiments on the Kaggle and Pandora datasets demonstrate that the proposed method significantly outperforms state-of-the-art baselines across all five personality dimensions, achieving an average accuracy improvement of 4.2%. Moreover, it exhibits superior robustness to both structural sparsity and noisy input data.
📝 Abstract
Graph-based personality detection constructs graph structures from textual data, particularly social media posts. Current methods often struggle with sparse or noisy data and rely on static graphs, limiting their ability to capture dynamic changes between nodes and relationships. This paper introduces LL4G, a self-supervised framework leveraging large language models (LLMs) to optimize graph neural networks (GNNs). LLMs extract rich semantic features to generate node representations and to infer explicit and implicit relationships. The graph structure adaptively adds nodes and edges based on input data, continuously optimizing itself. The GNN then uses these optimized representations for joint training on node reconstruction, edge prediction, and contrastive learning tasks. This integration of semantic and structural information generates robust personality profiles. Experimental results on Kaggle and Pandora datasets show LL4G outperforms state-of-the-art models.