LLM-Guided Dynamic-UMAP for Personalized Federated Graph Learning

📅 2025-11-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses personalized federated graph learning under sparse graphs, few-shot settings, and low-resource constraints. Methodologically, it introduces a novel framework integrating large language models (LLMs) with federated graph learning: (1) leveraging LLMs to generate few-shot reasoning signals and augment graph data via prompt tuning; (2) designing dynamic UMAP-based manifold modeling to capture client-specific graph embeddings; (3) incorporating cross-modal regularization to align LLM latent representations with structural graph embeddings; and (4) proposing a variational aggregation mechanism to ensure convergence, augmented with differential privacy via the Moments Accountant. Experiments on knowledge graph completion, recommendation systems, and citation networks demonstrate significant improvements in node classification and link prediction performance. The framework achieves strong personalization capability, generalizability across heterogeneous clients, and rigorous privacy preservation—without compromising model utility.

Technology Category

Application Category

📝 Abstract
We propose a method that uses large language models to assist graph machine learning under personalization and privacy constraints. The approach combines data augmentation for sparse graphs, prompt and instruction tuning to adapt foundation models to graph tasks, and in-context learning to supply few-shot graph reasoning signals. These signals parameterize a Dynamic UMAP manifold of client-specific graph embeddings inside a Bayesian variational objective for personalized federated learning. The method supports node classification and link prediction in low-resource settings and aligns language model latent representations with graph structure via a cross-modal regularizer. We outline a convergence argument for the variational aggregation procedure, describe a differential privacy threat model based on a moments accountant, and present applications to knowledge graph completion, recommendation-style link prediction, and citation and product graphs. We also discuss evaluation considerations for benchmarking LLM-assisted graph machine learning.
Problem

Research questions and friction points this paper is trying to address.

Enhancing graph machine learning under privacy constraints
Adapting foundation models for graph reasoning tasks
Supporting node classification in low-resource settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs guide graph learning with data augmentation
Prompt tuning adapts foundation models to graph tasks
Dynamic UMAP parameterizes embeddings for federated learning
🔎 Similar Papers
No similar papers found.