Graph Your Own Prompt

πŸ“… 2025-09-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Deep learning models often capture spurious inter-class similarities that contradict semantic meaning, leading to ambiguous and poorly discriminative feature representations. To address this, we propose Graph Consistency Regularization (GCR), a plug-and-play method requiring no architectural or training pipeline modifications. GCR constructs a global, class-aware graph from softmax predictions and aligns it with a batch-level feature similarity graph. A parameter-free Graph Consistency Layer enables multi-layer, cross-space graph structural matching and adaptive weight learning, guided by the prediction graph to refine feature embeddings. Our core innovation is a self-prompted, structure-driven feature semantic calibration mechanism. Extensive experiments demonstrate that GCR consistently improves intra-class compactness, inter-class separability, and generalization across diverse architectures and benchmarks, delivering stable performance gains without additional inference cost.

πŸ“ Abstract
We propose Graph Consistency Regularization (GCR), a novel framework that injects relational graph structures, derived from model predictions, into the learning process to promote class-aware, semantically meaningful feature representations. Functioning as a form of self-prompting, GCR enables the model to refine its internal structure using its own outputs. While deep networks learn rich representations, these often capture noisy inter-class similarities that contradict the model's predicted semantics. GCR addresses this issue by introducing parameter-free Graph Consistency Layers (GCLs) at arbitrary depths. Each GCL builds a batch-level feature similarity graph and aligns it with a global, class-aware masked prediction graph, derived by modulating softmax prediction similarities with intra-class indicators. This alignment enforces that feature-level relationships reflect class-consistent prediction behavior, acting as a semantic regularizer throughout the network. Unlike prior work, GCR introduces a multi-layer, cross-space graph alignment mechanism with adaptive weighting, where layer importance is learned from graph discrepancy magnitudes. This allows the model to prioritize semantically reliable layers and suppress noisy ones, enhancing feature quality without modifying the architecture or training procedure. GCR is model-agnostic, lightweight, and improves semantic structure across various networks and datasets. Experiments show that GCR promotes cleaner feature structure, stronger intra-class cohesion, and improved generalization, offering a new perspective on learning from prediction structure. [Project website](https://darcyddx.github.io/gcr/) [Code](https://github.com/Darcyddx/graph-prompt)
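The core alignment step described in the abstract can be sketched numerically: build a cosine-similarity graph over a batch's features, build a second graph over the softmax predictions, mask the prediction graph with an intra-class indicator, and penalize the discrepancy between the two. This is a minimal, hedged reconstruction from the abstract alone; function and variable names (`graph_consistency_loss`, `g_feat`, `g_target`) are illustrative, not the authors' implementation.

```python
import numpy as np

def graph_consistency_loss(features, probs, eps=1e-8):
    """One-layer sketch of GCR-style graph alignment (illustrative).

    features: (B, D) embeddings from some layer of the network.
    probs:    (B, C) softmax predictions for the same batch.
    """
    # Batch-level feature similarity graph (cosine similarity).
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    g_feat = f @ f.T
    # Prediction similarity graph from softmax outputs.
    p = probs / (np.linalg.norm(probs, axis=1, keepdims=True) + eps)
    g_pred = p @ p.T
    # Class-aware mask: retain prediction similarities only between
    # samples sharing the same predicted class (intra-class indicator).
    labels = probs.argmax(axis=1)
    mask = (labels[:, None] == labels[None, :]).astype(g_pred.dtype)
    g_target = g_pred * mask
    # Penalize disagreement between the feature graph and the masked
    # prediction graph; this acts as a semantic regularizer.
    return float(np.mean((g_feat - g_target) ** 2))
```

In training, a loss like this would be added to the task objective at one or more layers; because it uses only similarities already computed from batch outputs, it adds no parameters and no inference cost, consistent with the paper's claims.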
Problem

Research questions and friction points this paper is trying to address.

Improving feature representation by aligning prediction graphs with feature similarity graphs
Addressing noisy inter-class similarities in deep network representations
Enhancing semantic structure through multi-layer graph consistency regularization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Consistency Regularization aligns feature and prediction graphs
Parameter-free Graph Consistency Layers enforce class-aware feature relationships
Multi-layer cross-space graph alignment with adaptive weighting prioritizes reliable layers
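The adaptive-weighting idea above, where layer importance is derived from graph discrepancy magnitudes, could be instantiated as follows. The paper only states that weights are learned from discrepancies to favor reliable layers; the softmax-over-negative-discrepancies rule below is one plausible sketch, not the authors' exact formulation.

```python
import numpy as np

def adaptive_layer_weights(discrepancies, tau=1.0):
    """Map per-layer graph discrepancies to alignment weights (sketch).

    Layers whose feature graphs already agree with the prediction graph
    (small discrepancy) receive larger weight; noisy layers are
    suppressed. `tau` controls how sharply reliable layers dominate.
    """
    d = np.asarray(discrepancies, dtype=float)
    logits = -d / tau
    logits -= logits.max()      # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()          # weights sum to 1
```

A multi-layer GCR objective would then be a weighted sum of the per-layer graph losses, with these weights recomputed as training progresses.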
Xi Ding
Griffith University, Australian National University
Lei Wang
Griffith University, Data61/CSIRO
Piotr Koniusz
Principal Scientist (Data61/CSIRO). Hon./Adj. Associate Professor (level D) (ANU & UNSW).
Computer Vision, Machine Learning, Recognition, Tensor and Kernel Methods, Neural Networks
Yongsheng Gao
Griffith University