H4G: Unlocking Faithful Inference for Zero-Shot Graph Learning in Hyperbolic Space

📅 2025-10-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing zero-shot graph learning methods suffer from limited performance in fine-grained pattern recognition on heterophilic graphs, primarily due to excessive hyperbolic embedding radius—causing "over-abstraction," where multi-scale structural information is collapsed into a single high-level representation and critical local patterns are discarded. This work is the first to identify and address the over-abstraction problem in hyperbolic graph learning. We propose a radius-controllable, multi-scale embedding paradigm: a learnable block-diagonal scaling matrix and Möbius matrix multiplication jointly enable dynamic adjustment of the hyperbolic embedding radius, preserving local details while retaining a global receptive field. Integrated with a text-graph alignment mechanism, our model achieves +12.8% accuracy on heterophilic and +8.4% on homophilic zero-shot graph learning benchmarks—substantially outperforming state-of-the-art methods. These results empirically validate both the effectiveness and the necessity of explicit radius control for modeling multi-scale structural representations in hyperbolic space.

📝 Abstract
Text-attributed graphs are widely used across domains, offering rich opportunities for zero-shot learning via graph-text alignment. However, existing methods struggle with tasks requiring fine-grained pattern recognition, particularly on heterophilic graphs. Through empirical and theoretical analysis, we identify an **over-abstraction problem**: current approaches operate at excessively large hyperbolic radii, compressing multi-scale structural information into uniform high-level abstractions. This abstraction-induced information loss obscures critical local patterns essential for accurate predictions. By analyzing embeddings in hyperbolic space, we demonstrate that optimal graph learning requires **faithful preservation** of fine-grained structural details, better retained by representations positioned closer to the origin. To address this, we propose **H4G**, a framework that systematically reduces embedding radii using learnable block-diagonal scaling matrices and Möbius matrix multiplication. This approach restores access to fine-grained patterns while maintaining global receptive ability with minimal computational overhead. Experiments show H4G achieves state-of-the-art zero-shot performance with **12.8%** improvement on heterophilic graphs and **8.4%** on homophilic graphs, confirming that radius reduction enables faithful multi-scale representation for advancing zero-shot graph learning.
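The "radius" the abstract refers to is the hyperbolic distance of an embedding from the origin. A minimal sketch of this quantity on the Poincaré ball (curvature −1), illustrating why shrinking coordinates toward the origin reduces the radius; the function name and example points are illustrative, not from the paper:

```python
import numpy as np

def poincare_radius(x, eps=1e-9):
    # Hyperbolic distance from the origin on the Poincare ball
    # (curvature -1): d(0, x) = 2 * artanh(||x||).
    norm = np.clip(np.linalg.norm(x), 0.0, 1.0 - eps)
    return 2.0 * np.arctanh(norm)

x = np.array([0.0, 0.99])        # point near the boundary: large radius,
                                 # i.e. a coarse, "over-abstracted" embedding
print(poincare_radius(x))        # large
print(poincare_radius(0.5 * x))  # contracting toward the origin shrinks the radius
```

Points near the ball's boundary have exponentially large radii, which is the geometric picture behind the over-abstraction claim: local distinctions get compressed away as embeddings drift outward.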
Problem

Research questions and friction points this paper is trying to address.

Addresses over-abstraction in hyperbolic graph embeddings for zero-shot learning
Preserves fine-grained structural patterns by reducing hyperbolic radii
Improves performance on heterophilic and homophilic graphs through faithful representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reduces embedding radii via learnable scaling matrices
Uses Möbius multiplication for fine-grained pattern restoration
Maintains global receptive ability with low computational overhead
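The bullets above can be sketched with the standard Möbius matrix-vector multiplication on the Poincaré ball (curvature −1). The block structure and the scale values below are hypothetical stand-ins for the paper's learnable block-diagonal scaling matrix:

```python
import numpy as np

def mobius_matvec(M, x, eps=1e-9):
    # Mobius matrix-vector multiplication on the Poincare ball (curvature -1):
    # M (x) = tanh((||Mx|| / ||x||) * artanh(||x||)) * Mx / ||Mx||
    x_norm = np.clip(np.linalg.norm(x), eps, 1.0 - eps)
    Mx = M @ x
    Mx_norm = np.linalg.norm(Mx)
    if Mx_norm < eps:
        return np.zeros_like(x)
    return np.tanh((Mx_norm / x_norm) * np.arctanh(x_norm)) * Mx / Mx_norm

# Hypothetical block-diagonal scaling: two 2-d blocks with scales s1, s2
# standing in for the learnable parameters described in the paper.
s1, s2 = 0.5, 0.8
S = np.block([
    [s1 * np.eye(2), np.zeros((2, 2))],
    [np.zeros((2, 2)), s2 * np.eye(2)],
])

x = np.array([0.4, 0.3, 0.2, 0.1])  # a point in the 4-d Poincare ball
y = mobius_matvec(S, x)
# Scales below 1 contract the point toward the origin, reducing its
# hyperbolic radius while keeping it inside the ball.
print(np.linalg.norm(y), np.linalg.norm(x))
```

Because the scaling matrix is block-diagonal, each block can contract its subspace by a different amount, which is one way a model could keep some dimensions near the origin (fine-grained detail) while leaving others at larger radii (global structure).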
Heng Zhang — South China Normal University
Tianyi Zhang — Uber Technologies Inc.
Zijun Liu — Tsinghua University
Yuling Shi — Shanghai Jiao Tong University
Yaomin Shen — Nanchang Research Institute, Zhejiang University
Haochen You — Columbia University
Haichuan Hu — Alibaba Cloud
Lubin Gan — University of Science and Technology of China
Jin Huang — South China Normal University