🤖 AI Summary
Existing zero-shot graph learning methods show limited performance in fine-grained pattern recognition, particularly on heterophilic graphs, primarily due to excessive hyperbolic embedding radius: this causes "over-abstraction," where multi-scale structural information is collapsed into a single high-level representation, discarding critical local patterns. This work is the first to identify and address the over-abstraction problem in hyperbolic graph learning. We propose a radius-controllable, multi-scale embedding paradigm: a learnable block-diagonal scaling matrix and Möbius matrix multiplication jointly enable dynamic adjustment of the hyperbolic embedding radius, preserving local details while retaining global receptive fields. Integrated with a text-graph alignment mechanism, our model achieves a +12.8% accuracy improvement on heterophilic and +8.4% on homophilic zero-shot graph learning benchmarks, substantially outperforming state-of-the-art methods. These results empirically validate both the effectiveness and the necessity of explicit radius control for modeling multi-scale structural representations in hyperbolic space.
📝 Abstract
Text-attributed graphs are widely used across domains, offering rich opportunities for zero-shot learning via graph-text alignment. However, existing methods struggle with tasks requiring fine-grained pattern recognition, particularly on heterophilic graphs. Through empirical and theoretical analysis, we identify an **over-abstraction problem**: current approaches operate at excessively large hyperbolic radii, compressing multi-scale structural information into uniform high-level abstractions. This abstraction-induced information loss obscures critical local patterns essential for accurate predictions. By analyzing embeddings in hyperbolic space, we demonstrate that optimal graph learning requires **faithful preservation** of fine-grained structural details, better retained by representations positioned closer to the origin. To address this, we propose **H4G**, a framework that systematically reduces embedding radii using learnable block-diagonal scaling matrices and Möbius matrix multiplication. This approach restores access to fine-grained patterns while maintaining global receptive ability with minimal computational overhead. Experiments show H4G achieves state-of-the-art zero-shot performance with **12.8%** improvement on heterophilic graphs and **8.4%** on homophilic graphs, confirming that radius reduction enables faithful multi-scale representation for advancing zero-shot graph learning.
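To make the radius-reduction mechanism concrete, here is a minimal numpy sketch of the standard Möbius matrix-vector multiplication on the Poincaré ball (as defined by Ganea et al., 2018) applied with a block-diagonal scaling matrix. The function names and the block construction are illustrative assumptions, not the paper's actual implementation; the key property shown is that scaling with factors below 1 moves an embedding closer to the origin, i.e. shrinks its hyperbolic radius.

```python
import numpy as np

def mobius_matvec(M, x, c=1.0, eps=1e-8):
    """Möbius matrix-vector multiplication on the Poincaré ball of curvature -c:
    M ⊗_c x = (1/√c) tanh((‖Mx‖/‖x‖) · artanh(√c ‖x‖)) · Mx/‖Mx‖.
    The hyperbolic radius of x is d_c(0, x) = (2/√c) · artanh(√c ‖x‖),
    so shrinking ‖x‖ shrinks the embedding radius monotonically."""
    sqrt_c = np.sqrt(c)
    x_norm = np.linalg.norm(x)
    Mx = M @ x
    Mx_norm = np.linalg.norm(Mx)
    if x_norm < eps or Mx_norm < eps:
        return np.zeros_like(Mx)  # the origin maps to the origin
    scale = np.tanh((Mx_norm / x_norm) * np.arctanh(sqrt_c * x_norm))
    return scale * Mx / (sqrt_c * Mx_norm)

def block_diag_scale(scales, block_dim):
    """Hypothetical stand-in for the learnable block-diagonal scaling matrix:
    block i is scales[i] * I_{block_dim}, letting each block of coordinates
    (e.g. each structural scale) be rescaled independently."""
    n = block_dim * len(scales)
    M = np.zeros((n, n))
    for i, s in enumerate(scales):
        lo, hi = i * block_dim, (i + 1) * block_dim
        M[lo:hi, lo:hi] = s * np.eye(block_dim)
    return M
```

For example, applying `mobius_matvec(block_diag_scale([0.5, 0.5], 1), x)` to a point `x` inside the unit ball yields a point with strictly smaller Euclidean (and hence hyperbolic) norm, while the identity matrix leaves `x` unchanged; in the full model the per-block scales would be learned rather than fixed.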