GAPNet: Plug-in Jointly Learning Task-Specific Graph for Dynamic Stock Relation

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes GAPNet, a plug-and-play graph adaptation network designed to overcome the limitations of traditional methods that rely on predefined stock relationship graphs and struggle with high noise, asynchronicity, and task misalignment in financial data. GAPNet employs a spatio-temporal aware dual-component architecture to jointly learn task-specific graph structures and node representations in an end-to-end manner, dynamically rewiring edges to capture both short-term co-movements and long-term dependencies among assets. The framework is compatible with existing pairwise graph or hypergraph backbone models and adaptively optimizes topology under distribution shifts, significantly enhancing task alignment and model robustness. Evaluated on two real-world stock datasets, GAPNet achieves annualized returns of 0.47 (with RT-GCN) and 0.63 (with CI-STHPAN), along with peak Sharpe ratios of 2.20 and 2.12, respectively.

📝 Abstract
The advent of the web has led to a paradigm shift in financial relations, with the real-time dissemination of news, social discourse, and financial filings significantly reshaping financial forecasting. Existing methods rely on establishing relations a priori, i.e., predefining graphs to capture inter-stock relationships. However, stock-related web signals are characterised by high levels of noise and asynchrony and are challenging to obtain, resulting in poor generalisability and misalignment between the predefined graphs and downstream tasks. To address this, we propose GAPNet, a Graph Adaptation Plug-in Network that jointly learns task-specific topology and representations in an end-to-end manner. GAPNet attaches to existing pairwise graph or hypergraph backbone models, dynamically adapting and rewiring edge topologies via two complementary components: a Spatial Perception Layer that captures short-term co-movements across assets, and a Temporal Perception Layer that maintains long-term dependencies under distribution shift. Across two real-world stock datasets, GAPNet consistently enhances profitability and stability compared with state-of-the-art models, yielding annualised cumulative returns of up to 0.47 with RT-GCN and 0.63 with CI-STHPAN, with peak Sharpe Ratios of 2.20 and 2.12, respectively. The plug-and-play design of GAPNet ensures broad applicability to diverse GNN-based architectures. Our results underscore that jointly learning graph structures and representations is essential for task-specific relational modeling.
Problem

Research questions and friction points this paper is trying to address.

stock relation
predefined graph
financial forecasting
graph generalisability
task alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

graph adaptation
joint learning
dynamic stock relations
plug-and-play GNN
temporal-spatial perception
👥 Authors
Yingjie Niu — University College Dublin, Dublin, Ireland
Lanxin Lu — University College Dublin, Dublin, Ireland
Changhong Jin — University College Dublin, Dublin, Ireland
Ruihai Dong — University College Dublin, Dublin, Ireland