H3M-SSMoEs: Hypergraph-based Multimodal Learning with LLM Reasoning and Style-Structured Mixture of Experts

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Stock price forecasting faces challenges including complex temporal dependencies, heterogeneous modalities, and dynamically evolving inter-stock relationships, making it difficult for existing methods to jointly model structural, semantic, and market-state information within a scalable framework. To address this, we propose a scalable hypergraph neural network framework: (1) a multi-context hypergraph architecture explicitly capturing both local and global dynamic inter-stock relationships; (2) an LLM-enhanced cross-modal semantic alignment module leveraging frozen large language models with lightweight adapters; and (3) a style-vector-driven sparse mixture-of-experts system enabling market-regime-aware adaptation and industry-specific modeling. Evaluated on three major stock markets, our method significantly outperforms state-of-the-art approaches in prediction accuracy and investment returns, while maintaining robust risk control. The code and models are publicly available.

📝 Abstract
Stock movement prediction remains fundamentally challenging due to complex temporal dependencies, heterogeneous modalities, and dynamically evolving inter-stock relationships. Existing approaches often fail to unify structural, semantic, and regime-adaptive modeling within a scalable framework. This work introduces H3M-SSMoEs, a novel Hypergraph-based MultiModal architecture with LLM reasoning and Style-Structured Mixture of Experts, integrating three key innovations: (1) a Multi-Context Multimodal Hypergraph that hierarchically captures fine-grained spatiotemporal dynamics via a Local Context Hypergraph (LCH) and persistent inter-stock dependencies through a Global Context Hypergraph (GCH), employing shared cross-modal hyperedges and a Jensen-Shannon divergence weighting mechanism for adaptive relational learning and cross-modal alignment; (2) an LLM-enhanced reasoning module that leverages a frozen large language model with lightweight adapters to semantically fuse and align quantitative and textual modalities, enriching representations with domain-specific financial knowledge; and (3) a Style-Structured Mixture of Experts (SSMoEs) that combines shared market experts with industry-specialized experts, each parameterized by learnable style vectors that enable regime-aware specialization under sparse activation. Extensive experiments on three major stock markets demonstrate that H3M-SSMoEs surpasses state-of-the-art methods in both predictive accuracy and investment performance while exhibiting effective risk control. Datasets, source code, and model weights are available at our GitHub repository: https://github.com/PeilinTime/H3M-SSMoEs.
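The Jensen-Shannon divergence weighting mentioned in the abstract can be illustrated with a small sketch: a node's membership weight in a hyperedge decays as the divergence between the node's feature distribution and the hyperedge's reference distribution grows. This is an illustrative reconstruction under assumed design choices (the `exp(-JSD)` weighting and the normalization), not the paper's actual implementation.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.
    Symmetric, non-negative, and bounded above by log(2)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def hyperedge_weights(node_dists, edge_dist):
    """Assumed weighting scheme: nodes whose distributions are closer
    to the hyperedge's (lower JSD) get larger, normalized weights."""
    divs = np.array([js_divergence(d, edge_dist) for d in node_dists])
    w = np.exp(-divs)
    return w / w.sum()
```

Because JSD is symmetric and bounded, the resulting weights are well behaved regardless of which distribution is treated as the reference.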
Problem

Research questions and friction points this paper is trying to address.

Predicting stock movements with complex temporal dependencies
Integrating heterogeneous modalities and dynamic inter-stock relationships
Unifying structural, semantic, and regime-adaptive modeling in scalable framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical hypergraph captures spatiotemporal dynamics and dependencies
LLM with adapters aligns quantitative and textual modalities
Mixture of experts combines shared and specialized market modeling
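The shared-plus-specialized expert design above can be sketched as a sparse mixture-of-experts forward pass: a shared market expert is always active, while industry experts are gated by the affinity between the input and their learnable style vectors, with only the top-k activated. All names (`sparse_moe`, `style_vecs`, `top_k`) and the dot-product gating are assumptions for illustration, not the paper's actual routing rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_moe(x, expert_W, style_vecs, shared_idx=0, top_k=2):
    """Illustrative sparse MoE forward pass: the shared expert always
    fires; the top-k specialized experts (by style-vector affinity)
    contribute through a softmax gate."""
    scores = style_vecs @ x                 # affinity of x to each expert's style
    scores[shared_idx] = -np.inf            # exclude shared expert from routing
    top = np.argsort(scores)[-top_k:]       # indices of top-k specialized experts
    active = np.concatenate(([shared_idx], top))
    gate = np.exp(scores[top] - scores[top].max())
    gate = gate / gate.sum()                # softmax over the selected experts
    out = expert_W[shared_idx] @ x          # shared market expert, ungated
    for g, i in zip(gate, top):
        out = out + g * (expert_W[i] @ x)   # gated specialized contributions
    return out, active
```

Sparse activation keeps the per-input compute proportional to `top_k + 1` experts rather than the full expert pool, which is what makes the design scalable across many industries.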
Authors
Peilin Tan, University of California, San Diego, La Jolla, CA, USA
Liang Xie, Wuhan University of Technology
Churan Zhi, University of California, San Diego, La Jolla, CA, USA
Dian Tu, University of California, San Diego, La Jolla, CA, USA
Chuanqi Shi, University of California, San Diego, La Jolla, CA, USA

Topics: Time Series Forecasting · Cross-modal Learning