🤖 AI Summary
Existing language models predominantly employ Euclidean embeddings, which struggle to capture the intrinsic hierarchical structure of language, particularly under complex reasoning tasks. To address this, we propose HiM, the first model integrating selective state-space modeling (Mamba2) with adaptive hyperbolic geometry (Poincaré ball and Lorentz manifold). HiM introduces learnable curvature parameters and a hybrid hyperbolic loss to learn hierarchy-aware, multi-granular linguistic representations that remain robust over long ranges. Methodologically, it combines tangent-space projection, cosine-sine manifold embedding, and hyperbolic distance optimization. Evaluated on four diverse language and biomedical ontology datasets, HiM consistently outperforms Euclidean baselines: HiM-Poincaré enhances fine-grained semantic discrimination, while HiM-Lorentz yields more compact embeddings with superior hierarchical preservation. These results demonstrate that adaptive hyperbolic geometry, synergized with modern state-space architectures, substantially advances structured linguistic representation learning.
📝 Abstract
Selective state-space models have achieved great success in long-sequence modeling. However, their capacity for language representation, especially in complex hierarchical reasoning tasks, remains underexplored. Most large language models rely on flat Euclidean embeddings, limiting their ability to capture latent hierarchies. To address this limitation, we propose Hierarchical Mamba (HiM), integrating the efficient Mamba2 with the exponentially growing, curved nature of hyperbolic geometry to learn hierarchy-aware language embeddings for deeper linguistic understanding. Mamba2-processed sequences are projected to the Poincaré ball (via tangent-based mapping) or the Lorentzian manifold (via cosine- and sine-based mapping) with "learnable" curvature, optimized with a combined hyperbolic loss. HiM facilitates the capture of relational distances across varying hierarchical levels, enabling effective long-range reasoning. This makes it well-suited for tasks like mixed-hop prediction and multi-hop inference in hierarchical classification. We evaluated HiM on four linguistic and medical datasets for mixed-hop prediction and multi-hop inference tasks. Experimental results demonstrate that: 1) Both HiM models effectively capture hierarchical relationships across the four ontological datasets, surpassing Euclidean baselines. 2) HiM-Poincaré captures fine-grained semantic distinctions with higher h-norms, while HiM-Lorentz provides more stable, compact, and hierarchy-preserving embeddings, favoring robustness over detail.
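The two projections described in the abstract can be sketched in plain Python. This is a minimal illustration of the standard exponential maps at the origin, assuming a fixed curvature parameter `c` (in HiM the curvature is learnable, and the Lorentz "cosine and sine" mapping is realized with hyperbolic cosine/sine); function names and conventions here are illustrative, not the paper's implementation:

```python
import math

def exp_map_origin_poincare(v, c=1.0):
    """Tangent-based mapping: exponential map at the origin of the
    Poincare ball with curvature -c. Sends a Euclidean (tangent-space)
    vector into the open ball of radius 1/sqrt(c)."""
    norm = math.sqrt(sum(x * x for x in v)) or 1e-15  # guard zero vector
    scale = math.tanh(math.sqrt(c) * norm) / (math.sqrt(c) * norm)
    return [scale * x for x in v]

def exp_map_origin_lorentz(v, c=1.0):
    """Lorentz lift: time coordinate via cosh, space coordinates via
    sinh, landing on the hyperboloid -x0^2 + ||x_space||^2 = -1/c."""
    norm = math.sqrt(sum(x * x for x in v)) or 1e-15
    sc = math.sqrt(c)
    time = math.cosh(sc * norm) / sc
    space = [math.sinh(sc * norm) * x / (sc * norm) for x in v]
    return [time] + space

def poincare_distance(x, y, c=1.0):
    """Geodesic distance on the Poincare ball with curvature -c;
    the kind of distance a hyperbolic loss would optimize."""
    diff2 = sum((a - b) ** 2 for a, b in zip(x, y))
    nx2 = sum(a * a for a in x)
    ny2 = sum(b * b for b in y)
    arg = 1.0 + 2.0 * c * diff2 / ((1.0 - c * nx2) * (1.0 - c * ny2))
    return math.acosh(arg) / math.sqrt(c)

# Points nearer the ball's boundary sit deeper in the hierarchy:
p = exp_map_origin_poincare([0.5, 0.0])
q = exp_map_origin_poincare([1.5, 0.0])
print(poincare_distance(p, q))  # geodesic distance between the two points
```

The ball keeps embeddings in a bounded region where the norm tracks hierarchical depth (the "h-norms" above), while the Lorentz model's unbounded coordinates tend to be numerically stabler, matching the robustness/detail trade-off the abstract reports.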