Orion-MSP: Multi-Scale Sparse Attention for Tabular In-Context Learning

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address three key limitations of in-context learning for tabular data—single-scale modeling, the quadratic complexity of dense attention, and the constraints imposed by strictly sequential architectures—this paper introduces Orion-MSP, a novel neural architecture enabling multi-scale feature interaction and efficient computation. Its core contributions are: (1) hierarchical multi-scale encoding that explicitly captures dependencies at field-, row-, and table-level granularities; (2) hybrid sparse attention (combining windowed, global, and random patterns), reducing width-wise computational complexity to linear; and (3) a Perceiver-inspired differentiable memory module that facilitates bidirectional information flow across components and enables iterative representation refinement. Orion-MSP achieves significant improvements over TabPFN and TabICL across multiple benchmarks, particularly excelling in high-dimensional settings with superior scalability and long-range dependency modeling. It establishes a scalable, hierarchical paradigm for tabular in-context learning.

📝 Abstract
Tabular data remain the predominant format for real-world applications. Yet, developing effective neural models for tabular data remains challenging due to heterogeneous feature types and complex interactions occurring at multiple scales. Recent advances in tabular in-context learning (ICL), such as TabPFN and TabICL, have achieved state-of-the-art performance comparable to gradient-boosted trees (GBTs) without task-specific fine-tuning. However, current architectures exhibit key limitations: (1) single-scale feature processing that overlooks hierarchical dependencies, (2) dense attention with quadratic scaling in table width, and (3) strictly sequential component processing that prevents iterative representation refinement and cross-component communication. To address these challenges, we introduce Orion-MSP, a tabular ICL architecture featuring three key innovations: (1) multi-scale processing to capture hierarchical feature interactions; (2) block-sparse attention combining windowed, global, and random patterns for scalable efficiency and long-range connectivity; and (3) a Perceiver-style memory enabling safe bidirectional information flow across components. Across diverse benchmarks, Orion-MSP matches or surpasses state-of-the-art performance while scaling effectively to high-dimensional tables, establishing a new standard for efficient tabular in-context learning. The model is publicly available at https://github.com/Lexsi-Labs/Orion-MSP.
Problem

Research questions and friction points this paper is trying to address.

Addressing single-scale feature processing limitations in tabular data
Overcoming quadratic scaling inefficiency in dense attention mechanisms
Enabling bidirectional communication across sequential model components
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-scale processing captures hierarchical feature interactions
Block-sparse attention combines windowed, global, and random patterns
Perceiver-style memory enables bidirectional cross-component information flow
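The block-sparse pattern described above can be illustrated with a small sketch that builds a boolean attention mask over feature columns from the three ingredients the summary names: a local window, a few global tokens, and random long-range links. Parameter names and defaults (`window`, `n_global`, `n_random`) are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def block_sparse_mask(n_cols, window=4, n_global=2, n_random=2, seed=0):
    """Illustrative sketch of a windowed + global + random attention mask
    over feature columns. True = attention allowed. The specific sizes and
    construction here are assumptions for demonstration, not Orion-MSP's."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_cols, n_cols), dtype=bool)
    for i in range(n_cols):
        # local window: each column attends to its nearby columns
        lo, hi = max(0, i - window), min(n_cols, i + window + 1)
        mask[i, lo:hi] = True
    # global tokens: the first n_global columns attend to, and are
    # attended by, every column
    mask[:, :n_global] = True
    mask[:n_global, :] = True
    # random links: a few long-range connections per column
    for i in range(n_cols):
        mask[i, rng.choice(n_cols, size=n_random, replace=False)] = True
    return mask

m = block_sparse_mask(16)
print(f"{m.sum()} allowed pairs out of {m.size}")  # sparser than dense attention
```

Because each column attends to a fixed number of positions (window width + globals + random links) rather than all columns, the per-column cost is constant, which is how such patterns achieve linear rather than quadratic scaling in table width.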