🤖 AI Summary
Modeling dynamic, context-aware relationships in high-dimensional heterogeneous data remains challenging for conventional methods (e.g., PLS) because of complex nonlinearities, multi-scale interactions, and cross-group feature dependencies. To address this, we propose a self-calibrating, adaptive latent attention representation learning framework: first, group-wise projections construct an interpretable latent structure; second, a sample-adaptive kernel attention mechanism models cross-group feature interactions with dynamic, per-sample weights, jointly optimizing local pattern capture and global dependency integration while supporting multi-scale information fusion. Unlike static-weighting or single-scale approaches, the method adapts to contextual shifts. Extensive experiments on multiple real-world high-dimensional heterogeneous datasets show consistent improvements over state-of-the-art baselines, with average improvements of 12.6%–23.4% on predictive metrics (e.g., RMSE, R²), supporting both effectiveness and generalizability.
📝 Abstract
High-dimensional, heterogeneous data with complex feature interactions pose significant challenges for traditional predictive modeling approaches. While Projection to Latent Structures (PLS) remains a popular technique, it struggles to model complex non-linear relationships, especially in multivariate systems with high-dimensional correlation structures. This challenge is compounded by simultaneous interactions across multiple scales, where purely local processing fails to capture cross-group dependencies. Additionally, static feature weighting ignores sample-specific relevance and thus limits adaptability to contextual variations. To address these limitations, we propose a novel architecture centered on an adaptive kernel-based attention mechanism that processes distinct feature groups separately before integration, capturing local patterns while preserving global relationships. Experimental results show substantial improvements over state-of-the-art methods across diverse datasets.
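The two core ingredients described above — group-wise latent projections followed by a sample-adaptive kernel attention that fuses the groups — can be sketched roughly as follows. This is a minimal NumPy illustration under assumed design choices (a shared latent dimension, an RBF kernel against a per-sample mean-latent query, and the names `group_projections` / `kernel_attention`, none of which are specified in the abstract), not the authors' actual implementation.

```python
import numpy as np

def group_projections(X_groups, W_list):
    """Project each feature group into a shared d-dim latent space.

    X_groups: list of G arrays, each (n, p_g); W_list: list of (p_g, d) maps.
    Returns a list of G latent arrays, each (n, d).
    """
    return [X @ W for X, W in zip(X_groups, W_list)]

def kernel_attention(latents, gamma=1.0):
    """Sample-adaptive attention over group latents via an RBF kernel.

    Each sample gets its own weights over the G groups, computed from the
    kernel similarity between each group latent and that sample's mean
    latent (an assumed choice of query). Returns the fused (n, d)
    representation and the (n, G) attention weights.
    """
    Z = np.stack(latents, axis=1)            # (n, G, d)
    q = Z.mean(axis=1, keepdims=True)        # per-sample query, (n, 1, d)
    sq_dist = ((Z - q) ** 2).sum(axis=2)     # squared distances, (n, G)
    logits = -gamma * sq_dist                # RBF kernel in log space
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)        # softmax: weights sum to 1
    fused = (w[:, :, None] * Z).sum(axis=1)  # weighted fusion, (n, d)
    return fused, w

# Usage: two heterogeneous feature groups fused into one representation.
rng = np.random.default_rng(0)
X_groups = [rng.normal(size=(5, 8)), rng.normal(size=(5, 6))]
W_list = [rng.normal(size=(8, 4)), rng.normal(size=(6, 4))]
fused, w = kernel_attention(group_projections(X_groups, W_list), gamma=0.5)
```

Because the weights depend on each sample's own latents, two samples with identical group structure but different feature values receive different fusion weights, which is what distinguishes this from static feature weighting.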