Vision-based Deep Learning Analysis of Unordered Biomedical Tabular Datasets via Optimal Spatial Cartography

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Biomedical tabular data lack inherent spatial structure, limiting their effective utilization by vision-based models. To address this challenge, this work proposes Dynomap, a novel framework that, for the first time, jointly optimizes task-driven feature space topology and prediction objectives directly from unordered tabular data—without relying on prior knowledge or heuristic rules—through an end-to-end differentiable rendering mechanism. Dynomap dynamically constructs structured feature maps compatible with convolutional neural networks (CNNs) and other visual models. Evaluated across multiple biomedical datasets, including liquid biopsy and Parkinson’s speech analysis, Dynomap substantially outperforms existing methods, achieving up to an 18% improvement in cancer subtype classification accuracy while producing interpretable spatial feature patterns.

📝 Abstract
Tabular data are central to biomedical research, from liquid biopsy and bulk and single-cell transcriptomics to electronic health records and phenotypic profiling. Unlike images or sequences, however, tabular datasets lack intrinsic spatial organization: features are treated as unordered dimensions, and their relationships must be inferred implicitly by the model. This limits the ability of vision architectures to exploit local structure and higher-order feature interactions in non-spatial biomedical data. Here we introduce Dynamic Feature Mapping (Dynomap), an end-to-end deep learning framework that learns a task-optimized spatial topology of features directly from data. Dynomap jointly optimizes feature placement and prediction through a fully differentiable rendering mechanism, without relying on heuristics, predefined groupings, or external priors. By transforming high-dimensional tabular vectors into learned feature maps, Dynomap enables vision-based models to operate effectively on unordered biomedical inputs. Across multiple clinical and biological datasets, Dynomap consistently outperformed classical machine learning, modern deep tabular models, and existing vector-to-image approaches. In liquid biopsy data, Dynomap organized clinically relevant gene signatures into coherent spatial patterns and improved multiclass cancer subtype prediction accuracy by up to 18%. In a Parkinson disease voice dataset, it clustered disease-associated acoustic descriptors and improved accuracy by up to 8%. Similar gains and interpretable feature organization were observed in additional biomedical datasets. These results establish Dynomap as a general strategy for bridging tabular and vision-based deep learning and for uncovering structured, clinically relevant patterns in high-dimensional biomedical data.
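The core mechanism in the abstract, learning a per-feature 2D position that is optimized jointly with the prediction head through a differentiable rendering step, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's implementation: it assumes Gaussian splatting as the differentiable renderer (the abstract does not specify the kernel), and the names `DifferentiableFeatureMap`, `grid_size`, and `sigma` are illustrative.

```python
import torch
import torch.nn as nn

class DifferentiableFeatureMap(nn.Module):
    """Render an unordered feature vector onto a 2D grid.

    Each feature owns a learnable (x, y) coordinate; values are splatted
    onto the grid with a Gaussian kernel, which is differentiable with
    respect to the coordinates, so feature topology and the downstream
    prediction objective can be trained end to end.
    """
    def __init__(self, n_features, grid_size=16, sigma=1.0):
        super().__init__()
        # Learnable spatial position for every feature.
        self.coords = nn.Parameter(torch.rand(n_features, 2) * grid_size)
        ys, xs = torch.meshgrid(
            torch.arange(grid_size, dtype=torch.float32),
            torch.arange(grid_size, dtype=torch.float32),
            indexing="ij",
        )
        self.register_buffer("grid", torch.stack([xs, ys], dim=-1))  # (H, W, 2)
        self.sigma = sigma

    def forward(self, x):  # x: (batch, n_features)
        # Squared distance from every grid cell to every feature position.
        d2 = ((self.grid.unsqueeze(2) - self.coords) ** 2).sum(-1)   # (H, W, F)
        kernel = torch.exp(-d2 / (2 * self.sigma ** 2))              # (H, W, F)
        img = torch.einsum("bf,hwf->bhw", x, kernel)                 # (B, H, W)
        return img.unsqueeze(1)                                      # (B, 1, H, W)

# A toy pipeline: tabular vector -> learned feature map -> small CNN classifier.
model = nn.Sequential(
    DifferentiableFeatureMap(n_features=32, grid_size=16),
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 3),  # e.g. three cancer subtypes
)

x = torch.randn(4, 32)       # batch of 4 unordered 32-feature samples
logits = model(x)            # gradients flow to both the CNN and the coordinates
```

Because the rendered image is a smooth function of `self.coords`, a single optimizer step updates the CNN weights and the feature positions together, which is the "jointly optimizes feature placement and prediction" property the abstract describes.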
Problem

Research questions and friction points this paper is trying to address.

tabular data
spatial organization
vision-based deep learning
biomedical data
feature interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Feature Mapping
spatial cartography
tabular data
vision-based deep learning
differentiable rendering