Cross-Platform Evaluation of Reasoning Capabilities in Foundation Models

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the hardware platform dependency that hinders fair evaluation of foundation models’ reasoning capabilities, introducing the first infrastructure-agnostic, cross-platform benchmark. We systematically evaluate 15 state-of-the-art models on 79 reasoning problems spanning eight academic domains—including physics and mathematics—across three heterogeneous environments: a high-performance computing system (MareNostrum 5), a cloud platform (Nebius AI Studio), and an academic cluster (8×H200). Employing a multi-stage experimental design—baseline establishment, platform validation, and extended assessment—we find that training data quality exerts a significantly stronger influence on reasoning performance than parameter count, challenging the “bigger is better” assumption. The benchmark enables reproducible, cross-platform evaluation and longitudinal model tracking, providing empirically grounded guidance for model selection in education, research, and industry applications.

📝 Abstract
This paper presents a comprehensive cross-platform evaluation of reasoning capabilities in contemporary foundation models, establishing an infrastructure-agnostic benchmark across three computational paradigms: an HPC supercomputer (MareNostrum 5), a cloud platform (Nebius AI Studio), and a university cluster (a node with eight H200 GPUs). We evaluate 15 foundation models on 79 problems spanning eight academic domains (Physics, Mathematics, Chemistry, Economics, Biology, Statistics, Calculus, and Optimization) through three experimental phases: (1) Baseline establishment: six models (Mixtral-8x7B, Phi-3, LLaMA 3.1-8B, Gemma-2-9b, Mistral-7B, OLMo-7B) evaluated on 19 problems using MareNostrum 5, establishing the methodology and reference performance; (2) Infrastructure validation: the 19-problem benchmark repeated on the university cluster (seven models, including the Falcon-Mamba state-space architecture) and on Nebius AI Studio (nine state-of-the-art models: Hermes-4 70B/405B, LLaMA 3.1-405B/3.3-70B, Qwen3 30B/235B, DeepSeek-R1, GPT-OSS 20B/120B) to confirm infrastructure-agnostic reproducibility; (3) Extended evaluation: the full 79-problem assessment on both the university cluster and the Nebius platform, probing generalization at scale across architectural diversity. The findings challenge conventional scaling assumptions, establish training data quality as more critical than model size, and provide actionable guidelines for model selection across educational, production, and research contexts. The tri-infrastructure methodology and 79-problem benchmark enable longitudinal tracking of reasoning capabilities as foundation models evolve.
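The phased design described in the abstract amounts to running the same problem set against model endpoints that differ only in where inference happens (HPC deployment, cloud API, or cluster node). The sketch below illustrates what such an infrastructure-agnostic evaluation loop could look like; the names Problem, run_inference, and score_answer are illustrative assumptions for this page, not the paper's published harness.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Problem:
    domain: str      # e.g. "Physics" or "Mathematics"
    prompt: str      # the reasoning problem statement
    reference: str   # reference answer used for grading


def evaluate(
    models: list[str],
    problems: list[Problem],
    run_inference: Callable[[str, str], str],
    score_answer: Callable[[str, str], float],
) -> dict[str, float]:
    """Run every model on every problem and return a mean score per model.

    run_inference(model, prompt) hides the platform: it may call a local
    HPC deployment, a cloud endpoint, or a cluster node, so the same loop
    can be repeated unchanged on each infrastructure.
    """
    results: dict[str, float] = {}
    for model in models:
        scores = [
            score_answer(run_inference(model, p.prompt), p.reference)
            for p in problems
        ]
        results[model] = sum(scores) / len(scores)
    return results
```

Because the loop only depends on the two injected callables, reproducing a run on a different platform reduces to swapping the run_inference backend while keeping the problem set and scoring fixed.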
Problem

Research questions and friction points this paper is trying to address.

Evaluating reasoning capabilities across diverse computational infrastructures
Assessing 15 foundation models across 79 multi-domain academic problems
Determining whether training data quality or parameter count matters more for reasoning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Established an infrastructure-agnostic benchmark across three platforms
Evaluated 15 models across 79 problems in eight domains
Used a three-phase methodology to validate reproducibility and generalization
J. de Curtò
Barcelona Supercomputing Center (BSC), Barcelona, Spain
I. de Zarzà
Luxembourg Institute of Science and Technology (LIST), Esch-sur-Alzette, Luxembourg
Pablo García
Universidad Pontificia Comillas, Madrid, Spain
Jordi Cabot
Head of the Software Engineering RDI Unit at Luxembourg Institute of Science and Technology (LIST)
software engineering, modeling, open source, low-code, AI