COMMET: orders-of-magnitude speed-up in finite element method via batch-vectorized neural constitutive updates

📅 2025-10-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Neural constitutive models (NCMs) face prohibitive computational overhead in large-scale finite element (FE) simulations due to the repeated evaluation of stress and stiffness via automatic differentiation—hindering practical deployment. To address this, we introduce COMMET, an open-source FE framework designed specifically for efficient NCM integration. Its core innovations are: (i) a rearchitected FE solver enabling batched, vectorized constitutive updates; (ii) compute-graph-optimized derivatives that replace conventional automatic differentiation with explicit analytical expressions; and (iii) MPI-based distributed-memory parallelization. Experimental results demonstrate that COMMET achieves over 1000× speedup versus non-vectorized automatic-differentiation implementations while preserving numerical accuracy. It is the first framework to enable high-fidelity FE simulation with NCMs at million-degree-of-freedom scale, establishing a new paradigm for large-scale, high-fidelity mechanical modeling of complex materials.
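The batched-update idea in innovation (i) can be illustrated with a small NumPy sketch (hypothetical code, not COMMET's actual implementation): instead of calling the constitutive model once per quadrature point inside the assembly loop, the solver collects all deformation gradients into one array and evaluates the model on the whole batch at once. The simplified neo-Hookean stress below (with both Lamé parameters set to `mu` for brevity) is only a stand-in for a neural model.

```python
import numpy as np

def stress_loop(Fs, mu=1.0):
    """Loop-based evaluation: one small matrix op per quadrature point.

    Mimics a conventional assembly loop, where per-call overhead dominates
    when the constitutive model is expensive (e.g. a neural network).
    """
    out = np.empty_like(Fs)
    for i, F in enumerate(Fs):
        J = np.linalg.det(F)
        Finv_T = np.linalg.inv(F).T
        # Simplified compressible neo-Hookean 1st Piola-Kirchhoff stress
        out[i] = mu * (F - Finv_T) + mu * np.log(J) * Finv_T
    return out

def stress_batched(Fs, mu=1.0):
    """Batched evaluation: identical math on an (n, 3, 3) array in one call.

    NumPy (like the array backends NCMs run on) applies det/inv across the
    leading batch dimension, so the per-point Python overhead disappears.
    """
    J = np.linalg.det(Fs)                              # shape (n,)
    Finv_T = np.swapaxes(np.linalg.inv(Fs), -1, -2)    # batched inverse-transpose
    return mu * (Fs - Finv_T) + mu * np.log(J)[:, None, None] * Finv_T
```

Both functions compute the same stresses; the batched form simply amortizes call overhead across the batch, which is the effect COMMET's rearchitected assembly exploits at much larger scale.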

📝 Abstract
Constitutive evaluations often dominate the computational cost of finite element (FE) simulations whenever material models are complex. Neural constitutive models (NCMs) offer a highly expressive and flexible framework for modeling complex material behavior in solid mechanics. However, their practical adoption in large-scale FE simulations remains limited due to significant computational costs, especially in repeatedly evaluating stress and stiffness. NCMs thus represent an extreme case: their large computational graphs make stress and stiffness evaluations prohibitively expensive, restricting their use to small-scale problems. In this work, we introduce COMMET, an open-source FE framework whose architecture has been redesigned from the ground up to accelerate high-cost constitutive updates. Our framework features a novel assembly algorithm that supports batched and vectorized constitutive evaluations, compute-graph-optimized derivatives that replace automatic differentiation, and distributed-memory parallelism via MPI. These advances dramatically reduce runtime, with speed-ups exceeding three orders of magnitude relative to traditional non-vectorized automatic-differentiation-based implementations. While we demonstrate these gains primarily for NCMs, the same principles apply broadly wherever for-loop-based assembly or constitutive updates limit performance, establishing a new standard for large-scale, high-fidelity simulations in computational mechanics.
Problem

Research questions and friction points this paper is trying to address.

Accelerating expensive neural constitutive model evaluations in finite element simulations
Overcoming computational bottlenecks in large-scale finite element analysis
Enabling high-fidelity simulations by vectorizing constitutive updates and derivatives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Assembly algorithm supporting batched, vectorized constitutive evaluations
Compute-graph-optimized derivatives that replace automatic differentiation
Distributed-memory parallelism via MPI