Joint Surrogate Learning of Objectives, Constraints, and Sensitivities for Efficient Multi-objective Optimization of Neural Dynamical Systems

📅 2026-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two central challenges in optimizing high-dimensional, heavily constrained neural dynamical systems: the binary feasible/infeasible partition imposed by the constraints and the absence of gradient signals to guide the search. The proposed DMOSOPT framework integrates objectives, constraints, and parameter sensitivities into a single jointly learned surrogate model and constructs a differentiable feasibility boundary via smooth approximation, enabling gradient-driven optimization of objective values and constraint satisfaction simultaneously. The approach substantially reduces the number of simulation evaluations required across scales, from single-cell to population-level network models, and solves highly constrained neural dynamical system optimization problems efficiently at supercomputing scale, supporting multiscale scientific computing.
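To make the joint-surrogate idea concrete, the following is a minimal, hypothetical sketch (not the paper's DMOSOPT code): a small shared network predicts both objective values and constraint margins, a sigmoid serves as the smooth stand-in for the hard feasibility boundary, and differentiating the combined score yields the kind of unified gradient the summary describes. All function names, shapes, and the acquisition form are illustrative assumptions.

```python
# Hypothetical sketch of a jointly learned surrogate with a smooth feasibility
# boundary; not the actual DMOSOPT implementation.
import jax
import jax.numpy as jnp

def init_params(key, dim, hidden=32, n_obj=2, n_con=3):
    """Random weights for a shared trunk with separate objective and constraint heads."""
    k1, k2, k3 = jax.random.split(key, 3)
    return {
        "W": jax.random.normal(k1, (dim, hidden)) * 0.1,
        "b": jnp.zeros(hidden),
        "W_obj": jax.random.normal(k2, (hidden, n_obj)) * 0.1,
        "W_con": jax.random.normal(k3, (hidden, n_con)) * 0.1,
    }

def joint_surrogate(params, x):
    """Shared representation feeding both objective and constraint-margin predictions."""
    h = jnp.tanh(x @ params["W"] + params["b"])
    objectives = h @ params["W_obj"]   # predicted objective values (to be minimized)
    margins = h @ params["W_con"]      # predicted constraint margins (>= 0 means satisfied)
    return objectives, margins

def acquisition(params, x, weights, sharpness=10.0):
    """Weighted objectives plus a smooth infeasibility penalty from the same model."""
    objectives, margins = joint_surrogate(params, x)
    feasibility = jnp.prod(jax.nn.sigmoid(sharpness * margins))  # smooth approximation of the hard boundary
    return jnp.dot(weights, objectives) + (1.0 - feasibility)

# A single differentiable scalar gives a unified gradient over candidate parameters x.
grad_x = jax.grad(acquisition, argnums=1)

key = jax.random.PRNGKey(0)
params = init_params(key, dim=8)
x = jnp.zeros(8)
direction = grad_x(params, x, jnp.array([0.7, 0.3]))  # descent direction for candidate refinement
```

Because the objective and constraint heads share one representation, a single gradient step pushes candidates toward both better objective values and greater constraint satisfaction, which is the behavior the summary attributes to the joint surrogate.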

📝 Abstract
Biophysical neural system simulations are among the most computationally demanding scientific applications, and their optimization requires navigating high-dimensional parameter spaces under numerous constraints that impose a binary feasible/infeasible partition with no gradient signal to guide the search. Here, we introduce DMOSOPT, a scalable optimization framework that leverages a unified, jointly learned surrogate model to capture the interplay between objectives, constraints, and parameter sensitivities. By learning a smooth approximation of both the objective landscape and the feasibility boundary, the joint surrogate provides a unified gradient that simultaneously steers the search toward improved objective values and greater constraint satisfaction, while its partial derivatives yield per-parameter sensitivity estimates that enable more targeted exploration. We validate the framework from single-cell dynamics to population-level network activity, spanning incremental stages of a neural circuit modeling workflow, and demonstrate efficient, effective optimization of highly constrained problems at supercomputing scale with substantially fewer problem evaluations. While motivated by and demonstrated in the context of computational neuroscience, the framework is general and applicable to constrained multi-objective optimization problems across scientific and engineering domains.
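The abstract also notes that the surrogate's partial derivatives provide per-parameter sensitivity estimates for more targeted exploration. Below is a minimal sketch, under the assumption of a generic vector-valued surrogate predictor, of how such sensitivities could be read off the Jacobian and used to bias candidate proposals; the helper names and toy predictor are hypothetical.

```python
# Hypothetical sketch: per-parameter sensitivities from a surrogate's Jacobian,
# used to perturb influential parameters more aggressively during exploration.
import jax
import jax.numpy as jnp

def sensitivities(predict_fn, x):
    """Mean absolute partial derivative of every surrogate output w.r.t. each parameter."""
    jac = jax.jacobian(predict_fn)(x)        # shape (n_outputs, n_params)
    return jnp.mean(jnp.abs(jac), axis=0)    # one sensitivity score per parameter

def sensitivity_scaled_proposal(key, x, sens, scale=0.05):
    """Scale random perturbations by each parameter's estimated influence."""
    step = scale * sens / (jnp.max(sens) + 1e-12)
    return x + step * jax.random.normal(key, x.shape)

# Toy vector-valued predictor standing in for the trained surrogate.
predict = lambda x: jnp.array([jnp.sum(x ** 2), jnp.sin(x[0]) - x[1]])
x = jnp.ones(4)
sens = sensitivities(predict, x)
candidate = sensitivity_scaled_proposal(jax.random.PRNGKey(1), x, sens)
```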
Problem

Research questions and friction points this paper is trying to address.

multi-objective optimization
constrained optimization
neural dynamical systems
high-dimensional parameter spaces
feasibility boundary
Innovation

Methods, ideas, or system contributions that make the work stand out.

joint surrogate learning
multi-objective optimization
constraint handling
parameter sensitivity
neural dynamical systems
Frithjof Gressmann
Siebel School of Computing and Data Science, University of Illinois Urbana-Champaign, Urbana, IL; The Grainger College of Engineering, University of Illinois Urbana-Champaign, Urbana, IL
Ivan Georgiev Raikov
Department of Neurosurgery, Stanford University, Stanford, CA
Seung Hyun Kim
Carl R. Woese Institute for Genomic Biology, University of Illinois Urbana-Champaign, Urbana, IL; Mechanical Science and Engineering, University of Illinois Urbana-Champaign, Urbana, IL; National Center for Supercomputing Applications, University of Illinois Urbana-Champaign, Urbana, IL; The Grainger College of Engineering, University of Illinois Urbana-Champaign, Urbana, IL
Mattia Gazzola
University of Illinois Urbana-Champaign
biolocomotion, numerics, biohybrid systems, fluidics, bio-mechanics
Lawrence Rauchwerger
Professor of Computer Science, University of Illinois at Urbana-Champaign
parallel computing, compilers, parallel computers
Ivan Soltesz
Professor, Stanford University
neuroscience