Learning Neural Antiderivatives

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the absence of continuous-domain, higher-order cumulative integration in neural fields. We propose learning neural representations directly from a function to approximate its antiderivatives of arbitrary order, enabling grid-free continuous integration. To this end, we design a novel neural architecture that embeds classical cumulative operators into continuous neural systems, supports multidimensional inputs and learnable, adjustable integration orders, and unifies differential and integral operators within a single framework. To our knowledge, this is the first approach to achieve learnable, differentiable, and generalizable higher-order antiderivative representations in neural fields. Experiments demonstrate significant improvements in accuracy and generalization across neural radiance field reconstruction, antialiasing filtering, and volume rendering. Our method provides continuous neural representations whose core functionality is analogous to discrete summed-area tables: robust, parameter-efficient, and mathematically grounded integration over continuous domains.
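To make "grid-free continuous integration" concrete, here is a minimal sketch, not the paper's architecture: an MLP F_theta is fit so that its input derivative matches a target integrand f, after which any definite integral of f reduces to two network evaluations. All names (init_params, mlp, train_step), the example integrand, and the derivative-supervision loss are illustrative assumptions.

import jax
import jax.numpy as jnp

def f(x):                      # example target integrand (a damped cosine)
    return jnp.cos(3.0 * x) * jnp.exp(-x)

def init_params(key, sizes=(1, 64, 64, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):            # scalar-in, scalar-out network F_theta
    h = jnp.atleast_1d(x)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def loss(params, xs):          # match dF/dx to f at random sample points
    dF = jax.vmap(jax.grad(lambda x: mlp(params, x)))(xs)
    return jnp.mean((dF - f(xs)) ** 2)

@jax.jit
def train_step(params, xs, lr=1e-2):
    grads = jax.grad(loss)(params, xs)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = init_params(key)
for step in range(2000):
    key, sub = jax.random.split(key)
    xs = jax.random.uniform(sub, (128,), minval=0.0, maxval=2.0)
    params = train_step(params, xs)

# Grid-free definite integration: F(b) - F(a) approximates the
# integral of f over [a, b], with no quadrature grid at query time.
print(mlp(params, 1.5) - mlp(params, 0.5))

Evaluating mlp(params, b) - mlp(params, a) needs no discretization of the domain, which is the property the paper extends to higher integration orders and multidimensional inputs.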

📝 Abstract
Neural fields offer continuous, learnable representations that extend beyond traditional discrete formats in visual computing. We study the problem of learning neural representations of repeated antiderivatives directly from a function, a continuous analogue of summed-area tables. Although widely used in discrete domains, such cumulative schemes rely on grids, which prevents their applicability in continuous neural contexts. We introduce and analyze a range of neural methods for repeated integration, including both adaptations of prior work and novel designs. Our evaluation spans multiple input dimensionalities and integration orders, assessing both reconstruction quality and performance in downstream tasks such as filtering and rendering. These results enable integrating classical cumulative operators into modern neural systems and offer insights into learning tasks involving differential and integral operators.
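For background (standard calculus, not a result of this paper), the "repeated antiderivatives" above can be written in closed form via Cauchy's formula for repeated integration, which collapses the n-fold integral of f into a single weighted one:

    (I^n f)(x) = \frac{1}{(n-1)!} \int_a^x (x - t)^{n-1} f(t) \, dt

A neural field approximating I^n f can therefore answer order-n integral queries from a few point evaluations, mirroring how a summed-area table answers box sums from a few lookups.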
Problem

Research questions and friction points this paper is trying to address.

Learning neural representations of repeated antiderivatives from functions
Enabling cumulative integration schemes for continuous neural fields
Integrating classical cumulative operators into modern neural systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning neural representations of repeated antiderivatives
Introducing novel neural methods for repeated integration
Enabling integration of classical cumulative operators into neural systems
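For context, a minimal sketch of the discrete cumulative operator these bullets refer to: the classical 2D summed-area table, which turns any axis-aligned box sum into four lookups. This is textbook material, not the paper's method, and the function names are illustrative.

import jax.numpy as jnp

def summed_area_table(img):
    # S[i, j] = sum of img[:i, :j]; a zero row/column is padded in so
    # that empty prefixes are represented explicitly.
    return jnp.pad(jnp.cumsum(jnp.cumsum(img, axis=0), axis=1),
                   ((1, 0), (1, 0)))

def box_sum(S, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] via the inclusion-exclusion corner rule.
    return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]

img = jnp.arange(16.0).reshape(4, 4)
S = summed_area_table(img)
print(box_sum(S, 1, 1, 3, 3))   # equals img[1:3, 1:3].sum() == 30.0

The paper's contribution is the continuous, grid-free counterpart: a neural field whose point evaluations play the role of the four table lookups above.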
Fizza Rubab
Max-Planck-Institut für Informatik
Ntumba Elie Nsampi
Max-Planck-Institut für Informatik
Martin Balint
Max-Planck-Institut für Informatik
Felix Mujkanovic
Max-Planck-Institut für Informatik
Hans-Peter Seidel
Professor of Computer Science, Max Planck Institute for Informatics, Saarland Informatics Campus
Computer Graphics, Geometric Modeling, Geometry Processing, Rendering, Video Processing
Tobias Ritschel
Professor, University College London
Computer Graphics
Thomas Leimkühler
Max-Planck-Institut für Informatik