A Tensor-Based Compiler and a Runtime for Neuron-Level DNN Certifier Specifications

📅 2025-07-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing abstract interpretation-based DNN certifiers suffer from a semantic gap between design (neuron-level abstractions) and implementation (tensor-level operations), resulting in high development overhead and poor scalability. To bridge this gap, we propose a compiler framework tailored for certification: it introduces a stack-based intermediate representation (IR) and a shape analysis that automatically lift neuron-level semantics to tensor operations; it adds g-BCSR, a double-compression sparse format matched to the sparsity patterns that DNN architectures induce; and it expresses domain-specific optimizations as rewrite rules over the IR, enabling efficient, correct layer-level computation. The framework significantly lowers the barrier to developing new certifiers, achieving performance on par with hand-optimized implementations while supporting flexible, scalable verification across diverse DNN architectures.
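
To make the gap concrete: a certifier's semantics are specified per neuron, but an efficient implementation must execute whole-layer tensor operations. Below is a minimal NumPy sketch (illustrative only, not taken from the paper) contrasting the two views for the interval-domain ReLU transformer; the compiler's job is to derive the second form from the first automatically.

```python
import numpy as np

# Neuron-level view: certifier semantics written for a single neuron.
# Interval-domain ReLU transformer: [l, u] -> [max(l, 0), max(u, 0)].
def relu_interval_neuron(l, u):
    return max(l, 0.0), max(u, 0.0)

# Tensor-level view: the same semantics lifted to a whole layer,
# executed as vectorized tensor operations.
def relu_interval_layer(lower, upper):
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

lower = np.array([-1.0, 0.5, -0.2])
upper = np.array([0.3, 2.0, 0.1])
print(relu_interval_layer(lower, upper))
# (array([0. , 0.5, 0. ]), array([0.3, 2. , 0.1]))
```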

📝 Abstract
The uninterpretability of DNNs has led to the adoption of abstract interpretation-based certification as a practical means to establish trust in real-world systems that rely on DNNs. However, the current landscape supports only a limited set of certifiers, and developing new ones or modifying existing ones for different applications remains difficult. This is because the mathematical design of certifiers is expressed at the neuron level, while their implementations are optimized and executed at the tensor level. This mismatch creates a semantic gap between design and implementation, making manual bridging both complex and expertise-intensive, requiring deep knowledge of formal methods, high-performance computing, and more. We propose a compiler framework that automatically translates neuron-level specifications of DNN certifiers into tensor-based, layer-level implementations. This is enabled by two key innovations: a novel stack-based intermediate representation (IR) and a shape analysis that infers the implicit tensor operations needed to simulate the neuron-level semantics. During lifting, the shape analysis creates tensors in the minimal shape required to perform the corresponding operations. The IR also enables domain-specific optimizations as rewrites. At runtime, the resulting tensor computations exhibit sparsity tied to the DNN architecture. This sparsity does not align well with existing formats. To address this, we introduce g-BCSR, a double-compression format that represents tensors as collections of blocks of varying sizes, each possibly internally sparse. Using our compiler and g-BCSR, we make it easy to develop new certifiers and analyze their utility across diverse DNNs. Despite its flexibility, the compiler achieves performance comparable to hand-optimized implementations.
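
The abstract only names g-BCSR as a double-compression format: an outer level records variable-size blocks and their positions, and each block is itself stored sparsely. As a rough mental model (the class name, fields, and API below are assumptions for illustration, not the paper's actual design), a two-level store might look like:

```python
import numpy as np
from scipy.sparse import csr_matrix

class GBCSRSketch:
    """Illustrative two-level sparse store: variable-size blocks,
    each kept internally sparse (CSR). Hypothetical, not the paper's API."""

    def __init__(self, shape):
        self.shape = shape
        self.blocks = []  # (row_offset, col_offset, sparse block)

    def add_block(self, r0, c0, dense_block):
        # Outer compression: record only blocks that contain data.
        # Inner compression: store each block's nonzeros in CSR form.
        self.blocks.append((r0, c0, csr_matrix(dense_block)))

    def to_dense(self):
        out = np.zeros(self.shape)
        for r0, c0, blk in self.blocks:
            out[r0:r0 + blk.shape[0], c0:c0 + blk.shape[1]] += blk.toarray()
        return out

m = GBCSRSketch((4, 6))
m.add_block(0, 0, np.array([[1.0, 0.0], [0.0, 2.0]]))             # 2x2 block
m.add_block(2, 2, np.array([[0.0, 3.0, 0.0], [4.0, 0.0, 0.0]]))   # 2x3 block
print(m.to_dense())
```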
Problem

Research questions and friction points this paper is trying to address.

Bridging neuron-level DNN certifier design to tensor-level implementation
Automating translation of certifier specs to optimized tensor code
Enabling efficient sparse tensor computation for DNN certification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compiler automatically translates neuron-level certifier specifications into tensor-level implementations
Stack-based IR and shape analysis enable the lifting, with optimizations expressed as IR rewrites (toy sketch below)
g-BCSR double-compression format handles the resulting sparse tensor computations
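
The paper expresses its optimizations as rewrites over the stack-based IR; the specific rules are not listed here. The toy Python sketch below (a list-of-ops IR and an affine-fusion rule, both invented for illustration) shows the general shape of such a rewrite: two consecutive affine operations y = B(Ax + a) + b collapse into a single one with matrix BA and offset Ba + b.

```python
import numpy as np

# Toy "IR": a list of (op_name, params) tuples. Invented for illustration;
# the paper's IR is stack-based and richer than this.
def fuse_affine(ops):
    """Rewrite rule: affine(A, a) ; affine(B, b)  =>  affine(B @ A, B @ a + b)."""
    out, i = [], 0
    while i < len(ops):
        if i + 1 < len(ops) and ops[i][0] == "affine" and ops[i + 1][0] == "affine":
            (A, a), (B, b) = ops[i][1], ops[i + 1][1]
            out.append(("affine", (B @ A, B @ a + b)))  # single fused op
            i += 2
        else:
            out.append(ops[i])
            i += 1
    return out

A, a = 2.0 * np.eye(2), np.ones(2)
B, b = 3.0 * np.eye(2), np.zeros(2)
prog = [("affine", (A, a)), ("affine", (B, b)), ("relu", None)]
print([name for name, _ in fuse_affine(prog)])  # ['affine', 'relu']
```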
👥 Authors

Avaljot Singh
University of Illinois Urbana-Champaign, USA (Computer Science)
Yamin Chandini Sarita
University of Illinois Urbana-Champaign, USA
Aditya Mishra
University of Illinois Urbana-Champaign, USA
Ishaan Goyal
University of Illinois Urbana-Champaign, USA
Gagandeep Singh
University of Illinois Urbana-Champaign, USA
Charith Mendis
University of Illinois Urbana-Champaign, USA
Compilers · Machine Learning · Program Analysis · Verification