🤖 AI Summary
Problem: Distributed frameworks such as Spark and Flink face challenges in regulated environments that require FIPS/IL4 compliance for medium-scale, low-latency inference on structured data, including complex coordination, poor auditability, and high operational cost.
Method: We propose the first FIPS/IL4-compliant serverless GPU analytics architecture, integrating a single-node serverless GPU runtime with an interpretable TabNet model. We introduce a feature-masking explanation mechanism to ensure auditable, traceable decision logic and package the system via Helm for seamless integration into Big Data as a Service platforms.
Contribution/Results: Our architecture enables real-time inference on the HR, Adult, and BLS datasets. Relative to Spark baselines, it achieves 4.5× higher throughput, reduces p99 latency to 22 ms (a 98× improvement), and cuts per-thousand-inference cost by 90%. Compliance overhead adds only 5.7 ms of latency, and feature-mask explanations remain stable across inputs.
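The feature-mask explanation mechanism and its stability check can be illustrated with a minimal sketch. The shapes and function names below are illustrative assumptions, not the paper's implementation: TabNet-style models emit one sparse attention mask per decision step, which can be aggregated into a per-sample attribution vector and compared across runs to gauge explanation stability.

```python
import numpy as np

def aggregate_feature_masks(masks, eps=1e-12):
    """Aggregate per-step TabNet-style feature masks into one
    normalized attribution vector per sample.

    masks: array of shape (n_steps, n_samples, n_features),
           non-negative sparse attention weights per decision step.
    Returns: (n_samples, n_features) attributions summing to 1 per row.
    """
    agg = masks.sum(axis=0)                          # sum over decision steps
    return agg / (agg.sum(axis=1, keepdims=True) + eps)

def mask_stability(attr_a, attr_b):
    """Row-wise cosine similarity between two attribution matrices,
    a simple proxy for explanation stability across runs or load levels."""
    num = (attr_a * attr_b).sum(axis=1)
    den = np.linalg.norm(attr_a, axis=1) * np.linalg.norm(attr_b, axis=1)
    return num / np.maximum(den, 1e-12)
```

Attributions from two identical inputs yield a similarity of 1.0; a drop under peak load would flag unstable, non-auditable explanations.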
📝 Abstract
Industrial and government organizations increasingly depend on data-driven analytics for workforce, finance, and regulated decision processes, where timeliness, cost efficiency, and compliance are critical. Distributed frameworks such as Spark and Flink remain effective for massive-scale batch or streaming analytics but introduce coordination complexity and auditing overheads that are poorly matched to moderate-scale, latency-sensitive inference. Meanwhile, cloud providers now offer serverless GPUs, and models such as TabNet enable interpretable tabular ML, motivating new deployment blueprints for regulated environments. In this paper, we present a production-oriented Big Data as a Service (BDaaS) blueprint that integrates a single-node serverless GPU runtime with TabNet. The design leverages GPU acceleration for throughput, serverless elasticity for cost reduction, and feature-mask interpretability for IL4/FIPS compliance. We conduct benchmarks on the HR, Adult, and BLS datasets, comparing our approach against Spark and CPU baselines. Our results show that GPU pipelines achieve up to 4.5× higher throughput, 98× lower latency, and 90% lower cost per 1K inferences than Spark baselines, while compliance mechanisms add only ~5.7 ms latency with p99 < 22 ms. Interpretability remains stable under peak load, ensuring reliable auditability. Taken together, these findings provide a compliance-aware benchmark, a reproducible Helm-packaged blueprint, and a decision framework that demonstrate the practicality of secure, interpretable, and cost-efficient serverless GPU analytics for regulated enterprise and government settings.
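The cost-per-1K-inferences metric used in the comparison reduces to simple arithmetic over sustained throughput and hourly instance price. The sketch below uses hypothetical prices and throughputs chosen only to illustrate the calculation, not the paper's measured values:

```python
def cost_per_1k_inferences(price_per_hour, throughput_per_sec):
    """Cost of serving 1,000 inferences at a given sustained throughput.

    price_per_hour: instance price in $/h (hypothetical).
    throughput_per_sec: sustained inferences per second.
    """
    seconds_per_1k = 1000.0 / throughput_per_sec
    return price_per_hour * seconds_per_1k / 3600.0

# Illustrative only: a $0.80/h Spark node at 500 inf/s versus a
# $1.20/h serverless GPU sustaining 4.5x that throughput (2,250 inf/s).
spark_cost = cost_per_1k_inferences(0.80, 500)
gpu_cost = cost_per_1k_inferences(1.20, 2250)
savings = 1.0 - gpu_cost / spark_cost   # fraction saved per 1K inferences
```

With serverless billing, idle time costs nothing, so the effective savings over a bursty workload exceed this steady-state figure; that gap is what the blueprint's 90% cost reduction reflects.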