Serverless GPU Architecture for Enterprise HR Analytics: A Production-Scale BDaaS Implementation

📅 2025-10-22
🤖 AI Summary
Distributed frameworks (e.g., Spark/Flink) face challenges—including complex coordination, poor auditability, and high operational cost—in regulated environments requiring FIPS/IL4 compliance for medium-scale, low-latency inference on structured data. Method: We propose the first FIPS/IL4-compliant serverless GPU analytics architecture, integrating a single-node serverless GPU runtime with an interpretable TabNet model. We introduce a feature-masking explanation mechanism to ensure auditable, traceable decision logic and package the system via Helm for seamless integration into Big Data as a Service platforms. Contribution/Results: Our architecture enables real-time inference on HR, Adult, and BLS datasets. Relative to Spark baselines, it achieves 4.5× higher throughput, reduces p99 latency to 22 ms (a 98× improvement), and cuts per-thousand-inference cost by 90%. Compliance overhead adds only 5.7 ms of latency, and feature-mask explanations remain stable across inputs.
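The throughput and p99 figures above are standard serving metrics and can be collected with a simple harness; a minimal sketch, where `infer` is a hypothetical stand-in for the actual serverless GPU TabNet endpoint (not the paper's code):

```python
import time

def infer(batch):
    # Hypothetical stand-in for the real TabNet inference endpoint.
    time.sleep(0.001)  # simulate ~1 ms of model work per batch
    return [0] * len(batch)

def benchmark(n_requests=100, batch_size=32):
    """Measure throughput (rows/s) and p50/p99 latency (ms) for `infer`."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        infer([None] * batch_size)
        latencies.append((time.perf_counter() - t0) * 1000.0)
    elapsed = time.perf_counter() - start
    latencies.sort()
    p50 = latencies[len(latencies) // 2]
    p99 = latencies[min(len(latencies) - 1, int(len(latencies) * 0.99))]
    throughput = n_requests * batch_size / elapsed
    return throughput, p50, p99
```

Swapping the stub for a real endpoint call yields directly comparable rows-per-second and tail-latency numbers.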

📝 Abstract
Industrial and government organizations increasingly depend on data-driven analytics for workforce, finance, and regulated decision processes, where timeliness, cost efficiency, and compliance are critical. Distributed frameworks such as Spark and Flink remain effective for massive-scale batch or streaming analytics but introduce coordination complexity and auditing overheads that misalign with moderate-scale, latency-sensitive inference. Meanwhile, cloud providers now offer serverless GPUs, and models such as TabNet enable interpretable tabular ML, motivating new deployment blueprints for regulated environments. In this paper, we present a production-oriented Big Data as a Service (BDaaS) blueprint that integrates a single-node serverless GPU runtime with TabNet. The design leverages GPU acceleration for throughput, serverless elasticity for cost reduction, and feature-mask interpretability for IL4/FIPS compliance. We conduct benchmarks on the HR, Adult, and BLS datasets, comparing our approach against Spark and CPU baselines. Our results show that GPU pipelines achieve up to 4.5× higher throughput, 98× lower latency, and 90% lower cost per 1K inferences compared to Spark baselines, while compliance mechanisms add only ~5.7 ms latency with p99 < 22 ms. Interpretability remains stable under peak load, ensuring reliable auditability. Taken together, these findings provide a compliance-aware benchmark, a reproducible Helm-packaged blueprint, and a decision framework that demonstrate the practicality of secure, interpretable, and cost-efficient serverless GPU analytics for regulated enterprise and government settings.
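The "cost per 1K inferences" metric follows from sustained throughput and the billed rate while the runtime is active; a minimal sketch of that arithmetic, with hypothetical placeholder rates (not the paper's billing figures):

```python
def cost_per_1k(rate_per_second: float, throughput_rows_per_s: float) -> float:
    """Cost of processing 1,000 inferences.

    rate_per_second: billed $/s while the runtime is active. Serverless GPUs
    bill only for active execution, whereas an always-on cluster accrues
    cost continuously over the workload window.
    throughput_rows_per_s: sustained inference throughput.
    """
    seconds_per_1k = 1000.0 / throughput_rows_per_s
    return rate_per_second * seconds_per_1k

# Hypothetical rates purely for illustration: a serverless GPU runtime at a
# higher throughput vs. an always-on Spark cluster at a lower one.
gpu_cost = cost_per_1k(rate_per_second=0.0011, throughput_rows_per_s=9000)
spark_cost = cost_per_1k(rate_per_second=0.0025, throughput_rows_per_s=2000)
```

Because the serverless runtime both finishes the 1K rows faster and bills only for that active time, its per-1K cost drops on both factors at once.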
Problem

Research questions and friction points this paper is trying to address.

Optimizing latency-sensitive inference for moderate-scale enterprise analytics
Reducing coordination complexity in regulated data processing environments
Integrating serverless GPU acceleration with compliance requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Serverless GPU runtime for cost-efficient elasticity
TabNet integration for interpretable tabular ML
Feature-mask interpretability enabling compliance requirements
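TabNet's interpretability comes from per-step attention masks that can be aggregated into one per-feature importance vector and logged for audit trails. A minimal NumPy sketch of that aggregation, using synthetic mask values (the real masks come from a trained model, e.g. via pytorch-tabnet's explain output):

```python
import numpy as np

def aggregate_feature_masks(masks, step_scales):
    """Combine per-step TabNet feature masks into one importance vector.

    masks: (n_steps, n_features) attention masks; each row is a
           softmax-like distribution over features for one decision step.
    step_scales: (n_steps,) per-step contribution weights (eta in TabNet).
    Returns a normalized (n_features,) importance vector suitable for
    audit logging alongside each prediction.
    """
    masks = np.asarray(masks, dtype=float)
    step_scales = np.asarray(step_scales, dtype=float)
    importance = (step_scales[:, None] * masks).sum(axis=0)
    return importance / importance.sum()

# Synthetic example: 2 decision steps over 3 features.
imp = aggregate_feature_masks(
    masks=[[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]],
    step_scales=[1.0, 0.5],
)
```

The paper's explanation-stability claim can then be checked by comparing such vectors for perturbed inputs or under load.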
Authors

Guilin Zhang
Department of Engineering Management and Systems Engineering, George Washington University, USA
Wulan Guo
Department of Engineering Management and Systems Engineering, George Washington University, USA
Ziqi Tan
Department of Engineering Management and Systems Engineering, George Washington University, USA
Srinivas Vippagunta
Workday, Inc.
Suchitra Raman
Workday, Inc.
Shreeshankar Chatterjee
Workday, Inc.
Ju Lin
Research Scientist at Meta
Shang Liu
Workday, Inc.
Mary Schladenhauffen
Workday, Inc.
Jeffrey Luo
Workday, Inc.
Hailong Jiang
Computer Science, Youngstown State University