PASTA: A Scalable Framework for Multi-Policy AI Compliance Evaluation

📅 2026-01-16
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the growing challenge that resource-constrained AI practitioners face in navigating increasingly complex, multi-policy compliance requirements. Existing approaches are often costly and limited to single-policy assessments. To overcome these limitations, the authors propose a scalable AI compliance evaluation framework that integrates a unified model card spanning the entire development lifecycle, standardized policy text representations, a low-cost large language model (LLM)-based pairwise evaluation engine, and an interpretable heatmap visualization interface. The framework enables parallel analysis across five major AI policy regimes. Expert evaluations show close alignment with human judgments (ρ ≥ 0.626), with each assessment taking approximately two minutes and costing around $3. A user study (N = 12) further confirms the interpretability and actionability of the system's outputs.

πŸ“ Abstract
AI compliance is becoming increasingly critical as AI systems grow more powerful and pervasive. Yet the rapid expansion of AI policies creates substantial burdens for resource-constrained practitioners lacking policy expertise. Existing approaches typically address one policy at a time, making multi-policy compliance costly. We present PASTA, a scalable compliance tool integrating four innovations: (1) a comprehensive model-card format supporting descriptive inputs across development stages; (2) a policy normalization scheme; (3) an efficient LLM-powered pairwise evaluation engine with cost-saving strategies; and (4) an interface delivering interpretable evaluations via compliance heatmaps and actionable recommendations. Expert evaluation shows that PASTA's judgments closely align with those of human experts ($\rho \geq .626$). The system evaluates five major policies in under two minutes at a cost of approximately \$3. A user study (N = 12) confirms that practitioners find the outputs easy to understand and actionable, establishing a novel framework for scalable automated AI governance.
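The pairwise evaluation idea described above can be illustrated with a minimal sketch: each (model-card section, policy requirement) pair is scored independently, and the scores form a heatmap-style matrix. This is an assumption-laden toy, not the paper's implementation; the LLM judgment is replaced here by a deterministic keyword-overlap stub, and the section and requirement names are invented for illustration.

```python
# Hypothetical sketch of a pairwise compliance-evaluation loop in the spirit
# of PASTA. NOT the paper's method: the LLM call is replaced by a keyword-
# overlap stub, and all section/requirement names are illustrative.
from dataclasses import dataclass


@dataclass
class PairScore:
    section: str       # model-card section name
    requirement: str   # policy requirement name
    score: float       # 0.0 = non-compliant, 1.0 = fully compliant


def score_pair_stub(section_text: str, requirement_text: str) -> float:
    """Stand-in for an LLM judgment: fraction of requirement words
    that also appear in the model-card section."""
    req_words = set(requirement_text.lower().split())
    sec_words = set(section_text.lower().split())
    return len(req_words & sec_words) / len(req_words)


def build_heatmap(card: dict, policy: dict) -> list:
    """Score every (section, requirement) pair; the result can be
    arranged into a compliance heatmap."""
    return [
        PairScore(sec, req, score_pair_stub(card[sec], policy[req]))
        for sec in card
        for req in policy
    ]


# Illustrative inputs (invented, not from the paper).
card = {
    "data": "training data collected with documented consent",
    "evaluation": "bias audit performed on demographic slices",
}
policy = {
    "consent": "documented consent for training data",
    "bias": "bias audit on demographic slices",
}

for ps in build_heatmap(card, policy):
    print(f"{ps.section:12s} x {ps.requirement:8s} -> {ps.score:.2f}")
```

Scoring each pair independently is what makes the approach parallelizable across policies; swapping the stub for a real LLM call (with the paper's cost-saving strategies) would preserve the same loop structure.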
Problem

Research questions and friction points this paper is trying to address.

AI compliance
multi-policy evaluation
scalable governance
policy burden
automated compliance
Innovation

Methods, ideas, or system contributions that make the work stand out.

scalable AI compliance
policy normalization
LLM-powered evaluation
model card
automated AI governance
🔎 Similar Papers
No similar papers found.