ASTRA: AI Safety, Trust, & Risk Assessment

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a gap in existing global AI safety frameworks, which overlook India-specific sociotechnical challenges such as caste-based discrimination, linguistic exclusion, and limited rural connectivity. To bridge this gap, the authors propose ASTRA, an AI safety risk database contextualized for India and built through a bottom-up, inductive approach. ASTRA features a domain-agnostic ontology that organizes 37 fine-grained risk categories under two meta-categories, Social Risks and Frontier/Socio-Structural Risks, and introduces a tripartite causal taxonomy that attributes each risk by implementation timing, responsible entity, and intent. Through ontology modeling, causal attribution analysis, and a scalable architecture, the framework is empirically validated in the education and financial lending domains. ASTRA is positioned as an evolving "living" regulatory instrument tailored to India's AI ecosystem, enabling technology-driven, localized risk mitigation strategies.
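
To make the ontology's two-level shape concrete, here is a minimal Python sketch. The dictionary layout, the helper function, and the handful of leaf-class names are illustrative assumptions grounded only in risks mentioned on this page; the paper's actual 37 leaf classes are not reproduced here.

```python
from typing import Optional

# Illustrative fragment of ASTRA's domain-agnostic ontology: two meta-categories
# over fine-grained leaf-level risk classes. The paper defines 37 leaf classes;
# the names below are hypothetical examples drawn from risks named in the summary.
ONTOLOGY = {
    "Social Risks": [
        "caste_based_discrimination",  # bias encoded in skewed training data
        "linguistic_exclusion",        # vernacular speakers left unsupported
        "lending_decision_bias",       # unfair outcomes in financial lending
    ],
    "Frontier/Socio-Structural Risks": [
        "rural_connectivity_failure",  # degraded behavior in low-connectivity zones
        "informal_economy_exclusion",  # systems blind to informal-sector users
    ],
}

def meta_category(leaf: str) -> Optional[str]:
    """Return the meta-category a leaf-level risk class belongs to, if any."""
    for meta, leaves in ONTOLOGY.items():
        if leaf in leaves:
            return meta
    return None

print(meta_category("linguistic_exclusion"))  # -> Social Risks
```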

📝 Abstract
This paper argues that existing global AI safety frameworks exhibit contextual blindness towards India's unique socio-technical landscape. With a population of 1.5 billion and a massive informal economy, India's AI integration faces specific challenges, such as caste-based discrimination, linguistic exclusion of vernacular speakers, and infrastructure failures in low-connectivity rural zones, that are frequently overlooked by Western, market-centric narratives. We introduce ASTRA, an empirically grounded AI Safety Risk Database designed to categorize risks through a bottom-up, inductive process. Unlike general taxonomies, ASTRA defines AI safety risks specifically as hazards stemming from design flaws, such as skewed training sets or a lack of guardrails, that can be mitigated through technical iteration or architectural changes. The framework employs a tripartite causal taxonomy that evaluates risks along three axes: implementation timing (development, deployment, or usage), responsible entity (the system or the user), and intent (unintentional vs. intentional). Central to the research is a domain-agnostic ontology that organizes 37 leaf-level risk classes into two primary meta-categories: Social Risks and Frontier/Socio-Structural Risks. By focusing initial efforts on the Education and Financial Lending sectors, the paper establishes a scalable foundation for a "living" regulatory utility intended to evolve alongside India's expanding AI ecosystem.
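
The tripartite causal taxonomy maps naturally onto three enumerations plus a record type. The sketch below is an assumption-laden illustration, not the paper's schema: the names Stage, Entity, Intent, and RiskRecord are hypothetical, and the example encodes one attribution implied by the abstract, a skewed training set as an unintentional, system-side flaw introduced during development.

```python
from dataclasses import dataclass
from enum import Enum

# The three causal axes described in the abstract.
class Stage(Enum):    # when the risk is introduced
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    USAGE = "usage"

class Entity(Enum):   # which entity is responsible
    SYSTEM = "system"
    USER = "user"

class Intent(Enum):   # nature of the intent
    UNINTENTIONAL = "unintentional"
    INTENTIONAL = "intentional"

@dataclass(frozen=True)
class RiskRecord:
    """One database entry: a leaf-level risk class plus its causal attribution."""
    risk_class: str  # e.g. a leaf from the ontology
    domain: str      # initial sectors: "education" or "financial_lending"
    stage: Stage
    entity: Entity
    intent: Intent

# A skewed training set in a lending model is a design flaw introduced during
# development, attributable to the system rather than the user, and unintentional.
example = RiskRecord(
    risk_class="lending_decision_bias",
    domain="financial_lending",
    stage=Stage.DEVELOPMENT,
    entity=Entity.SYSTEM,
    intent=Intent.UNINTENTIONAL,
)
print(example)
```

Keeping the axes as closed enumerations would make causal-attribution queries (for example, all unintentional development-stage risks in lending) straightforward filters over the database.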
Problem

Research questions and friction points this paper is trying to address.

AI Safety
Contextual Blindness
Socio-Technical Landscape
India
Algorithmic Discrimination
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI Safety
Risk Ontology
Contextual AI Governance
Inductive Risk Taxonomy
Socio-Technical Framework