Logging Requirement for Continuous Auditing of Responsible Machine Learning-based Applications

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Machine learning systems often lack auditability with respect to transparency, fairness, and accountability. Method: This paper introduces an approach that systematically embeds responsible AI metrics, such as bias, explainability, and decision provenance, into logging infrastructure. Unlike conventional operational logs, the proposed framework integrates software engineering logging practices with AI ethics assessment dimensions, yielding a structured log model that enables continuous monitoring, traceable verification, and dynamic compliance checking. Contribution/Results: The authors present this as the first method to tightly couple AI governance metrics with logging infrastructure, bridging the audit gap between model behavior and ethical compliance. Empirical evaluation demonstrates improvements in verifiability during regulatory audits and in stakeholder trust. The approach provides actionable, implementation-ready guidance for developers and toolchain designers seeking to strengthen algorithmic accountability.
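To make the idea of a "structured log model" concrete, the sketch below shows what a log record coupling an ML decision with responsible-AI metadata might look like. All field names (`fairness_gap`, `explanation_ref`, etc.) are illustrative assumptions; the paper's actual schema is not reproduced here.

```python
import json
import logging

def make_rai_log_record(model_id, prediction, fairness_gap, explanation_ref):
    """Build a JSON-serializable log entry that couples one ML decision
    with audit-relevant responsible-AI metadata (hypothetical schema)."""
    return {
        "model_id": model_id,
        "prediction": prediction,
        "metrics": {
            # e.g. a demographic-parity difference between protected groups
            "fairness_gap": fairness_gap,
        },
        # pointer to a stored explanation artifact (e.g. SHAP values),
        # giving each decision traceable provenance
        "explanation_ref": explanation_ref,
    }

logger = logging.getLogger("rai_audit")

def log_decision(record):
    # Emit the record as a single JSON line so downstream audit tooling
    # can parse it without bespoke log-format handling.
    logger.info(json.dumps(record))
```

Emitting one self-describing JSON line per decision is one plausible way to get the "traceable records of system behavior" the abstract describes, since generic log pipelines can ship and index such lines unchanged.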

📝 Abstract
Machine learning (ML) is increasingly applied across industries to automate decision-making, but concerns about ethical and legal compliance remain due to limited transparency, fairness, and accountability. Monitoring through logging, a long-standing practice in traditional software, offers a potential means for auditing ML applications, as logs provide traceable records of system behavior useful for debugging, performance analysis, and continuous auditing. However, current logging practices rarely support systematically auditing models for compliance or accountability. The findings underscore the need for enhanced logging practices and tooling that systematically integrate responsible AI metrics. Such practices would support the development of auditable, transparent, and ethically responsible ML systems, aligning with growing regulatory requirements and societal expectations. By highlighting specific deficiencies and opportunities, this work provides actionable guidance for both practitioners and tool developers seeking to strengthen the accountability and trustworthiness of ML applications.
Problem

Research questions and friction points this paper is trying to address.

Addressing transparency and accountability in ML decision-making systems
Developing logging practices for continuous auditing of responsible AI
Enhancing compliance with ethical and legal requirements in ML
Innovation

Methods, ideas, or system contributions that make the work stand out.

Logging for continuous auditing of ML applications
Systematic integration of responsible AI metrics
Enhanced logging practices for auditable ML systems
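The "continuous auditing" idea above can be sketched as a periodic pass over the JSON-line logs that flags records breaching a compliance threshold. The field names and the threshold value are illustrative assumptions, not the paper's specification.

```python
import json

# Assumed maximum tolerated fairness gap; a real deployment would take
# this from policy or regulatory configuration, not a hard-coded constant.
FAIRNESS_THRESHOLD = 0.1

def audit_log_lines(lines, threshold=FAIRNESS_THRESHOLD):
    """Parse JSON-line log records and return those whose recorded
    fairness gap exceeds the compliance threshold."""
    violations = []
    for line in lines:
        record = json.loads(line)
        gap = record.get("metrics", {}).get("fairness_gap", 0.0)
        if gap > threshold:
            violations.append(record)
    return violations
```

Running such a check on a schedule (or on log ingestion) turns static log files into the kind of dynamic compliance signal the summary describes.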
Patrick Loic Foalem
Department of Computer Engineering and Software Engineering, Polytechnique Montreal, Montreal, QC, Canada
Leuson Da Silva
Postdoctoral Fellow - Polytechnique Montreal
Software Engineering, Generative AI, Empirical Studies, Code Integration
Foutse Khomh
NSERC Arthur B. McDonald Fellow, CRC Tier 1, Canada CIFAR AI Chair, FRQ-IVADO Chair, Full Professor
Software engineering, Machine learning systems engineering, Mining software repositories, Reverse
Heng Li
Department of Computer Engineering and Software Engineering, Polytechnique Montreal, Montreal, QC, Canada
Ettore Merlo
Department of Computer Engineering and Software Engineering, Polytechnique Montreal, Montreal, QC, Canada