🤖 AI Summary
SMEs face critical challenges in adopting AI and large language models (LLMs), including poor technical fit, algorithmic opacity, absence of ethical guidelines, and low user trust. This study proposes a four-pillar trustworthy AI governance framework designed specifically for SME contexts, integrating data governance, algorithmic transparency, human-in-the-loop oversight, and inherently interpretable model architectures to operationalize ethical principles across the AI lifecycle. Its key innovation lies in a lightweight, resource-aware governance pathway tailored to SME constraints, combining risk mitigation with clear accountability mechanisms. The authors argue that the framework enhances AI system safety, auditability, and organizational resilience, establishing a feasible, ethically compliant AI implementation paradigm for SMEs and bridging the gap between theoretical AI ethics and practical deployment in resource-constrained settings.
📝 Abstract
Artificial Intelligence (AI) and Large Language Models (LLMs) are reshaping today's business practices; however, their adoption within small and medium-sized enterprises (SMEs) raises significant technical, ethical, and trust issues. This paper proposes a structured, multi-phased framework designed to embed trust and ethical principles throughout the AI lifecycle, enabling secure and responsible use in SMEs. Organized around four pillars, i.e., Data, Algorithms, Human Oversight, and Model Architecture, the framework bridges theoretical ethical principles with operational practice, enhancing AI capabilities in diverse SME applications. Ultimately, this paper offers a roadmap for responsible AI adoption, framing trust and ethics as catalysts for resilience, competitiveness, and sustainable innovation in SMEs.