🤖 AI Summary
This study addresses the growing practice of “AI washing”, in which firms exaggerate or falsely claim artificial intelligence capabilities to secure short-term gains at the expense of digital credibility and societal trust. Drawing on sociotechnical perspectives and integrating theories from information systems, ethics, and innovation management with insights from greenwashing research and signaling theory, this work offers the first systematic conceptualization of AI washing. It proposes a framework delineating four distinct types of AI washing practice: marketing and branding, inflated technical capability, strategic signaling, and governance washing. The research elucidates the multilevel adverse consequences of such practices for organizations, industries, and society, showing how short-term benefits are often offset by reputational damage, erosion of trust, and misallocation of resources. Finally, it outlines critical directions for future research to advance trustworthy AI development.
📝 Abstract
The rapid evolution of artificial intelligence (AI) systems, tools, and technologies has opened unprecedented opportunities for businesses to innovate, differentiate, and compete. However, growing concerns have emerged about how firms represent their use of AI, particularly AI washing, in which firms exaggerate, misrepresent, or superficially signal their AI capabilities to gain financial and reputational advantages. This paper aims to establish a conceptual foundation for understanding AI washing. To do so, we draw on analogies with greenwashing and on Information Systems (IS) research on ethics, trust, signaling, and digital innovation. We propose a typology of AI washing practices across four primary domains: marketing and branding, technical capability inflation, strategic signaling, and governance-based washing. In addition, we examine their organizational, industry, and societal impacts. Our analysis reveals that while AI washing can yield short-term gains, it also poses severe long-term consequences, including reputational damage, erosion of trust, and misallocation of resources. Finally, we outline current research directions and open questions aimed at mitigating AI washing and strengthening the trust and reliability of legitimate AI systems and technologies.