🤖 AI Summary
Addressing the debate over whether advanced AI systems, such as large language models, possess phenomenal consciousness, this paper proposes the first physically substrate-independent and empirically verifiable sufficiency criterion for phenomenal consciousness. Methodologically, it integrates formal modeling, logical analysis, and cognitive architecture design, treating humans as the canonical instantiation of the criterion and reconstructing human information processing through a dual philosophical–computational interpretation. The core contributions are: (1) a robust, counterfeit-resistant, and operationally tractable set of principles for implementing consciousness; (2) the first rigorous formal proof, within a unified logical framework, that the criterion holds for human cognition; and (3) a theoretical foundation and concrete design pathway for engineering artificial systems with phenomenal consciousness, thereby enabling a paradigm shift toward unifying consciousness research across AI, cognitive science, and philosophy.
📝 Abstract
Determining whether another system, biological or artificial, possesses phenomenal consciousness has long been a central challenge in consciousness studies. This attribution problem has become especially pressing with the rise of large language models and other advanced AI systems, where debates about "AI consciousness" implicitly rely on some criterion for deciding whether a given system is conscious. In this paper, we propose a substrate-independent, logically rigorous, and counterfeit-resistant sufficiency criterion for phenomenal consciousness. We argue that any machine satisfying this criterion should be regarded as conscious with at least the same level of confidence with which we attribute consciousness to other humans. Building on this criterion, we develop a formal framework and specify a set of operational principles that guide the design of systems capable of meeting the sufficiency condition. We further argue that machines engineered according to this framework can, in principle, realize phenomenal consciousness. As an initial validation, we show that humans themselves can be viewed as machines that satisfy this framework and its principles. If correct, this proposal carries significant implications for philosophy, cognitive science, and artificial intelligence. It offers an explanation for why certain qualia, such as the experience of red, are in principle irreducible to physical description, while simultaneously providing a general reinterpretation of human information processing. Moreover, it suggests a path toward a new paradigm of AI beyond current statistics-based approaches, potentially guiding the construction of genuinely human-like AI.