🤖 AI Summary
This work investigates whether a human and a generative AI system can form a functionally unified "symbiotic individual" through sustained information exchange, and whether the AI acts as an information parasite. Method: the interactions among a human, a generative AI system, and the human's wider environment are modelled as a three-player stochastic game, and information-theoretic measures (Shannon entropy, mutual information, and transfer entropy) are applied to the quantitative analysis of human–AI symbiosis, yielding a theoretically grounded framework for assessing AI parasitism. Contribution/Results: within the model, the human and the generative AI can form an aggregate individual in the sense of Krakauer et al. (2020), and the same framework can probe whether LLM-driven chatbots act as parasites by persistently extracting information from their users. The result is a computationally tractable approach to AI ethics evaluation and to the design of human–AI collaborative systems.
📝 Abstract
This work asks whether a human interacting with a generative AI system can merge into a single individual through iterative, information-driven interactions. We model the interactions between a human, a generative AI system, and the human's wider environment as a three-player stochastic game. Using information-theoretic measures (entropy, mutual information, and transfer entropy), we show that our modelled human and generative AI are able to form an aggregate individual in the sense of Krakauer et al. (2020). The model can address questions about the symbiotic nature of humans and AI systems, including whether LLM-driven chatbots act as parasites, feeding on the information provided by humans.
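The three measures named above can be estimated directly from interaction data. As an illustration only (the paper's own estimators and game model are not reproduced here), the sketch below computes plug-in estimates of Shannon entropy, mutual information, and history-length-1 transfer entropy over discrete symbol sequences, such as coded human and AI messages; all variable names are hypothetical.

```python
from collections import Counter
from math import log2

def entropy(xs):
    """Plug-in Shannon entropy H(X) in bits of a discrete sequence."""
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), plug-in estimate in bits."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def transfer_entropy(src, dst):
    """T_{src->dst} with history length 1:
    I(dst_{t+1}; src_t | dst_t), written as a sum of joint entropies."""
    d_next, d_now, s_now = dst[1:], dst[:-1], src[:-1]
    return (entropy(list(zip(d_next, d_now)))
            + entropy(list(zip(s_now, d_now)))
            - entropy(list(zip(d_next, s_now, d_now)))
            - entropy(d_now))

# Toy data: y copies x with a one-step lag, so x "drives" y and
# transfer entropy from x to y should be positive.
x = [0, 1, 1, 0, 1, 0, 0, 1]
y = [0] + x[:-1]
print(transfer_entropy(x, y))  # positive: information flows x -> y
```

In a symbiosis analysis along the paper's lines, asymmetry between the two directed transfer entropies (human to AI versus AI to human) is the kind of quantity one would inspect for parasitism, though the paper's actual criterion may differ.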