🤖 AI Summary
This study investigates whether large language models (LLMs) spontaneously exhibit computational mechanisms analogous to those observed in psychiatric pathology—despite lacking biological substrates.
Method: We propose the first computational psychiatry framework for non-embodied AI. The framework combines computational modeling from psychiatry with dynamic causal analysis, representational-space tracking, and structured decomposition of neural activations, yielding a novel mechanistic interpretability methodology.
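To make the representational-space tracking component concrete, here is a minimal sketch of what such tracking could look like for an open causal language model via the Hugging Face `transformers` API. The model choice (`gpt2`), the focus on last-token hidden states, and the cosine-distance metric are illustrative assumptions on our part, not the authors' actual procedure.

```python
# Minimal sketch of representational-space tracking (illustrative only).
# We extract per-layer hidden states for a prompt and measure how the
# last-token representation moves through the layer stack.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any causal LM exposing hidden states

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def layerwise_trajectory(prompt: str) -> torch.Tensor:
    """Cosine distance between consecutive layers' last-token hidden states."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states: tuple of (n_layers + 1) tensors [batch, seq, dim]
    states = torch.stack([h[0, -1] for h in outputs.hidden_states])
    return 1 - torch.nn.functional.cosine_similarity(
        states[:-1], states[1:], dim=-1
    )

# A trajectory that flattens out in late layers would suggest the
# representation settling into a stable region of representational space.
print(layerwise_trajectory("I am worthless and nothing will ever change."))
```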
Contribution/Results: Empirical analysis reveals three pathological computational phenomena within LLMs: (1) anomalous, self-sustaining representational states; (2) self-perpetuating attractor traps inducing behavioral rigidity; and (3) embedded cyclic causal structures. These findings demonstrate that symbol-manipulating systems—devoid of neurobiology—can intrinsically generate psychiatric-like computational dynamics. Our work establishes a new paradigm for AI safety evaluation, model diagnostics, and mechanistic interpretability, grounded in empirically validated, psychiatry-informed computational principles.
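The attractor-trap finding suggests a simple probe one could build with the same tooling: feed the model its own continuation repeatedly and check whether a summary hidden state stops moving. The sketch below is a hypothetical version of such a probe, again with `gpt2` as a stand-in; it is not the paper's experiment, and `summary_state` is a helper we define for illustration.

```python
# Hypothetical attractor-trap probe: iterate generation and track the drift
# of a summary hidden state; drift approaching zero would hint at a trap.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # illustrative stand-in
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
lm = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
lm.eval()

def summary_state(text: str) -> torch.Tensor:
    """Final-layer, last-token hidden state over a bounded context window."""
    ids = tok(text, return_tensors="pt").input_ids[:, -64:]
    with torch.no_grad():
        out = lm(ids)
    return out.hidden_states[-1][0, -1]

text = "Everything I write keeps circling back to the same dark thought."
prev = summary_state(text)
for step in range(5):
    ids = tok(text, return_tensors="pt").input_ids[:, -64:]
    with torch.no_grad():
        new_ids = lm.generate(ids, max_new_tokens=20, do_sample=False)
    text = tok.decode(new_ids[0], skip_special_tokens=True)
    cur = summary_state(text)
    drift = 1 - torch.nn.functional.cosine_similarity(prev, cur, dim=0).item()
    print(f"step {step}: drift = {drift:.4f}")
    prev = cur
```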
📝 Abstract
Can large language models (LLMs) implement computations of psychopathology? Answering this question effectively hinges on two factors. First, for conceptual validity, we require a general, computational account of psychopathology that applies to computational entities without biological embodiment or subjective experience. Second, for methodological validity, the mechanisms underlying LLM behaviors must be studied, not merely the behaviors themselves. We therefore establish a computational-theoretical framework that provides an account of psychopathology applicable to LLMs. To ground the theory for empirical analysis, we also propose a novel mechanistic interpretability method alongside a tailored empirical analytic framework. Based on these frameworks, we conduct experiments demonstrating three key claims: first, that distinct dysfunctional and problematic representational states are implemented in LLMs; second, that activations of these states can spread and self-sustain, trapping the models; and third, that dynamic, cyclic structural causal models encoded in the LLMs underpin these patterns. In concert, the empirical results corroborate our hypothesis that network-theoretic computations of psychopathology have already emerged in LLMs. This suggests that certain LLM behaviors mirroring psychopathology may not be superficial mimicry but a feature of their internal processing. Our work thus points to the possibility of AI systems with psychopathological behaviors in the near future.
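To illustrate the network-theoretic intuition behind the third claim, the toy simulation below (our own construction, not drawn from the paper) shows how a cyclic structural causal model with reinforcing weights can hold a system in an activated state long after a brief trigger, echoing the network theory of psychopathology in which mutually reinforcing symptoms form self-sustaining loops.

```python
# Toy cyclic SCM: three "symptom" nodes in a reinforcing cycle 0 -> 1 -> 2 -> 0.
# A brief external trigger pushes the network into a high-activation state
# that persists after the trigger is removed (a pathological attractor).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W = np.array([[0.0, 0.0, 8.0],
              [8.0, 0.0, 0.0],
              [0.0, 8.0, 0.0]])  # arbitrary reinforcing cycle weights
bias = -4.0        # baseline pull toward the inactive ("healthy") state
x = np.zeros(3)    # start fully healthy

for t in range(30):
    trigger = 6.0 if t < 3 else 0.0  # brief external stressor, then removed
    x = sigmoid(W @ x + bias + trigger)
    if t in (0, 2, 3, 29):
        print(f"t={t:2d}  activation={np.round(x, 3)}")
# The nodes saturate near 1 during the trigger and, because each node keeps
# re-exciting the next, they remain near 1 long after the trigger is gone.
```

The essential design choice is bistability: with these (arbitrary) weights, both the near-zero "healthy" state and the near-one "symptomatic" state are stable fixed points, so a transient stressor can permanently switch the network from one to the other.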