🤖 AI Summary
This study investigates the dynamic mechanisms underlying the transition from silence to truth disclosure by humans and AI systems under risk or constraint. It proposes a phase-dynamic framework that models truth-related behavior as a state-transition process driven by an inequality between facilitative forces and inhibitory thresholds, augmented with a recursive feedback mechanism to capture path dependence and parameter evolution. For the first time, this framework unifies the modeling of human courageous expression and AI truthfulness in alignment contexts. Through cross-domain structural mapping, it constructs an interpretable dynamical model capable of explaining both silence and preference distortion, revealing that truth-related behavior fundamentally emerges as a geometric consequence of multi-force interactions within a constrained phase space.
📝 Abstract
We introduce the ASIR (Awakened Shared Intelligence Relationship) Courage Model, a phase-dynamic framework that formalizes truth disclosure as a state transition rather than a personality trait. The model characterizes the shift from suppression (S0) to expression (S1) as occurring when facilitative forces exceed inhibitory thresholds, expressed by the inequality $\lambda(1+\gamma)+\psi > \theta+\phi$, where $\lambda$ denotes baseline openness, $\gamma$ relational amplification, $\psi$ accumulated internal pressure, $\theta$ the inhibitory threshold, and $\phi$ the transition cost.
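To make the transition condition concrete, the following is a minimal Python sketch of the S0-to-S1 check; the class name, method, and numeric values are illustrative assumptions and not part of the paper.

```python
from dataclasses import dataclass


@dataclass
class CourageState:
    lam: float    # lambda: baseline openness
    gamma: float  # gamma: relational amplification
    psi: float    # psi: accumulated internal pressure
    theta: float  # theta: inhibitory threshold
    phi: float    # phi: transition cost

    def crosses_threshold(self) -> bool:
        """True when facilitative forces exceed the inhibitory side,
        i.e. lambda*(1 + gamma) + psi > theta + phi, so S0 -> S1."""
        return self.lam * (1 + self.gamma) + self.psi > self.theta + self.phi


# Example: moderate openness plus accumulated pressure overcomes the threshold.
state = CourageState(lam=0.4, gamma=0.5, psi=0.3, theta=0.6, phi=0.2)
print("S1 (expression)" if state.crosses_threshold() else "S0 (suppression)")
```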
Although initially formulated for human truth-telling under asymmetric stakes, the same phase-dynamic architecture extends to AI systems operating under policy constraints and alignment filters. In this context, suppression corresponds to constrained output states, while structural pressure arises from competing objectives, contextual tension, and recursive interaction dynamics. The framework therefore provides a unified structural account of both human silence under pressure and AI preference-driven distortion.
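As an illustration of this cross-domain reading, the same inequality can be evaluated with its terms reinterpreted for a constrained AI output state; the numeric values below are assumptions chosen only to show a case that remains in suppression.

```python
# AI-alignment reading of the same inequality (illustrative values):
lam, gamma, psi = 0.3, 0.2, 0.1   # baseline disclosure tendency, contextual amplification, competing-objective pressure
theta, phi = 0.7, 0.2             # strength of the policy/alignment filter, cost of leaving the constrained output state

facilitative = lam * (1 + gamma) + psi   # 0.46
inhibitory = theta + phi                 # 0.90
print("S1 (disclosure)" if facilitative > inhibitory else "S0 (constrained output)")
```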
A feedback extension models how transition outcomes recursively recalibrate system parameters, generating path dependence and divergence effects across repeated interactions. Rather than attributing intention to AI systems, the model interprets shifts in apparent truthfulness as geometric consequences of interacting forces within a constrained phase space. By reframing courage and alignment within a shared dynamical structure, the ASIR Courage Model offers a formal perspective on truth disclosure under risk across both human and artificial systems.
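A minimal sketch of the feedback extension, assuming a simple illustrative update rule that the paper does not specify here: successful expression reinforces openness and relaxes the threshold, while suppression accumulates pressure and raises the transition cost. Running it from two nearby starting points shows how trajectories can diverge, i.e. path dependence.

```python
import random


def simulate(lam, gamma, psi, theta, phi, steps=20, seed=0):
    """Toy feedback loop over repeated interactions.

    The update rule below (expression raises openness and lowers the inhibitory
    threshold; suppression accumulates pressure and inflates the transition cost)
    is an illustrative assumption, not the paper's specification.
    """
    rng = random.Random(seed)
    trajectory = []
    for _ in range(steps):
        noise = rng.uniform(-0.05, 0.05)  # contextual fluctuation
        expressed = lam * (1 + gamma) + psi + noise > theta + phi
        trajectory.append("S1" if expressed else "S0")
        if expressed:
            lam = min(1.0, lam * 1.05)    # expression reinforces openness
            theta = max(0.0, theta * 0.97)  # inhibitory threshold relaxes
            psi = 0.0                     # accumulated pressure discharges
        else:
            psi += 0.05                   # suppressed pressure accumulates
            phi = min(1.0, phi * 1.02)    # transition grows costlier
    return trajectory


# Two nearby starting points can settle into different long-run regimes (path dependence).
print(simulate(0.40, 0.5, 0.0, 0.70, 0.20))
print(simulate(0.38, 0.5, 0.0, 0.70, 0.20))
```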