🤖 AI Summary
This work addresses the frequent inconsistency between generated content and the provided context in large language models, a problem inadequately mitigated by existing defenses that rely on external verification or post-hoc correction. The authors propose an internal flow-signature mechanism that requires no modification to the base model. By monitoring depthwise dynamics at a fixed inter-block boundary, the method extracts decision-formation features that enable lightweight self-checking and localized correction. Integrating bias-centered monitoring, orthogonal transport alignment, moving readout-aligned subspace trajectory summaries, and a GRU validator, the approach localizes the culprit depth of an erroneous generation. Faithfulness is further improved by rolling back to the culprit token and clamping abnormal transported steps at the identified block. The resulting framework exhibits within-window basis invariance, low computational overhead, and precise error localization.
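The clamping step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: the function name `clamp_step`, the subspace basis `B`, and the threshold `tau` are assumptions for exposition. The idea is to shrink only the component of a block's residual update that lies inside the monitored subspace, while the orthogonal residual passes through unchanged.

```python
import numpy as np

def clamp_step(h_in, h_out, B, tau):
    """Clamp the in-subspace component of a block's update (illustrative sketch).

    h_in, h_out : hidden states before/after the flagged block, shape (d,)
    B           : (d, k) orthonormal basis of the monitored subspace (assumed)
    tau         : maximum allowed in-subspace step length (assumed threshold)
    """
    step = h_out - h_in
    coords = B.T @ step              # component inside the monitored subspace
    residual = step - B @ coords     # orthogonal residual, preserved as-is
    norm = np.linalg.norm(coords)
    if norm > tau:
        coords = coords * (tau / norm)   # shrink the abnormal step to tau
    return h_in + B @ coords + residual
```

Because only the in-subspace coordinates are rescaled, information carried in directions the monitor does not track is left untouched, which matches the abstract's "preserving the orthogonal residual."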
📝 Abstract
Large language models can generate fluent answers that are unfaithful to the provided context, while many safeguards rely on external verification or a separate judge after generation. We introduce \emph{internal flow signatures} that audit decision formation from depthwise dynamics at a fixed inter-block monitoring boundary. The method stabilizes token-wise motion via bias-centered monitoring, then summarizes trajectories in compact \emph{moving} readout-aligned subspaces constructed from the top token and its close competitors within each depth window. Neighboring window frames are aligned by an orthogonal transport, yielding depth-comparable transported step lengths, turning angles, and subspace drift summaries that are invariant to within-window basis choices. A lightweight GRU validator trained on these signatures performs self-checking without modifying the base model. Beyond detection, the validator localizes a culprit depth event and enables a targeted refinement: the model rolls back to the culprit token and clamps an abnormal transported step at the identified block while preserving the orthogonal residual. The resulting pipeline provides actionable localization and low-overhead self-checking from internal decision dynamics. \emph{Code is available at} \texttt{github.com/EavnJeong/Internal-Flow-Signatures-for-Self-Checking-and-Refinement-in-LLMs}.
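The transport-aligned signatures described in the abstract can be sketched with an orthogonal Procrustes alignment between neighboring window bases. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the choice of Procrustes via SVD, and the chordal-distance drift summary are assumptions. The key property it demonstrates is invariance of the features to within-window basis choices.

```python
import numpy as np

def transport(B_prev, B_next):
    """Orthogonal Procrustes map aligning two (d, k) orthonormal window bases:
    R = argmin_{R orthogonal} ||B_prev @ R - B_next||_F, solved via SVD."""
    U, _, Vt = np.linalg.svd(B_prev.T @ B_next)
    return U @ Vt  # (k, k) orthogonal

def signature_features(B_prev, B_next, h0, h1, h2):
    """Transported step length, turning angle cosine, and subspace drift
    for consecutive hidden states h0 -> h1 -> h2 (illustrative sketch)."""
    s1 = B_prev.T @ (h1 - h0)        # step in the previous window frame
    s2 = B_next.T @ (h2 - h1)        # step in the next window frame
    R = transport(B_prev, B_next)
    s1_t = R.T @ s1                  # step transported into the next frame
    step_len = float(np.linalg.norm(s1_t))
    cos_turn = float(s1_t @ s2) / (
        np.linalg.norm(s1_t) * np.linalg.norm(s2) + 1e-12)
    # subspace drift: chordal distance from principal angles between windows
    k = B_prev.shape[1]
    drift = float(np.sqrt(max(0.0, k - np.linalg.norm(B_prev.T @ B_next, 'fro') ** 2)))
    return step_len, cos_turn, drift
```

Replacing `B_next` by `B_next @ Q` for any orthogonal `Q` rotates the transport map and the in-frame coordinates together, so the three summaries are unchanged; this is the basis-invariance property the abstract claims for the depth-comparable features fed to the GRU validator.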