🤖 AI Summary
This work investigates how primary outputs form during the early stages of deep neural network (DNN) inference and the role played by intrinsic biases. To address this, we conduct an empirical analysis using diffusion models, complemented by gradient sensitivity tracking, inter-layer output stability assessment, and ablation studies of bias terms. Our results reveal that over 70% of output semantic structure becomes fixed within the first one-third of inference steps, demonstrating significant temporal anticipation in DNN decision-making. Crucially, we identify a novel paradigm, "bias-driven decision timing," wherein bias terms act as key catalysts for early decisions: removing them delays decision timing substantially and reduces early output consistency by 42%. This study establishes a unified dynamic analytical framework bridging model interpretability, efficient inference, and fairness evaluation, offering new insights into the temporal dynamics of DNN reasoning.
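The stability measurement sketched in the summary (asking at which inference step the output's semantic structure is already fixed) can be illustrated with a toy example. Everything below is an assumption for illustration: `denoise_trajectory` is a stand-in linear "denoiser," not a real diffusion model, and the 0.9 cosine-similarity threshold and `fixation_step` helper are hypothetical, not the paper's actual protocol.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def denoise_trajectory(x0, steps, bias=0.0):
    """Toy inference trajectory: each step pulls the state toward a fixed
    target, plus an additive bias term standing in for the model's bias
    parameters (illustrative only, not a real denoiser)."""
    target = [1.0, -2.0, 0.5, 3.0]  # stand-in for the final semantic content
    traj = [list(x0)]
    x = list(x0)
    for _ in range(steps):
        x = [xi + 0.3 * (ti - xi) + bias for xi, ti in zip(x, target)]
        traj.append(list(x))
    return traj

def fixation_step(traj, thresh=0.9):
    """First step whose state already matches the final output above a
    cosine-similarity threshold, i.e. where the 'decision' is largely fixed."""
    final = traj[-1]
    for t, x in enumerate(traj):
        if cosine(x, final) >= thresh:
            return t
    return len(traj) - 1

traj = denoise_trajectory([2.0, 1.0, -1.0, 0.3], steps=20)
print(fixation_step(traj))  # fixation occurs early in the 20-step trajectory
```

On this toy trajectory the output direction stabilizes within the first third of the steps, mirroring the kind of early-fixation measurement the summary describes; in the paper's setting the same question is asked of a real diffusion model's intermediate outputs.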
📝 Abstract
This paper argues that deep neural networks (DNNs) largely determine their outputs during the early stages of inference, and that biases inherent in the model play a crucial role in shaping this process. We draw a parallel between this phenomenon and human decision-making, which often relies on fast, intuitive heuristics. Using diffusion models (DMs) as a case study, we demonstrate that DNNs often commit to decisions early in inference, in ways shaped by the type and extent of bias in their design and training. Our findings offer a new perspective on bias mitigation, efficient inference, and the interpretation of machine learning systems. By identifying the temporal dynamics of decision-making in DNNs, this paper aims to inspire further discussion and research within the machine learning community.