Entropy and Attention Dynamics in Small Language Models: A Trace-Level Structural Analysis on the TruthfulQA Benchmark

📅 2026-04-04
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the propensity of small language models (1B–1.7B parameters) to generate high-confidence errors and hallucinations under resource-constrained conditions, where the relationship between their internal dynamics and factual accuracy remains unclear. Leveraging the TruthfulQA benchmark, the work conducts a token-level analysis of entropy, attention patterns, and hidden state evolution, proposing for the first time a dynamic classification of models into deterministic, exploratory, and balanced types based on entropy trajectories. It reveals structured associations among these types, output truthfulness, attention distributions, and representational pathways. The findings demonstrate that high factual accuracy arises from orderly entropy and attention dynamics, establishing an interpretable and optimizable paradigm of internal uncertainty for designing low-hallucination, high-reliability edge-deployed small models.
πŸ“ Abstract
Small language models (SLMs) are increasingly deployed on edge devices and in other resource-constrained settings. However, these models make confident mispredictions and produce unstable output, making them risky for factual and decision-critical tasks. Current evaluation methodology relies on final accuracy or hallucination rates without explaining how internal model behavior affects outputs. Specifically, how entropy evolves during decoding, how attention is distributed across layers, and how hidden representations contribute to uncertainty, logical inconsistencies, and misinformation propagation are often overlooked. Consequently, this study introduces a trace-level analysis of entropy and attention dynamics in SLMs evaluated on the TruthfulQA dataset. Four models in the 1B–1.7B parameter range were examined via token-level output entropy, attention entropy, head dispersion, and hidden-state representations. The results reveal three model classes distinguished by their entropy patterns: deterministic models (DeepSeek-1.5B and LLaMA-1B), whose output entropy decreases over time; exploratory models (Gemma-1B), whose entropy increases; and balanced models (Qwen-1.7B), which maintain moderate, stable entropy. Each group also exhibits distinctly different hidden-state trajectories and attention dispersion patterns. The analysis demonstrates that truthfulness in SLMs emerges from structured entropy and attention dynamics. Monitoring and optimizing these internal uncertainty patterns can guide the design of more reliable, hallucination-aware, application-specific edge SLMs.
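The metrics named in the abstract (token-level output entropy, per-head attention entropy, head dispersion) are standard Shannon-entropy quantities over model distributions. The sketch below shows one plausible way to compute them from raw logits and attention weights; it is an illustrative reconstruction, not the authors' code, and `head_dispersion` in particular is only one reasonable proxy (standard deviation of per-head mean attention entropy) since the paper's exact definition is not given here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def token_entropy(logits):
    """Shannon entropy (nats) of the next-token distribution at each decoding step.

    logits: array of shape (steps, vocab_size).
    Returns an array of shape (steps,); a decreasing trace would mark a
    'deterministic' model in the paper's taxonomy, an increasing one 'exploratory'.
    """
    p = softmax(logits, axis=-1)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)  # small eps guards log(0)

def attention_entropy(attn):
    """Entropy of each head's attention distribution over key positions.

    attn: array of shape (heads, query_len, key_len), rows summing to 1.
    Returns shape (heads, query_len): high values mean diffuse attention,
    low values mean sharply focused attention.
    """
    return -(attn * np.log(attn + 1e-12)).sum(axis=-1)

def head_dispersion(attn):
    """Hypothetical dispersion proxy: spread (std) of per-head mean attention entropy."""
    return attention_entropy(attn).mean(axis=-1).std()
```

Fed with per-step logits and attention tensors captured during decoding (e.g. via `output_attentions=True` in Hugging Face Transformers), these traces can be plotted over token positions to reproduce the kind of trajectory analysis the abstract describes.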
Problem

Research questions and friction points this paper is trying to address.

small language models
entropy dynamics
attention mechanisms
TruthfulQA
model uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

entropy dynamics
attention dispersion
trace-level analysis
small language models
TruthfulQA
Adeyemi Adeseye
Brilloconnetz Partners avoin yhtiö, Turku, Finland
Aisvarya Adeseye
University of Turku, Turku, Finland
Hannu Tenhunen
Professor of Electronic System Design
Computer Engineering, Integrated Circuit Design
Jouni Isoaho
University of Turku, Turku, Finland