🤖 AI Summary
This work addresses the scarcity of intuitive visualizations of internal mechanisms in existing large language model (LLM) educational materials. To bridge this gap, we propose and implement an interactive, browser-based visualization system that dynamically renders layer-wise model states and attention patterns, leveraging precomputed execution traces of open-source Transformer models on carefully curated inputs. To our knowledge, the system is the first purely front-end interactive teaching tool for LLMs, pairing pedagogically designed inputs with fine-grained visualizations of internal model dynamics, thereby improving model interpretability and supporting deeper learning. The platform is publicly accessible at https://animatedllm.github.io, providing practical support for natural language processing education.
📝 Abstract
Large language models (LLMs) are becoming central to natural language processing education, yet materials that illustrate their inner mechanics are sparse. We present AnimatedLLM, an interactive web application that provides step-by-step visualizations of a Transformer language model. AnimatedLLM runs entirely in the browser, using pre-computed traces of open LLMs applied to manually curated inputs. The application is available at https://animatedllm.github.io, serving both as a classroom teaching aid and as a resource for self-study.