🤖 AI Summary
Large language models (LLMs) pose significant challenges for mechanistic interpretability research due to prohibitive computational costs and a lack of dedicated tooling. To address this, we introduce a synergistic framework comprising NNsight—a PyTorch extension for programmatic model intervention—and NDIF, a distributed inference service. Our key innovation is the *Intervention Graph*, a declarative abstraction that decouples experimental specification from model execution. This enables transparent, neuron-level probing of billion-parameter models—including Llama-3 and Qwen—while supporting low-overhead sharing of GPUs and models across many users. Compared to local model loading, our approach reduces GPU memory consumption by 72% and allows over ten concurrent intervention experiments on a single GPU. The framework fills a critical systems gap in empirical mechanistic research on large AI models and bridges a longstanding methodological divide in LLM interpretability.
📝 Abstract
We introduce NNsight and NDIF, technologies that work in tandem to enable scientific study of very large neural networks. NNsight is an open-source system that extends PyTorch to introduce deferred remote execution. NDIF is a scalable inference service that executes NNsight requests, allowing users to share GPU resources and pretrained models. These technologies are enabled by the intervention graph, an architecture developed to decouple experiment design from model runtime. Together, this framework provides transparent and efficient access to the internals of deep neural networks, such as very large language models (LLMs), without imposing the cost or complexity of hosting customized models individually. We conduct a quantitative survey of the machine learning literature that reveals a growing gap in the study of the internals of large-scale AI. We demonstrate the design and use of our framework to address this gap by enabling a range of research methods on huge models. Finally, we conduct benchmarks to compare performance with previous approaches. Code, documentation, and materials are available at https://nnsight.net/.
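To make the core idea concrete, here is a minimal, self-contained sketch of what "deferred execution via an intervention graph" means: user operations on model internals are recorded as graph nodes instead of running immediately, then resolved in a single execution pass (which, in the real system, can happen on a remote server). This toy is illustrative only; the class and method names (`Node`, `Graph`, `save`, `execute`) are assumptions and do not reflect NNsight's actual implementation or API.

```python
# Toy sketch of an intervention graph: operations are recorded
# declaratively, then executed later in one pass. Illustrative only;
# NOT NNsight's real implementation.

class Node:
    """A deferred operation in the graph; resolving it runs the op."""
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args
        self.value = None

    def resolve(self):
        if self.value is None:
            # Resolve upstream nodes first, pass plain values through.
            args = [a.resolve() if isinstance(a, Node) else a
                    for a in self.args]
            self.value = self.fn(*args)
        return self.value


class Graph:
    """Collects deferred operations; executes saved ones on demand."""
    def __init__(self):
        self.saved = []

    def add(self, fn, *args):
        return Node(fn, *args)

    def save(self, node):
        # Mark a node's value as an experiment output to return.
        self.saved.append(node)
        return node

    def execute(self):
        # In a real system this pass would run alongside (remote)
        # model inference; here it just resolves the recorded ops.
        return [n.resolve() for n in self.saved]


g = Graph()
act = g.add(lambda: [1.0, 2.0, 3.0])                 # stand-in for a layer's activations
patched = g.add(lambda a: [2 * x for x in a], act)   # an intervention: scale them
g.save(patched)
results = g.execute()
print(results)  # [[2.0, 4.0, 6.0]]
```

The key property this sketch shows is the decoupling the abstract describes: the experiment (which internals to read, how to modify them) is fully specified before any computation runs, so the same specification can be shipped to a shared inference service and executed there.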