🤖 AI Summary
Existing methods for analyzing neural network activations, such as PCA and sparse autoencoders, rely on strong structural assumptions that limit their flexibility in modeling the internal states of large language models. This work proposes a generative meta-model framework that dispenses with such assumptions by training a diffusion model on one billion residual stream activations, learning their distribution and serving as a prior for interventions. The approach substantially improves the fluency of steering interventions and the effectiveness of concept probing: diffusion loss decreases smoothly with compute and reliably predicts downstream utility, the meta-model's neurons increasingly isolate distinct concepts during training, and sparse probing scores rise consistently as loss decreases, collectively demonstrating the method's validity and scalability.
📝 Abstract
Existing approaches for analyzing neural network activations, such as PCA and sparse autoencoders, rely on strong structural assumptions. Generative models offer an alternative: they can uncover structure without such assumptions and act as priors that improve intervention fidelity. We explore this direction by training diffusion models on one billion residual stream activations, creating "meta-models" that learn the distribution of a network's internal states. We find that diffusion loss decreases smoothly with compute and reliably predicts downstream utility. In particular, applying the meta-model's learned prior to steering interventions improves fluency, with larger gains as loss decreases. Moreover, the meta-model's neurons increasingly isolate concepts into individual units, with sparse probing scores that scale as loss decreases. These results suggest generative meta-models offer a scalable path toward interpretability without restrictive structural assumptions. Project page: https://generative-latent-prior.github.io.