NanoKnow: How to Know What Your Language Model Knows

📅 2026-02-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the opacity of parametric knowledge in large language models (LLMs) by introducing NanoKnow, a benchmark constructed using the fully open-source nanochat model family, which is trained on transparent pre-training data. The benchmark categorizes questions from Natural Questions and SQuAD based on whether their answers appear in the pre-training corpus, enabling a clean disentanglement of parametric knowledge from external knowledge sources. Through closed-book and evidence-augmented question answering experiments across eight nanochat checkpoints, combined with frequency statistics and contextual interference analysis, the study finds that closed-book accuracy strongly depends on answer frequency in the pre-training data. While external evidence partially mitigates this dependency, it cannot fully substitute for parametric knowledge. Moreover, both the quantity and placement of irrelevant context significantly degrade model performance.

📝 Abstract
How do large language models (LLMs) know what they know? Answering this question has been difficult because pre-training data is often a "black box" -- unknown or inaccessible. The recent release of nanochat -- a family of small LLMs with fully open pre-training data -- addresses this as it provides a transparent view into where a model's parametric knowledge comes from. Towards the goal of understanding how knowledge is encoded by LLMs, we release NanoKnow, a benchmark dataset that partitions questions from Natural Questions and SQuAD into splits based on whether their answers are present in nanochat's pre-training corpus. Using these splits, we can now properly disentangle the sources of knowledge that LLMs rely on when producing an output. To demonstrate NanoKnow's utility, we conduct experiments using eight nanochat checkpoints. Our findings show: (1) closed-book accuracy is strongly influenced by answer frequency in the pre-training data, (2) providing external evidence can mitigate this frequency dependence, (3) even with external evidence, models are more accurate when answers were seen during pre-training, demonstrating that parametric and external knowledge are complementary, and (4) non-relevant information is harmful, with accuracy decreasing based on both the position and the number of non-relevant contexts. We release all NanoKnow artifacts at https://github.com/castorini/NanoKnow.
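The core of the benchmark construction described above is a partition of QA pairs by whether each gold answer string occurs in the open pre-training corpus, along with answer-frequency counts. The released artifacts define the exact procedure; the sketch below is a simplified illustration of the idea, with hypothetical data structures (a list of document strings and QA dicts with `question`/`answer` keys) and naive case-insensitive substring matching, not the paper's actual implementation.

```python
from collections import Counter

def partition_by_pretraining(questions, corpus_docs):
    """Split QA pairs into 'seen' and 'unseen' depending on whether the
    gold answer string appears anywhere in the pre-training corpus,
    and record per-question answer frequencies."""
    lowered_docs = [doc.lower() for doc in corpus_docs]
    seen, unseen = [], []
    freq = Counter()
    for qa in questions:
        answer = qa["answer"].lower()
        # Naive frequency estimate: total substring occurrences across docs.
        count = sum(doc.count(answer) for doc in lowered_docs)
        freq[qa["question"]] = count
        (seen if count > 0 else unseen).append(qa)
    return seen, unseen, freq

# Toy usage with a two-document "corpus" (illustrative data only).
corpus = [
    "Ottawa is the capital of Canada.",
    "The capital of Canada is Ottawa, not Toronto.",
]
qas = [
    {"question": "What is the capital of Canada?", "answer": "Ottawa"},
    {"question": "Who wrote Hamlet?", "answer": "Shakespeare"},
]
seen, unseen, freq = partition_by_pretraining(qas, corpus)
print(len(seen), len(unseen))                      # 1 1
print(freq["What is the capital of Canada?"])      # 2
```

At real pre-training scale, substring scans over raw text would be replaced by an index or n-gram counting pipeline, but the seen/unseen split and frequency signal are the same quantities the paper's experiments condition on.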
Problem

Research questions and friction points this paper is trying to address.

large language models
parametric knowledge
pre-training data
knowledge source
model transparency
Innovation

Methods, ideas, or system contributions that make the work stand out.

knowledge disentanglement
transparent pre-training data
parametric knowledge
external evidence
frequency dependence
Lingwei Gu
University of Waterloo
Nour Jedidi
University of Waterloo
Jimmy Lin
University of Waterloo
information retrieval, natural language processing, data management, big data