Density estimation with LLMs: a geometric investigation of in-context learning trajectories

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the implicit probability density function (PDF) estimation mechanism underlying in-context learning (ICL) in large language models (LLMs). Addressing the fundamental question of how LLMs model distributions, the authors formalize ICL-based density estimation as a **two-parameter adaptive kernel density estimation (KDE)**, in which the kernel bandwidth and shape dynamically adapt to the input prompt, revealing an implicit, geometric form of probabilistic reasoning. Methodologically, they apply Intensive Principal Component Analysis (InPCA) to reduce and visualize the ICL trajectories of LLaMA-2, observing convergence onto similar low-dimensional manifolds across model scales; they further design a lightweight two-parameter model that replicates LLM density estimation behavior. Key contributions include: (i) a theoretical link between LLM density estimation and adaptive KDE; (ii) a characterization of its intrinsic geometric nature; and (iii) open-sourced implementation code and an interactive 3D trajectory visualization tool.
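The paper's exact kernel parametrization is not reproduced here, but the idea of a two-parameter KDE, one parameter for bandwidth and one for kernel shape, can be sketched with a generalized-Gaussian kernel (a common choice, assumed here for illustration; `s = 2` recovers ordinary Gaussian KDE, smaller `s` gives heavier tails):

```python
import numpy as np

def two_param_kde(samples, grid, h, s):
    """Toy two-parameter KDE with a generalized-Gaussian kernel.

    h : bandwidth (kernel width)
    s : shape exponent (s=2 is a Gaussian kernel; s<2 gives
        heavier tails, s>2 a flatter, boxier kernel)
    """
    samples = np.asarray(samples, dtype=float)
    # Scaled distances from every grid point to every sample: (grid, samples)
    z = np.abs(grid[:, None] - samples[None, :]) / h
    # Sum kernel contributions over the in-context samples
    density = np.exp(-z**s).sum(axis=1)
    # Normalize numerically so the estimate integrates to 1 on the grid
    dx = grid[1] - grid[0]
    density /= density.sum() * dx
    return density

# Example: estimate a density from 32 "in-context" observations
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=32)
grid = np.linspace(-4.0, 4.0, 201)
pdf = two_param_kde(samples, grid, h=0.5, s=2.0)  # Gaussian special case
```

In the paper's framing, `h` and `s` are not fixed but adapt to the prompt; the sketch above only shows what a single (bandwidth, shape) pair produces.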

๐Ÿ“ Abstract
Large language models (LLMs) demonstrate remarkable emergent abilities to perform in-context learning across various tasks, including time series forecasting. This work investigates LLMs' ability to estimate probability density functions (PDFs) from data observed in-context; such density estimation (DE) is a fundamental task underlying many probabilistic modeling problems. We leverage the Intensive Principal Component Analysis (InPCA) to visualize and analyze the in-context learning dynamics of LLaMA-2 models. Our main finding is that these LLMs all follow similar learning trajectories in a low-dimensional InPCA space, which are distinct from those of traditional density estimation methods like histograms and Gaussian kernel density estimation (KDE). We interpret the LLaMA in-context DE process as a KDE with an adaptive kernel width and shape. This custom kernel model captures a significant portion of LLaMA's behavior despite having only two parameters. We further speculate on why LLaMA's kernel width and shape differs from classical algorithms, providing insights into the mechanism of in-context probabilistic reasoning in LLMs. Our codebase, along with a 3D visualization of an LLM's in-context learning trajectory, is publicly available at https://github.com/AntonioLiu97/LLMICL_inPCA
Problem

Research questions and friction points this paper is trying to address.

Investigates LLMs' ability to estimate probability density functions (PDFs) from in-context data.
Analyzes in-context learning dynamics of LLaMA-2 models using Intensive Principal Component Analysis (InPCA).
Explains why LLaMA's adaptive kernel width and shape differ from those of classical density estimation methods.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses InPCA to analyze LLM learning dynamics
Interprets LLM density estimation as adaptive KDE
Provides 3D visualization of learning trajectories
T. J. Liu, Cornell University, USA
Nicolas Boullé, Imperial College London, UK
Raphael Sarfati, Cornell University, USA
C. Earls, Cornell University, USA