In-Context Learning as Nonparametric Conditional Probability Estimation: Risk Bounds and Optimality

📅 2025-08-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper studies conditional probability estimation for multiclass classification via in-context learning (ICL), measuring performance by the expected excess risk, i.e., the average truncated KL divergence between predicted and ground-truth conditional class distributions. We derive a novel KL-based oracle inequality and, for the first time in a nonparametric setting, establish that ICL estimators achieve the minimax optimal convergence rate for conditional probability estimation. Moreover, we show that under controlled uniform empirical covering entropy, both Transformers and standard MLPs attain this optimality. By unifying the analysis of the log-likelihood function classes of Transformers and MLPs, we derive tight upper and lower bounds on the expected excess risk. These results rigorously confirm that ICL achieves statistically optimal efficiency under appropriate design conditions.
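For concreteness, one plausible formalization of this criterion is sketched below. The truncation level T, the notation, and the exact form of the truncation are our assumptions for illustration and may differ from the paper's definitions.

```latex
% A sketch of the evaluation criterion under assumed notation:
% p_tau is the ground-truth conditional class distribution of task tau,
% \hat{p} is the model's estimate given the prompt S_tau and query x,
% and T is an assumed truncation level.
\[
  \mathrm{KL}_T(p \,\|\, q)
  \;=\; \sum_{k=1}^{K} p_k \, \min\!\Big\{ \log \frac{p_k}{q_k},\, T \Big\},
\]
\[
  \mathcal{R}(\hat{p})
  \;=\; \mathbb{E}_{\tau,\, S_\tau,\, x}\Big[
    \mathrm{KL}_T\big( p_\tau(\cdot \mid x) \,\big\|\, \hat{p}(\cdot \mid x, S_\tau) \big)
  \Big].
\]
```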

πŸ“ Abstract
This paper investigates the expected excess risk of In-Context Learning (ICL) for multiclass classification. We model each task as a sequence of labeled prompt samples and a query input, where a pre-trained model estimates the conditional class probabilities of the query. The expected excess risk is defined as the average truncated Kullback-Leibler (KL) divergence between the predicted and ground-truth conditional class distributions, averaged over a specified family of tasks. We establish a new oracle inequality for the expected excess risk based on KL divergence in multiclass classification. This allows us to derive tight upper and lower bounds for the expected excess risk in transformer-based models, demonstrating that the ICL estimator achieves the minimax optimal rate, up to a logarithmic factor, for conditional probability estimation. From a technical standpoint, our results introduce a novel method for controlling generalization error using the uniform empirical covering entropy of the log-likelihood function class. Furthermore, we show that multilayer perceptrons (MLPs) can also perform ICL and achieve this optimal rate under specific assumptions, suggesting that transformers may not be the exclusive architecture capable of effective ICL.
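As a concrete illustration of the setup described in the abstract, the sketch below evaluates a generic ICL estimator by averaging truncated KL divergences over a family of tasks. The `model(prompt, query)` interface, the truncation level, and the clipping constant are hypothetical choices for illustration, not the paper's construction.

```python
import numpy as np

def truncated_kl(p, q, trunc=10.0, eps=1e-12):
    """Truncated KL divergence: each log-ratio term is capped at `trunc`."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.minimum(np.log(p / q), trunc)))

def expected_excess_risk(model, tasks, trunc=10.0):
    """Average truncated KL over a family of tasks.

    Each task is (prompt, query, p_true): a sequence of labeled samples,
    a query input, and the query's ground-truth class distribution.
    `model(prompt, query)` returns predicted class probabilities.
    """
    risks = [truncated_kl(p_true, model(prompt, query), trunc)
             for prompt, query, p_true in tasks]
    return float(np.mean(risks))

# Toy usage: a "model" that ignores the prompt and predicts uniformly.
if __name__ == "__main__":
    uniform = lambda prompt, query: np.full(3, 1.0 / 3.0)
    tasks = [(None, None, np.array([0.7, 0.2, 0.1]))]
    print(expected_excess_risk(uniform, tasks))
```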
Problem

Research questions and friction points this paper is trying to address.

Analyzing excess risk in In-Context Learning for multiclass classification
Establishing risk bounds for conditional probability estimation in transformers
Demonstrating that non-transformer architectures can achieve optimal ICL performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

KL divergence bounds for ICL risk
Transformer models achieve minimax optimality
MLPs perform ICL under specific conditions
Chenrui Liu
Department of Statistics, Beijing Normal University at Zhuhai, Zhuhai, China
Falong Tan
Department of Statistics and Data Science, Hunan University, Changsha, China
Chuanlong Xie
Beijing Normal University
Yicheng Zeng
School of Science, Sun Yat-sen University, Shenzhen, China
Lixing Zhu
Department of Statistics, Beijing Normal University at Zhuhai, Zhuhai, China