A Unified Framework for Debiased Machine Learning: Riesz Representer Fitting under Bregman Divergence

📅 2026-01-12
📈 Citations: 2
Influential: 0
🤖 AI Summary
This study addresses the efficient and accurate estimation of Riesz representers in debiased machine learning to enable robust inference of causal and structural parameters. To this end, it proposes the Generalized Riesz Regression (GRR) framework, which incorporates Bregman divergence minimization into representer modeling, thereby unifying approaches based on the squared loss and the KL divergence. Under suitable pairs of divergence and model specification, GRR automatically achieves covariate balance and Neyman orthogonality without requiring separate estimation of nuisance regression functions. The method supports modeling in both reproducing kernel Hilbert spaces (RKHS) and neural networks, is backed by duality analysis and connections to density ratio estimation, and provides convergence guarantees for both model classes. An open-source Python package, genriesz, is released to facilitate flexible and efficient Riesz representer estimation.

📝 Abstract
Estimating the Riesz representer is central to debiased machine learning for causal and structural parameter estimation. We propose generalized Riesz regression, a unified framework for estimating the Riesz representer by fitting a representer model via Bregman divergence minimization. This framework includes various divergences as special cases, such as the squared distance and the Kullback--Leibler (KL) divergence, where the former recovers Riesz regression and the latter recovers tailored loss minimization. Under suitable pairs of divergence and model specifications (link functions), the dual problems of the Riesz representer fitting problem correspond to covariate balancing, which we call automatic covariate balancing. Moreover, under the same specifications, the sample average of outcomes weighted by the estimated Riesz representer satisfies Neyman orthogonality even without estimating the regression function, a property we call automatic Neyman orthogonalization. This property not only reduces the estimation error of Neyman orthogonal scores but also clarifies a key distinction between debiased machine learning and targeted maximum likelihood estimation (TMLE). Our framework can also be viewed as a generalization of density ratio fitting under Bregman divergences to Riesz representer estimation, and it applies beyond density ratio estimation. We provide convergence analyses for both reproducing kernel Hilbert space (RKHS) and neural network model classes. A Python package for generalized Riesz regression is released as genriesz and is available at https://github.com/MasaKat0/genriesz.
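The squared-loss special case described in the abstract (Riesz regression) can be sketched in a few lines of numpy for the ATE functional m(x; g) = g(1, w) − g(0, w). Everything below — the feature map, the data-generating process, and the closed-form least-squares solver — is an illustrative assumption for exposition, not the genriesz package API:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
w = rng.normal(size=(n, 1))
e = 1 / (1 + np.exp(-w[:, 0]))  # true propensity score
d = rng.binomial(1, e)          # treatment indicator

def phi(d, w):
    # simple linear feature map for the representer model alpha(x) = theta @ phi(x)
    return np.column_stack([d, d * w[:, 0], 1 - d, (1 - d) * w[:, 0]])

Phi = phi(d, w)
# the functional applied to each feature: m(x; phi) = phi(1, w) - phi(0, w)
M = phi(np.ones(n), w) - phi(np.zeros(n), w)

# minimize the empirical Riesz regression loss E[alpha(X)^2 - 2 m(X; alpha)];
# for a linear model this has the closed form theta = E[phi phi^T]^{-1} E[m(X; phi)]
theta = np.linalg.solve(Phi.T @ Phi / n, M.mean(axis=0))
alpha_hat = Phi @ theta

# compare against the true Riesz representer for the ATE: d/e - (1-d)/(1-e)
alpha_true = d / e - (1 - d) / (1 - e)
print(np.corrcoef(alpha_hat, alpha_true)[0, 1])
```

Note that the first-order condition of this least-squares fit forces the α̂-weighted feature means to equal the m-feature means, i.e. (1/n) Φᵀα̂ = mean(M) exactly; this moment-matching property is the squared-loss instance of the automatic covariate balancing the abstract describes.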
Problem

Research questions and friction points this paper is trying to address.

Riesz representer
debiased machine learning
causal inference
structural parameter estimation
Bregman divergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Riesz representer
Bregman divergence
Neyman orthogonality
covariate balancing
debiased machine learning