How to Build Efficient Causal Effect Estimators for Exponential‑Family Outcomes
This article presents a unified framework for efficiently estimating causal treatment effects on exponential‑family outcomes. It extends target regularization beyond Gaussian assumptions, derives a bias analysis for plug‑in estimators, proposes doubly robust (DR) and targeted maximum likelihood estimation (TMLE) based estimators, and validates them on synthetic and real datasets.
Background
Efficient causal inference requires estimating target estimands such as the average treatment effect (ATE) under the standard assumptions (SUTVA, unconfoundedness, overlap). The goal is to construct an efficient (minimum‑variance) unbiased estimator. Within the semiparametric framework, doubly robust (DR) and targeted maximum likelihood estimation (TMLE) are two widely studied approaches. Dragonnet introduced target regularization to embed TMLE theory into neural‑network loss functions for binary treatments, and VCNet extended it to continuous treatments.
Motivation
Existing neural‑network‑based efficient estimators are limited to Gaussian outcome distributions, whereas many practical scenarios involve Bernoulli, Poisson, or other exponential‑family outcomes. We aim to generalize target regularization to arbitrary exponential‑family outcomes.
Problem Setting
Let A denote a one‑dimensional treatment (binary or continuous), X the covariates, and Y the outcome following a single‑parameter exponential dispersion family (EDF) with cumulant function B(·), dispersion φ, normalizing term c(·), and natural parameter η. The conditional mean of Y is μ = B′(η).
Under the EDF, the average dose canonical function (ADCF), the dose–response curve defined on the canonical (natural) parameter scale, serves as a unified definition of the causal estimand for both binary and continuous treatments.
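To make the setup concrete, the sketch below maps the natural parameter η to the conditional mean μ = B′(η) via the cumulant function for three common single‑parameter EDF members. The function and dictionary names are illustrative, not from the paper.

```python
import math

# Cumulant functions B(eta) and their derivatives B'(eta) for three
# common EDF members. The conditional mean is mu = B'(eta).
CUMULANTS = {
    # family: (B, B')
    "gaussian":  (lambda eta: 0.5 * eta ** 2,
                  lambda eta: eta),
    "bernoulli": (lambda eta: math.log1p(math.exp(eta)),
                  lambda eta: 1.0 / (1.0 + math.exp(-eta))),
    "poisson":   (lambda eta: math.exp(eta),
                  lambda eta: math.exp(eta)),
}

def mean_from_natural(family: str, eta: float) -> float:
    """mu = B'(eta): map the natural parameter to the conditional mean."""
    _, b_prime = CUMULANTS[family]
    return b_prime(eta)
```

Note how the three familiar link inverses (identity, sigmoid, exp) all arise from the same rule μ = B′(η), which is what lets the framework treat the families uniformly.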
Plug‑in Estimator and Bias Analysis
The plug‑in estimator follows the Dragonnet/VCNet architecture. Its loss combines the negative log‑likelihood of the EDF outcome model with a (generalized) propensity score term. After training, the plug‑in estimate of the ADCF is formed from the fitted outcome model, and a von Mises expansion of the ADCF shows that its bias is dominated by a first‑order term involving the product of the errors of the two nuisance functions.
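The two‑part training loss can be sketched as follows, assuming a Poisson outcome and a binary treatment for concreteness; `eta_hat` and `pi_hat` stand for the network's outcome‑head and propensity‑head outputs, and all names are hypothetical.

```python
import numpy as np

def plugin_loss(y, a, eta_hat, pi_hat):
    """Two-part plug-in loss: EDF negative log-likelihood plus a
    propensity term. Here: Poisson outcome (B(eta) = exp(eta)) and
    binary treatment, so the propensity term is a cross-entropy.
    y, a: observed outcome and treatment; eta_hat, pi_hat: model outputs.
    """
    # Poisson NLL up to the y-only normalizer c(y):  B(eta) - y * eta
    nll = np.mean(np.exp(eta_hat) - y * eta_hat)
    # Propensity (treatment-model) cross-entropy
    eps = 1e-12
    ce = -np.mean(a * np.log(pi_hat + eps)
                  + (1 - a) * np.log(1 - pi_hat + eps))
    return nll + ce
```

Swapping in another family only changes the cumulant term `B(eta)` inside the NLL; the propensity term is unchanged.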
Construction of Efficient Estimators
We correct the first‑order bias of the plug‑in estimator directly, yielding a DR estimator for exponential‑family outcomes. TMLE instead perturbs the fitted distribution so that the first‑order bias is exactly zero; this leads to a new loss that adds a target‑regularization term whose derivative with respect to the perturbation parameter equals the first‑order bias. When this term is driven to its minimum during training, the derivative, and hence the first‑order bias, vanishes, yielding an asymptotically efficient estimator.
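The bias‑correction step has the familiar AIPW structure. The sketch below shows it on the mean scale for a binary treatment; the paper's estimator targets the canonical‑parameter scale, but the shape of the correction term is the same, and all argument names are illustrative.

```python
import numpy as np

def dr_ate(y, a, mu1, mu0, pi):
    """AIPW/doubly-robust ATE estimate on the mean scale (binary treatment).
    mu1, mu0: outcome-model predictions under a=1 and a=0; pi: propensity.
    The inverse-propensity-weighted residual cancels the first-order bias
    of the plug-in contrast mean(mu1 - mu0)."""
    mu_a = np.where(a == 1, mu1, mu0)
    correction = (a / pi - (1 - a) / (1 - pi)) * (y - mu_a)
    return np.mean(mu1 - mu0 + correction)
```

When the outcome model fits perfectly (y equals the predicted mean under the observed treatment), the correction term is identically zero and the DR estimate reduces to the plug‑in contrast.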
Specific target‑regularization forms are derived for Gaussian, Bernoulli, and Poisson outcomes by substituting the appropriate cumulant and link functions.
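A common way to express such family‑specific terms is as the EDF negative log‑likelihood evaluated on a fluctuated natural parameter η + ε·h, where h plays the role of an inverse‑propensity "clever covariate". The sketch below illustrates this fluctuation for the three families; it is a simplified illustration under these assumptions, not the paper's exact regularizer.

```python
import numpy as np

def fluctuated_nll(family, y, eta, h, eps):
    """EDF negative log-likelihood (up to the y-only term c(y)) on the
    fluctuated natural parameter eta + eps * h. Minimizing over eps
    drives the derivative -- the first-order bias term -- to zero."""
    eta_f = eta + eps * h
    if family == "gaussian":      # B(eta) = eta^2 / 2
        b = 0.5 * eta_f ** 2
    elif family == "bernoulli":   # B(eta) = log(1 + e^eta)
        b = np.log1p(np.exp(eta_f))
    elif family == "poisson":     # B(eta) = e^eta
        b = np.exp(eta_f)
    else:
        raise ValueError(family)
    return np.mean(b - y * eta_f)
```

Only the cumulant branch changes per family, which is why substituting cumulant and link functions yields the Gaussian, Bernoulli, and Poisson regularization forms.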
Experiments
For Gaussian outcomes, our method coincides with the original Dragonnet/VCNet regularization and is validated in prior work. For Bernoulli and Poisson outcomes, we evaluate on synthetic data and semi‑synthetic datasets (News, TCGA), using ATE (binary treatment) and dose‑response metrics (continuous treatment). Results demonstrate superior performance over baselines.
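The evaluation metrics named above can be sketched as follows. This is an illustrative implementation only, assuming the ATE metric is an absolute error and the dose‑response metric is a mean squared error over a treatment grid; the paper's exact metric definitions may differ.

```python
import numpy as np

def ate_error(psi_hat, psi_true):
    """Absolute error of an ATE estimate (binary-treatment metric)."""
    return abs(psi_hat - psi_true)

def dose_response_error(curve_hat, curve_true):
    """Mean squared error between estimated and true dose-response
    curves, evaluated on a shared grid of treatment values."""
    return float(np.mean((curve_hat - curve_true) ** 2))
```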
Conclusion
We have developed a unified approach to construct efficient estimators for exponential‑family outcomes by extending target regularization. Theoretical analysis provides bias decomposition and convergence rates, and empirical studies confirm the effectiveness of the proposed models.
References
Shi C, Blei D, Veitch V. Adapting neural networks for the estimation of treatment effects. NeurIPS 2019.
Nie L, Ye M, Nicolae D. VCNet and functional targeted regularization for learning causal effects of continuous treatments. ICLR 2021.
Li J, Yang Z, Dan J, et al. Treatment Effect Estimation for Exponential Family Outcomes using Neural Networks with Targeted Regularization. arXiv 2025.
Gao Z, Hastie T. Estimating heterogeneous treatment effects for general responses. arXiv 2021.