Counterfactual inference with hidden confounders using implicit generative models

Publication Type: Chapter
Citation: 2018, LNAI 11320, pp. 519–530
Issue Date: 2018-01-01
File: AI2018-CameraReady.pdf (Accepted Manuscript version, 358.39 kB, Adobe PDF)
© Springer Nature Switzerland AG 2018.

In observational studies, a key problem is to estimate the causal effect of a treatment on some outcome. Counterfactual inference addresses this by directly learning the treatment exposure surfaces. One of the biggest challenges in counterfactual inference is the existence of unobserved confounders: latent variables that affect both the treatment and the outcome. Building on recent advances in latent variable modelling and efficient Bayesian inference, deep latent variable models such as variational auto-encoders (VAEs) have been used to ease this challenge by learning the latent confounders from observations. However, for the sake of tractability, existing methods assume the posterior over the latent variables is Gaussian with a diagonal covariance matrix. This specification is quite restrictive, and may even contradict the underlying truth, limiting the quality of the resulting generative models and of the causal effect estimates. In this paper, we propose to take advantage of implicit generative models to circumvent this limitation by using black-box inference models. To perform inference for an implicit generative model with an intractable likelihood, we adopt recent implicit variational inference based on adversarial training, which yields a close approximation to the true posterior. Experiments on simulated and real data show that the proposed method matches the state of the art.
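The core idea the abstract references, replacing an explicit (diagonal-Gaussian) posterior with a black-box sampler and recovering the intractable density ratio via a classifier, can be sketched in a toy example. This is only an illustration of adversarial density-ratio estimation in general (as in implicit variational inference), not the paper's actual model; the 1-D sampler, the quadratic-feature discriminator, and all names here are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def implicit_posterior_sample(x, n):
    """Implicit posterior: a black-box sampler z = f(x, eps).

    Its density is intractable, so unlike a diagonal-Gaussian VAE
    posterior we can never evaluate log q(z | x) in closed form.
    (Toy 1-D example; the push-forward is arbitrary and non-Gaussian.)
    """
    eps = rng.normal(size=n)
    return np.tanh(x + eps) + 0.1 * eps**3

def prior_sample(n):
    """Prior p(z): standard normal."""
    return rng.normal(size=n)

def train_ratio_estimator(z_q, z_p, steps=2000, lr=0.1):
    """Adversarial (density-ratio) trick.

    Train a classifier T(z) to separate posterior samples (label 1)
    from prior samples (label 0). At the optimum its logit estimates
    log q(z|x) - log p(z), which implicit variational inference plugs
    into the ELBO in place of the intractable KL term.
    Here T is just logistic regression on features [z, z^2, 1].
    """
    feats = lambda z: np.stack([z, z**2, np.ones_like(z)], axis=1)
    X = np.concatenate([feats(z_q), feats(z_p)])
    y = np.concatenate([np.ones(len(z_q)), np.zeros(len(z_p))])
    w = np.zeros(3)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)   # gradient step on logistic loss
    return lambda z: feats(z) @ w          # logit(z) ~ log q(z|x) - log p(z)

x = 0.5
z_q = implicit_posterior_sample(x, 4000)
z_p = prior_sample(4000)
log_ratio = train_ratio_estimator(z_q, z_p)

# Monte Carlo estimate of KL(q(z|x) || p(z)) from the learned log-ratio:
# this is the quantity a diagonal-Gaussian VAE would compute analytically.
kl_estimate = log_ratio(z_q).mean()
print(f"estimated KL(q || p) ~ {kl_estimate:.3f}")
```

The point of the sketch is the division of labour: the sampler is free to produce an arbitrarily non-Gaussian posterior, and the discriminator supplies the log-density-ratio term that an explicit posterior would otherwise have to provide in closed form.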