Estimating controlled direct effects

This vignette demonstrates how to estimate the controlled direct effect of some treatment, net of the effect of some mediator, using the sequential g-estimation approach described by Acharya, Blackwell, and Sen (2016). With standard regression and matching approaches, estimating the controlled direct effect is difficult because it often requires adjusting for covariates that causally affect the mediator and are causally affected by the treatment. Including such variables in a regression model can lead to post-treatment bias when estimating the direct effect of treatment. A large class of models and estimation approaches has been developed to avoid this problem, but here we focus on one called sequential g-estimation. This approach uses a standard linear model to estimate the effect of the mediator and then removes that effect from the dependent variable to estimate the direct effect of treatment without post-treatment bias.
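
To fix ideas, here is a minimal sketch of the two-stage logic behind sequential g-estimation in base R, using hypothetical variable names (a data frame dat with outcome y, treatment a, mediator m, a pre-treatment covariate x, and an intermediate confounder z). It assumes the simplest case with no mediator interactions and ignores the second-stage variance correction discussed in Acharya, Blackwell, and Sen (2016).

# Hypothetical data frame `dat` with outcome y, treatment a, mediator m,
# pre-treatment covariate x, and intermediate (post-treatment) confounder z.

# Stage 1: estimate the effect of the mediator, adjusting for the treatment,
# pre-treatment covariates, and intermediate confounders.
stage1 <- lm(y ~ a + x + z + m, data = dat)

# "Demediate" the outcome by removing the estimated mediator effect.
dat$y_demed <- dat$y - coef(stage1)["m"] * dat$m

# Stage 2: regress the demediated outcome on the treatment and pre-treatment
# covariates; the coefficient on a estimates the ACDE with the mediator fixed at 0.
stage2 <- lm(y_demed ~ a + x, data = dat)
coef(stage2)["a"]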

Quantity of interest

Here we briefly introduce some of the statistical theory behind estimating direct effects. The main quantity of interest is the average controlled direct effect or ACDE. This can be illustrated from the following causal structure (Acharya, Blackwell, and Sen 2016, fig. 3):

[Figure: DAG with direct effects structure]

The average controlled direct effect for a given treatment contrast (Ai = a vs. Ai = a′) and a given value of the mediator (Mi = m) is ACDE(a, a′, m) = E[Yi(a, m) − Yi(a′, m)] and is the total effect along the dashed lines in the figure above. Thus, we hold the mediator constant when comparing the outcome under each treatment. The ACDE is useful for discriminating between causal mechanisms because the average total effect of treatment, τ(a, a′) ≡ E[Yi(a) − Yi(a′)], can be decomposed as the sum of three quantities: (1) the ACDE, (2) the average natural indirect effect, and (3) a reference interaction that measures how much the direct effect of Ai depends on a particular Mi = m. Thus, if the ACDE is non-zero, it is evidence that the effect of Ai is not entirely explained by Mi. We illustrate this with an empirical example below. For more information, see Acharya, Blackwell, and Sen (2016).
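
Writing ANIE(a, a′) for the average natural indirect effect and Δ(a, a′, m) for the reference interaction (a label we introduce here only for compactness), this decomposition can be written schematically as

  τ(a, a′) = ACDE(a, a′, m) + ANIE(a, a′) + Δ(a, a′, m),

so a non-zero ACDE(a, a′, m) implies that the total effect is not driven entirely by the mediator-based terms.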

Implementation

This vignette introduces a new way to design, specify, and implement various estimators for controlled direct effects using a coding style that we refer to as the “assembly line.” Users can specify the various components of the estimators sequentially and easily swap out implementation details such as what exact estimators are used for the estimation of each nuisance function or what arguments to pass to these functions. A unified structure and output across different research designs and estimators should make comparing estimators very easy. Finally, this implementation is built around the idea of cross-fitting for nuisance function estimation.

Let’s see how this works in practice. We first load a subset of the jobcorps data from the package:

data("jobcorps")
jc <- jobcorps[
  1:200,
  c("treat", "female", "age_cat", "work2year2q", "pemplq4", "emplq4", "exhealth30")
]

This is the Job Corps experiment data used by Huber (2014) to estimate the effect of a randomized job training program on self-assessed health, holding constant the intermediate outcome of employment. We use a subset of the variables from that study; the basic variables we use here are:

  • Yi is an indicator for whether participants reported “very good” health 2.5 years after randomization (exhealth30)
  • Ai is an indicator for assignment to the job training program (treat)
  • Mi is an indicator for employment 1 to 1.5 years after assignment (work2year2q)
  • Zi are the post-treatment, pre-mediator intermediate confounders (emplq4, pemplq4)
  • Xi are the pre-treatment characteristics (female, age_cat)
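
Because the CDE contrasts below hold both the treatment and the mediator at fixed values, it can be helpful to see how often each combination of the two occurs in the sample. A quick cross-tabulation of the subset jc (a convenience check only, not part of the estimation workflow) is one way to do this:

# rows: treatment assignment; columns: employment 1-1.5 years after assignment
with(jc, table(treat, work2year2q))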

In the context of our CDE estimators, we refer to both Ai (the treatment) and Mi (the mediator) as treatment variables because we are interested in causal effects that contrast different combinations of these two variables.

The first step in our assembly line is to choose the type of CDE estimator (or, really, estimation strategy) that we want to pursue. These choices include IPW (cde_ipw()), regression imputation (cde_reg_impute()), and augmented IPW (cde_aipw()), among others. These functions take different arguments to specify different aspects of the estimator. The AIPW estimator is useful to demonstrate because it uses many of the features of the assembly line approach.

my_aipw <- cde_aipw() |>
  set_treatment(treat, ~ female + age_cat) |>
  treat_model(engine = "logit") |>
  outreg_model(engine = "lm") |>
  set_treatment(work2year2q, ~ emplq4 + pemplq4) |>
  treat_model(engine = "logit") |>
  outreg_model(engine = "lm") |>
  estimate(exhealth30 ~ treat + work2year2q, data = jobcorps)

The call here consists of several steps connected by R’s pipe operator. The first call cde_aipw() simply initializes the estimator as taking the AIPW form, meaning that it will use both propensity score models and outcome regressions to estimate the causal effects.

Following this we have two blocks of 3 lines each that look similar with a few differences. These “treatment blocks” specify the causal variables of interest in their causal ordering. In set_treatment() we set the name of the treatment variable and give a formula describing the pre-treatment covariates for this treatment. This formula, unless overridden by the next two functions, will be used in the fitting of the propensity score and outcome regression models. treat_model() allows the user to specify how to fit the propensity score model for this treatment with a choice of engine and optional arguments to pass to each engine. outreg_model() does the same for the outcome regression. We repeat this specification for the mediator.
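
For instance, because treat_model() and outreg_model() can override the covariates given in set_treatment(), we could fit the propensity score for treat with a smaller covariate set than the outcome regression. The sketch below shows one way this might look; the exact argument used to pass a per-model formula is our assumption, so consult the documentation of treat_model() and outreg_model() for the precise signatures.

my_aipw_alt <- cde_aipw() |>
  set_treatment(treat, ~ female + age_cat) |>
  treat_model(formula = ~ female, engine = "logit") |>  # assumed syntax for a per-model formula
  outreg_model(engine = "lm") |>
  set_treatment(work2year2q, ~ emplq4 + pemplq4) |>
  treat_model(engine = "logit") |>
  outreg_model(engine = "lm") |>
  estimate(exhealth30 ~ treat + work2year2q, data = jobcorps)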

The estimate() function finally implements the specifications given in the previous commands. The user passes a formula that sets the outcome and indicates which of the treatment variables we want effects for, along with the data frame where all of these variables can be found. By default, the nuisance functions (the propensity score model and the outcome regression) are fit using cross-fitting.

We can use either the summary() or tidy() function to get a summary of the different effects being estimated:

summary(my_aipw)
## 
##  aipw CDE Estimator
## ---------------------------
## Causal variable: treat 
## 
## Treatment model: treat ~ female + age_cat 
## Treatment engine: logit 
## 
## Outcome model: ~female + age_cat 
## Outcome engine: lm 
## ---------------------------
## Causal variable: work2year2q 
## 
## Treatment model: work2year2q ~ emplq4 + pemplq4 + female + age_cat 
## Treatment engine: logit 
## 
## Outcome model: ~emplq4 + pemplq4 + female + age_cat 
## Outcome engine: lm 
## ---------------------------
## Cross-fit: TRUE 
## Number of folds: 5 
## 
## Estimated Effects:
##                                  Estimate Std. Error  t value Pr(>|t|) CI Lower
## treat [(1, 0) vs. (0, 0)]        0.030671    0.02262  1.35583  0.17523 -0.01368
## treat [(1, 1) vs. (0, 1)]        0.029219    0.01397  2.09210  0.03647  0.00184
## work2year2q [(0, 1) vs. (0, 0)]  0.002397    0.02071  0.11573  0.90787 -0.03821
## work2year2q [(1, 1) vs. (1, 0)] -0.001117    0.01668 -0.06698  0.94660 -0.03381
##                                 CI Upper   DF
## treat [(1, 0) vs. (0, 0)]        0.07502 3818
## treat [(1, 1) vs. (0, 1)]        0.05660 6207
## work2year2q [(0, 1) vs. (0, 0)]  0.04300 3991
## work2year2q [(1, 1) vs. (1, 0)]  0.03157 6034
tidy(my_aipw)
##                                            term     estimate  std.error
## treat_1_0             treat [(1, 0) vs. (0, 0)]  0.030670749 0.02262139
## treat_1_1             treat [(1, 1) vs. (0, 1)]  0.029218775 0.01396621
## work2year2q_0_1 work2year2q [(0, 1) vs. (0, 0)]  0.002396726 0.02070928
## work2year2q_1_1 work2year2q [(1, 1) vs. (1, 0)] -0.001116924 0.01667574
##                   statistic    p.value     conf.low  conf.high   df
## treat_1_0        1.35582944 0.17523363 -0.013680424 0.07502192 3818
## treat_1_1        2.09210412 0.03646973  0.001840159 0.05659739 6207
## work2year2q_0_1  0.11573201 0.90787076 -0.038205024 0.04299848 3991
## work2year2q_1_1 -0.06697896 0.94660067 -0.033807332 0.03157348 6034

Because the estimate() function included both treat and work2year2q in the formula, the output includes both the controlled direct effects of the treatment and the conditional average treatment effect of the mediator. Furthermore, by default, the output contains estimates for an effect for each combination of other treatment variables. This can be changed by setting the eval_vals argument in the relevant set_treatment() call.
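
For example, to estimate the direct effect of treat only with the mediator held at employment (work2year2q = 1), one might set eval_vals in the mediator's set_treatment() call. The exact format eval_vals expects (a single value, as below) is our assumption, so check the set_treatment() documentation before relying on it.

my_aipw_m1 <- cde_aipw() |>
  set_treatment(treat, ~ female + age_cat) |>
  treat_model(engine = "logit") |>
  outreg_model(engine = "lm") |>
  set_treatment(work2year2q, ~ emplq4 + pemplq4, eval_vals = 1) |>  # assumed usage: evaluate at m = 1 only
  treat_model(engine = "logit") |>
  outreg_model(engine = "lm") |>
  estimate(exhealth30 ~ treat + work2year2q, data = jobcorps)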

The modular structure of the call allows users to easily swap out parts of the models. For example, we could alter the above call to use lasso versions of the linear regression and logit engines.

my_aipw_lasso <- cde_aipw() |>
  set_treatment(treat, ~ female + age_cat) |>
  treat_model(engine = "rlasso_logit") |>
  outreg_model(engine = "rlasso") |>
  set_treatment(work2year2q, ~ emplq4 + pemplq4) |>
  treat_model(engine = "rlasso_logit") |>
  outreg_model(engine = "rlasso") |>
  estimate(exhealth30 ~ treat + work2year2q, data = jobcorps)

Here we use a “rigorous” version of the lasso from the hdm package to speed up computation. There are a number of different engines for different outcome and treatment types including:

  • GLMs: lm (OLS), logit (logistic regression), multinom (multinomial logit)
  • LASSO estimators (from glmnet): lasso, lasso_logit, lasso_multinom
  • Rigorous LASSO estimator (from hdm): rlasso, rlasso_logit

References

Acharya, Avidit, Matthew Blackwell, and Maya Sen. 2016. “Explaining Causal Findings Without Bias: Detecting and Assessing Direct Effects.” American Political Science Review 110 (3): 512–29. https://doi.org/10.1017/S0003055416000216.
Huber, Martin. 2014. “Identifying Causal Mechanisms (Primarily) Based on Inverse Probability Weighting.” Journal of Applied Econometrics 29 (6): 920–43.