Causal effect estimation relies heavily on identifying a valid adjustment set. For example, when estimating direct effects in DAGs under causal sufficiency and linearity assumptions, regressing the effect variable on all of its parents yields an unbiased estimator of the true causal effect. However, whenever an edge is wrongly estimated or wrongly oriented, the direct effect estimate can be heavily biased, which we see for real data or for toy models with built-in assumption violations. Wrong orientations can occur due to small-sample effects (statistical tests returning a wrong result), faithfulness violations, or the PC algorithm being applied to data with hidden confounding (in which case the FCI algorithm should have been used). Think about how to indicate this uncertainty whenever there are orientation conflicts.
For example, consider this toy model:
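A minimal sketch of such a setup, assuming a hypothetical linear Gaussian SCM with variables Z, X, Y where Z confounds X and Y: if the edge Z → Y is missed or wrongly oriented, Z drops out of the adjustment set for the effect of X on Y, and the regression estimate is biased.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical linear Gaussian SCM: Z -> X, Z -> Y, X -> Y
# (variable names and coefficients are illustrative assumptions)
Z = rng.normal(size=n)
X = 0.8 * Z + rng.normal(size=n)
Y = 0.5 * X + 0.7 * Z + rng.normal(size=n)

def ols(y, *regressors):
    """Least-squares coefficients of y on the given regressors (plus intercept)."""
    A = np.column_stack(regressors + (np.ones(len(y)),))
    return np.linalg.lstsq(A, y, rcond=None)[0]

# Correct adjustment: regress Y on all of its parents {X, Z}.
# The coefficient on X recovers the true direct effect of 0.5.
print("adjusting for Z:", ols(Y, X, Z)[0])

# If Z -> Y is missed or wrongly oriented, Z is not treated as a parent of Y
# and is dropped from the adjustment set. The estimate then absorbs the
# confounding path X <- Z -> Y, giving roughly
# 0.5 + 0.7 * cov(Z, X) / var(X) ~ 0.84 instead of 0.5.
print("omitting Z:    ", ols(Y, X)[0])
```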