Causality, Probability, and Time (by Kleinberg)—a review
8 August 2014
Kleinberg, Samantha
Causality, probability, and time. Cambridge University Press, Cambridge, 2013. viii+259 pp. ISBN: 978-1-107-02648-3
60A99 (03A05 03B48 62A01 62P99 68T27 91G80 92C20)
This informative and engaging book introduces a novel method for inferring the causes of events, based on the assumption that each cause changes the frequency-type probability of some effect occurring later in time. Unlike most previous approaches to causal inference, the book's method explicitly models the time lag between a cause and its effect, since timing is often crucial to effective prediction and control.
Arguably an equally valuable contribution of the book is its integration of relevant work in philosophy, computer science, and statistics. While the first two disciplines have benefited from the productive interactions exemplified in [J. Pearl, Probabilistic reasoning in intelligent systems: networks of plausible inference, Morgan Kaufmann Ser. Represent. Reason., Morgan Kaufmann, San Mateo, CA, 1988; MR0965765 (90g:68003)] and [J. Williamson, Bayesian nets and causality, Oxford Univ. Press, Oxford, 2005; MR2120947 (2005k:68198)], the statistics community has developed its own theory of causal inference in relative isolation. Rather than following S. L. Morgan and C. Winship [Counterfactuals and causal inference: methods and principles for social research, Cambridge Univ. Press, New York, 2007] and others in bringing that theory into conversation with that of Pearl [op. cit.], the author creatively employs recent developments in statistical inference to identify causes.
For the specific situation in which many putative causes are tested but only a few are true causes, she explains how to estimate the local rate of discovering false causes. In this context, the local false discovery rate (LFDR) corresponding to a putative cause is a posterior probability that it is not a true cause. This is an example of an empirical Bayes method in that the prior distribution is estimated from the data rather than assigned.
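For readers unfamiliar with the machinery, the following is a minimal sketch of LFDR estimation in the standard two-groups empirical Bayes setting; the z-score model, the kernel density estimator, and every name below are illustrative assumptions of this review, not code or notation from the book.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

def estimate_lfdr(z, pi0=None):
    """Local false discovery rate under a two-groups model:
    lfdr(z) = pi0 * f0(z) / f(z), with theoretical null f0 = N(0, 1)
    and the mixture density f estimated from the observed z-scores."""
    z = np.asarray(z, dtype=float)
    f = gaussian_kde(z)(z)   # estimated mixture density f(z)
    f0 = norm.pdf(z)         # theoretical null density f0(z)
    if pi0 is None:
        # Crude estimate of the null proportion: treat scores near zero
        # as mostly null (a common, conservative heuristic).
        pi0 = min(1.0, np.mean(np.abs(z) < 1.0) / (norm.cdf(1.0) - norm.cdf(-1.0)))
    return np.clip(pi0 * f0 / f, 0.0, 1.0)

# Toy data: 900 null causes (z ~ N(0,1)) and 100 true causes (z ~ N(3,1)).
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0.0, 1.0, 900), rng.normal(3.0, 1.0, 100)])
lfdr = estimate_lfdr(z)
print("putative causes declared true (lfdr < 0.05):", int(np.sum(lfdr < 0.05)))
```

Estimating the null proportion and the mixture density from the data, rather than assigning them in advance, is exactly what makes the procedure empirical Bayes rather than fully Bayesian.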
Building on [P. Suppes, A probabilistic theory of causality, North-Holland, Amsterdam, 1970; MR0465774 (57 #5663)], the book emphasizes the importance for prediction not only of whether something is a cause but also of the strength of a cause. A cause is ε-significant if its causal strength, defined in terms of changing the probability of its effect, is at least ε, where ε is some nonnegative number. Otherwise, it is ε-insignificant.
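As a toy illustration of the definition, the fragment below classifies a cause using a simple probability difference as the strength measure; this simplifies the book's measure, which averages probability changes over background contexts, so take it only as an illustration.

```python
def causal_strength(p_effect_given_cause, p_effect_given_other):
    """Toy strength measure: the change in the effect's probability."""
    return p_effect_given_cause - p_effect_given_other

def is_epsilon_significant(strength, eps):
    """eps-significant if the strength is at least eps in magnitude."""
    return abs(strength) >= eps

print(is_epsilon_significant(causal_strength(0.40, 0.25), eps=0.10))  # True
print(is_epsilon_significant(causal_strength(0.40, 0.38), eps=0.10))  # False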
The author poses an important problem, that of inferring whether a cause is ε-significant, and comes close to solving it. The solution attempted in Section 4.2 confuses causal significance (ε-significance) with statistical significance (an LFDR estimate below some small positive number α). This is by no means a fatal criticism of the approach since it can be remedied in principle by defining a false discovery as a discovery of an ε-insignificant cause. This tests the null hypothesis that the cause is ε-insignificant for a specified value of ε rather than the book's null hypothesis, which in effect asserts that the cause is ε-insignificant in the limit as ε → 0, i.e., ε-insignificant for all ε > 0. In the case of a specified value of ε, a cause should be considered ε-significant if the estimated LFDR is less than α, provided that the LFDR is defined in terms of the null hypothesis of ε-insignificance. The need to fill in the technical details and to answer more general questions arising from this distinction between causal significance and statistical significance opens up exciting opportunities for further research guided by insights from the literature on seeking substantive significance as well as statistical significance [see, e.g., M. A. van de Wiel and K. I. Kim, Biometrics 63 (2007), no. 3, 806–815; MR2395718].
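To indicate how that remedy might look in practice, the following sketch computes the LFDR attached to the interval null of ε-insignificance under a hypothetical two-groups model for causal strengths; all parameter values and function names are assumptions made for illustration, not the book's procedure.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical two-groups model for a cause's strength theta:
#   theta = 0 exactly, with prior probability pi0 (no effect), or
#   theta ~ N(mu1, tau1**2) otherwise; the estimate is z | theta ~ N(theta, 1).
pi0, mu1, tau1 = 0.9, 3.0, 1.0

def lfdr_eps_insignificance(z, eps):
    """Posterior probability that |theta| <= eps, i.e. that the cause is
    eps-insignificant: the LFDR for the interval null described above."""
    f0 = norm.pdf(z)                                      # marginal given theta = 0
    f1 = norm.pdf(z, loc=mu1, scale=np.hypot(tau1, 1.0))  # marginal given theta != 0
    post0 = pi0 * f0 / (pi0 * f0 + (1.0 - pi0) * f1)      # P(theta = 0 | z)
    # Given theta != 0, theta | z is normal (conjugate update):
    post_var = tau1**2 / (tau1**2 + 1.0)
    post_mean = post_var * z + (1.0 - post_var) * mu1
    p_small = (norm.cdf(eps, post_mean, np.sqrt(post_var))
               - norm.cdf(-eps, post_mean, np.sqrt(post_var)))
    # theta = 0 always satisfies |theta| <= eps, so combine both components.
    return post0 + (1.0 - post0) * p_small

alpha, eps = 0.05, 0.5
for z in (1.0, 3.0, 5.0):
    lfdr = lfdr_eps_insignificance(z, eps)
    print(f"z = {z:.1f}: LFDR = {lfdr:.4f}; eps-significant: {lfdr < alpha}")
```

The interval null folds in any posterior mass that the non-null component places within (−ε, ε), which is precisely what distinguishes it from the book's point null.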
Reviewed by David R. Bickel
This review first appeared at Causality, Probability, and Time (Mathematical Reviews) and is used with permission from the American Mathematical Society.