Archive for the ‘reviews’ Category

Causality, Probability, and Time (by Kleinberg)—a review

8 August 2014

Kleinberg, Samantha
Causality, probability, and time. Cambridge University Press, Cambridge, 2013. viii+259 pp. ISBN: 978-1-107-02648-3
60A99 (03A05 03B48 62A01 62P99 68T27 91G80 92C20)

This informative and engaging book introduces a novel method of inferring a cause of an event on the basis of the assumption that each cause changes the frequency-type probability of some effect occurring later in time. Unlike most previous approaches to causal inference, the author explicitly models time lags between causes and effects since timing is often crucial to effective prediction and control.
Arguably an equally valuable contribution of the book is its integration of relevant work in philosophy, computer science, and statistics. While the first two disciplines have benefited from the productive interactions exemplified in [J. Pearl, Probabilistic reasoning in intelligent systems: networks of plausible inference, Morgan Kaufmann Ser. Represent. Reason., Morgan Kaufmann, San Mateo, CA, 1988; MR0965765 (90g:68003)] and [J. Williamson, Bayesian nets and causality, Oxford Univ. Press, Oxford, 2005; MR2120947 (2005k:68198)], the statistics community has developed its own theory of causal inference in relative isolation. Rather than following S. L. Morgan and C. Winship [Counterfactuals and causal inference: methods and principles for social research, Cambridge Univ. Press, New York, 2007] and others in bringing that theory into conversation with that of Pearl [op. cit.], the author creatively employs recent developments in statistical inference to identify causes.
For the specific situation in which many putative causes are tested but only a few are true causes, she explains how to estimate the local rate of discovering false causes. In this context, the local false discovery rate (LFDR) corresponding to a putative cause is a posterior probability that it is not a true cause. This is an example of an empirical Bayes method in that the prior distribution is estimated from the data rather than assigned.
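As a concrete illustration, here is a minimal sketch of a two-groups empirical Bayes LFDR estimator of the kind described. The z-score summaries, the normal alternative, and all parameter values are illustrative assumptions, not the book’s implementation.

```python
# A minimal sketch of two-groups empirical Bayes LFDR estimation.
# Assumptions (not from the book): each putative cause is summarized by a
# z-score; null scores are N(0, 1); true causes follow a normal alternative.
import numpy as np
from scipy import optimize, stats

def fit_two_groups(z):
    """Maximum-likelihood fit of f(z) = p0*N(0,1) + (1-p0)*N(mu, sigma^2).

    Estimating the prior (p0, mu, sigma) from the data itself is what
    makes the method empirical Bayes."""
    def nll(params):
        p0, mu, log_sigma = params
        f = (p0 * stats.norm.pdf(z)
             + (1 - p0) * stats.norm.pdf(z, mu, np.exp(log_sigma)))
        return -np.sum(np.log(f + 1e-300))
    res = optimize.minimize(nll, x0=np.array([0.9, 2.0, 0.0]),
                            bounds=[(0.5, 1.0), (-10.0, 10.0), (-3.0, 3.0)])
    return res.x

def lfdr(z, p0, mu, log_sigma):
    """Local false discovery rate: the posterior probability, given a
    score z, that the corresponding putative cause is not a true cause."""
    f0 = stats.norm.pdf(z)
    f = p0 * f0 + (1 - p0) * stats.norm.pdf(z, mu, np.exp(log_sigma))
    return np.clip(p0 * f0 / f, 0.0, 1.0)

# Example: many putative causes, only a few of them true.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0.0, 1.0, 900), rng.normal(3.0, 1.0, 100)])
p0, mu, ls = fit_two_groups(z)
print(lfdr(np.array([0.5, 3.5]), p0, mu, ls))  # low LFDR => likely true cause
```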
Building on [P. Suppes, A probabilistic theory of causality, North-Holland, Amsterdam, 1970; MR0465774 (57 #5663)], the book emphasizes the importance for prediction not only of whether something is a cause but also of the strength of a cause. A cause is ε-significant if its causal strength, defined in terms of how much it changes the probability of its effect, is at least ε, where ε is some nonnegative number. Otherwise, it is ε-insignificant.
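Schematically, and hedging on the exact form of the strength measure (the notation below is mine, not the book’s), the definition can be rendered as an average probability difference over background contexts x, such as the other potential causes of the effect e:

```latex
% Illustrative notation, not verbatim from the book.
\varepsilon_{\mathrm{avg}}(c,e)
  \;=\; \frac{1}{|X|} \sum_{x \in X}
        \bigl[\, P(e \mid c \wedge x) - P(e \mid \neg c \wedge x) \,\bigr],
\qquad
c \text{ is } \varepsilon\text{-significant for } e
  \iff \lvert \varepsilon_{\mathrm{avg}}(c,e) \rvert \ge \varepsilon .
```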
The author poses an important problem, that of inferring whether a cause is ε-significant, and comes close to solving it. The solution attempted in Section 4.2 confuses causal significance (ε-significance) with statistical significance (an LFDR estimate below some small positive number α). This is by no means a fatal criticism of the approach, since it can be remedied in principle by defining a false discovery as a discovery of an ε-insignificant cause. This tests the null hypothesis that the cause is ε-insignificant for a specified value of ε rather than the book’s null hypothesis, which in effect asserts that the cause is ε-insignificant in the limit as ε → 0, i.e., ε-insignificant for all ε > 0. In the case of a specified value of ε, a cause should be considered ε-significant if the estimated LFDR is less than α, provided that the LFDR is defined in terms of the null hypothesis of ε-insignificance. The need to fill in the technical details and to answer more general questions arising from this distinction between causal significance and statistical significance opens up exciting opportunities for further research guided by insights from the literature on seeking substantive as well as statistical significance [see, e.g., M. A. van de Wiel and K. I. Kim, Biometrics 63 (2007), no. 3, 806–815; MR2395718].
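One hypothetical way to implement this remedy: shift the test statistic so that the null hypothesis becomes ε-insignificance rather than exactly zero strength, then reuse an LFDR estimator such as the one sketched earlier. The standard-error-based score and all names below are illustrative assumptions.

```python
# Hypothetical sketch of the remedy proposed in this review: test the null
# of eps-insignificance (|strength| <= eps) instead of zero strength.
import numpy as np

def eps_shifted_scores(strength_hat, se, eps):
    """Scores against H0: |strength| <= eps.

    At the boundary of this null (|strength| = eps) the score is zero, so
    feeding these scores to an LFDR estimator like the one sketched above
    yields LFDRs defined with respect to eps-insignificance."""
    return (np.abs(strength_hat) - eps) / se

def eps_significant(lfdr_estimates, alpha=0.05):
    """Declare a cause eps-significant when its estimated LFDR, computed
    under the eps-insignificance null, falls below alpha."""
    return np.asarray(lfdr_estimates) < alpha
```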

Reviewed by David R. Bickel

This review first appeared at Causality, Probability, and Time (Mathematical Reviews) and is used with permission from the American Mathematical Society.

Categories: empirical Bayes, reviews

Multivariate mode estimation

1 February 2014

Hsu, Chih-Yuan; Wu, Tiee-Jian
Efficient estimation of the mode of continuous multivariate data. (English summary)
Comput. Statist. Data Anal. 63 (2013), 148–159.
62F10 (62F12)

To estimate the mode of a unimodal multivariate distribution, the authors propose the following algorithm. First, the data are transformed to approximate multivariate normality by means of a transformation whose parameter is estimated by maximum likelihood jointly with the parameters of the multivariate normal distribution. Second, the inverse of that transformation is applied to the fitted multivariate normal density function, yielding an estimate of the probability density function on the space of the original data. Third, the point at which that estimated density achieves its maximum is taken as the estimate of the multivariate mode. The paper features a theorem establishing the weak consistency of the estimator under lognormality of the data.
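The following minimal sketch mirrors the three steps, using a coordinatewise Box–Cox transformation of positive data as a stand-in for the authors’ transformation family; this choice, and the estimation details, are assumptions for illustration rather than the paper’s method.

```python
# Minimal sketch of the three-step mode estimator, assuming positive data
# and a coordinatewise Box-Cox transformation (a stand-in for the paper's
# transformation family).
import numpy as np
from scipy import optimize

def boxcox(x, lam):
    """Box-Cox transform, applied columnwise with one lambda per column."""
    return np.where(np.abs(lam) < 1e-8, np.log(x), (x**lam - 1.0) / lam)

def neg_profile_loglik(lam, x):
    """Profile log-likelihood in lambda: the normal parameters are replaced
    by their MLEs (sample mean and covariance of the transformed data)."""
    y = boxcox(x, lam)
    n = y.shape[0]
    cov = np.cov(y, rowvar=False, bias=True)
    ll = -0.5 * n * np.linalg.slogdet(cov)[1]   # Gaussian term at the MLE
    ll += np.sum((lam - 1.0) * np.log(x))       # Jacobian of the transform
    return -ll

def mode_estimate(x):
    # Step 1: estimate the transformation jointly with the normal parameters.
    d = x.shape[1]
    lam = optimize.minimize(neg_profile_loglik, x0=np.ones(d), args=(x,)).x
    y = boxcox(x, lam)
    mean, cov = y.mean(axis=0), np.cov(y, rowvar=False, bias=True)
    inv_cov = np.linalg.inv(cov)
    # Steps 2-3: back-transform the fitted normal density to the original
    # scale and maximize it numerically over t > 0.
    def neg_density(t):
        yt = boxcox(t, lam)
        quad = (yt - mean) @ inv_cov @ (yt - mean)
        log_jac = np.sum((lam - 1.0) * np.log(t))
        return 0.5 * quad - log_jac  # negative log density, up to constants
    # Back-transformed normal mean as a starting point (assumes lam*mean+1 > 0).
    start = np.where(np.abs(lam) < 1e-8, np.exp(mean),
                     (lam * mean + 1.0)**(1.0 / lam))
    res = optimize.minimize(neg_density, x0=start, method="L-BFGS-B",
                            bounds=[(1e-9, None)] * d)
    return res.x
```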
The authors cite several papers indicating the need for such multivariate mode estimation in applications. They illustrate the practical use of their estimator by applying it to climatology and handwriting data sets.
Simulations indicate a large variety of distributions and dependence structures under which the proposed estimator performs substantially better than its competitors. An exception is the case of contamination with data from a distribution whose mode differs from the mode that is the target of inference.

Reviewed by David R. Bickel

This review first appeared at “Efficient estimation of the mode of continuous multivariate data” (Mathematical Reviews) and is used with permission from the American Mathematical Society.

Categories: reviews

Integrated likelihood in light of de Finetti

13 January 2014

Coletti, Giulianella; Scozzafava, Romano; Vantaggi, Barbara
Integrated likelihood in a finitely additive setting. (English summary) Symbolic and quantitative approaches to reasoning with uncertainty, 554–565, Lecture Notes in Comput. Sci., 5590, Springer, Berlin, 2009.
62A01 (62A99)

For an observed sample of data, the likelihood function specifies the probability or probability density of that observation as a function of the parameter value. Since each simple hypothesis corresponds to a single parameter value, the likelihood of any simple hypothesis is an uncontroversial function of the data and the model. However, there is no standard definition of the likelihood of a composite hypothesis, which instead corresponds to multiple parameter values. Such a definition could be useful not only for quantifying the strength of statistical evidence in favor of composite hypotheses that are faced in both science and law, but also for likelihood-based measures of corroboration and of explanatory power for epistemological research involving Popper’s critical rationalism or recent accounts of inference to the best explanation.
Interpreting the likelihood function under the coherence framework of de Finetti, this paper mathematically formulates the problem by defining the likelihood of a simple or composite hypothesis as a subjective probability of the observed data conditional on the truth of the hypothesis. In the probability theory of this framework, conditional probabilities given a hypothesis or event of probability zero are well defined, even for finite parameter sets. That differs from the familiar probability measures that Kolmogorov introduced for frequency-type probabilities, each of which, in the finite case, can assign zero probability mass to an event only if that event cannot occur. (The latter but not the former agrees in spirit with Cournot’s principle that an event of infinitesimally small probability is physically impossible.) Thus, in the de Finetti framework, the likelihood function assigns a conditional probability to each simple hypothesis, whether or not its probability is zero.
When the parameter set is finite, every coherent conditional probability of a sample of discrete data given a composite hypothesis is a weighted arithmetic mean of the conditional probabilities of the simple hypotheses that together constitute the composite hypothesis. In other words, the coherence constraint requires that the likelihood of a composite hypothesis be a linear combination of the likelihoods of its constituent simple hypotheses. Important special cases include the maximum and the minimum of the likelihood over the parameter set. They are made possible in the non-Kolmogorov framework by assigning zero probability to all of the simple hypotheses except those of maximum or minimum likelihood.
The main result of the paper extends this representation to infinite parameter sets: in general, the likelihood of a composite hypothesis is a mixture of the likelihoods of its component simple hypotheses.
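In symbols (the notation is mine, not the paper’s), the finite-case result and its extension can be summarized as follows:

```latex
% Finite case: for a composite hypothesis H \subseteq \Theta with \Theta
% finite, coherence forces the likelihood of H to be a weighted mean
P(x \mid H) \;=\; \sum_{\theta \in H} w_\theta \, P(x \mid \theta),
\qquad w_\theta \ge 0, \quad \sum_{\theta \in H} w_\theta = 1,
% with the maximum (respectively, minimum) of the likelihood over H
% recovered by weights concentrated on the maximizing (minimizing) simple
% hypotheses. The main result extends this to infinite \Theta:
P(x \mid H) \;=\; \int_H P(x \mid \theta) \, d\mu(\theta)
% for some finitely additive mixing measure \mu on H.
```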

{For the entire collection see MR2907743 (2012j:68012).}

Reviewed by David R. Bickel

This review first appeared at “Integrated likelihood in a finitely additive setting” (Mathematical Reviews) and is used with permission from the American Mathematical Society.