Archive

Archive for the ‘imprecise probability’ Category

Recent publications by David Bickel

1 May 2019

Recent preprints by David Bickel

1 April 2019

R functions for combining probabilities using game theory

15 November 2018

Inference after eliminating Bayesian models of excessive codelength

1 November 2017

A Bayesian approach to informing decision makers

23 September 2016

Understanding Uncertainty (by Lindley)—a review

1 October 2015

Lindley, Dennis V.
Understanding uncertainty.
Revised edition. Wiley Series in Probability and Statistics. John Wiley & Sons, Inc., Hoboken, NJ, 2014. xvi+393 pp. ISBN: 978-1-118-65012-7
62A99 (62C05 62C10)

In Understanding uncertainty, Dennis Lindley ably defends subjective Bayesianism, the thesis that decisions in the presence of uncertainty can only be guaranteed to cohere if made according to probabilities as degrees of someone’s beliefs. True to form, he excludes all other mathematical theories of modeling uncertainty, including subjective theories of imprecise probability that share the goal of coherent decision making [see M. C. M. Troffaes and G. de Cooman, Lower previsions, Wiley Ser. Probab. Stat., Wiley, Chichester, 2014; MR3222242].

In order to engage everyone interested in making better decisions in the presence of uncertainty, Lindley writes without the citations and cluttered notation of a research paper. His straightforward, disarming style advances the thesis that subjective probability saves uncertainty from getting lost in the fog of reasoning in natural-language arguments. A particularly convincing argument is that the reader who makes decisions in conflict with the strict Bayesian viewpoint will be vulnerable to a Dutch book comprising undesirable consequences regardless of the true state of the world (5.7). The axioms needed for the underlying theorem are confidently presented as self-evident.
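As a minimal illustration of what such a Dutch book looks like (a generic textbook example, not one taken from the book), the Python snippet below prices two $1 tickets at incoherent betting rates and shows that the bettor loses money in every state of the world; the quoted rates are hypothetical.

```python
# Minimal sketch of a Dutch book against incoherent betting rates (hypothetical numbers).
# The agent treats 0.60 as a fair price for a $1 ticket on event A *and* 0.60 for a
# $1 ticket on not-A, so the two "probabilities" sum to 1.2.
prices = {"A": 0.60, "not A": 0.60}

for true_state in ("A", "not A"):
    paid = sum(prices.values())      # the agent buys both tickets at the quoted prices
    received = 1.0                   # exactly one ticket pays $1, whichever state obtains
    print(f"state {true_state}: net gain = {received - paid:+.2f}")
# The agent loses $0.20 in both states: a sure loss, which coherent probabilities rule out.
```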

Like many strict Bayesians, Lindley makes no appeal to epistemological or psychological literature supporting the alignment of belief and probability. In fact, he dismisses studies indicating that actual human beliefs can deviate markedly from the requirements of strict Bayesianism, likening them to studies indicating that people make errors in arithmetic (2.5; 9.12).

The relentlessly pursued thesis is nuanced by the clarification that strict Bayesianism is not an inviolable recipe for automatic decisions but rather a box of tools that can only be used effectively when controlled by human judgment or “art” in modeling (11.7). For example, when Lindley intuitively finds that the prior distribution under his model conflicts with observations, he reframes its prior probabilities as conditional on the truth of the original model by crafting a larger model. Such ingenuity demonstrates that Bayesian probability calculations cannot shackle his actual beliefs. (This suggests that mechanically following the Dutch book argument to the point of absurdity might not discredit strict Bayesianism as decisively as thought.) Similarly, Frank Lad, called “the purest of the pure” [G. Shafer, J. Am. Stat. Assoc. 94 (1999), no. 446, 645–656 (pp. 648–649), doi:10.1080/01621459.1999.10474158] and the best-informed [D. V. Lindley, J. Royal Stat. Soc. Ser. D 49 (2000), no. 3, 293–337] of the advocates of this school, permits replacing a poorly predicting model with one that reflects “a new understanding”, an enlightenment that no algorithm can impart [F. Lad, Operational subjective statistical methods, Wiley Ser. Probab. Statist. Appl. Probab. Statist., Wiley, New York, 1996 (6.6.4); MR1421323 (98m:62009)]. Leonard Savage, a leading critic of non-Bayesian statistical methods, likewise admitted that he was “unable to formulate criteria for selecting these small worlds [in which strict Bayesianism applies] and indeed believe[d] that their selection may be a matter of judgment and experience about which it is impossible to enunciate complete and sharply defined general principles” [L. J. Savage, The foundations of statistics, Wiley, New York, 1954 (2.5); MR0063582 (16,147a)]. The Bayesian lumberjacks have evidently learned when to stop chopping and sharpen the axe. This recalls the importance of the skill of the scientist as handed down and developed within the guild of scientists and never quite articulated, let alone formalized [M. Polanyi, Personal knowledge: towards a post-critical philosophy, Univ. Chicago Press, Chicago, IL, 1962]. The explicit acknowledgement of the role of this tacit knowledge in science may serve as a warning against relying on statistical models as if they were not only useful but also right [see M. van der Laan, Amstat News 2015, no. 452, 29–30].

While the overall argument for strict Bayesianism will command the assent of many readers, some will wonder whether there are equally compelling counter-arguments that would explain why so few statisticians work under that viewpoint. That doubt will be largely offset by the considerable authority Lindley has earned as one of the preeminent developers of the statistics discipline as it is known today. His many enduring contributions to the field include two that shed light on the chasm between Bayesian and frequentist probabilities: (1) the presentation of what is known as “Lindley’s paradox” [D. V. Lindley, Biometrika 44 (1957), no. 1-2, 187–192, doi:10.1093/biomet/44.1-2.187] and (2) the specification of the conditions a scalar-parameter fiducial or confidence distribution must satisfy to be a Bayesian posterior distribution [D. V. Lindley, J. Royal Stat. Soc. Ser. B 20 (1958), 102–107; MR0095550 (20 #2052)].
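To give a rough numerical sense of the paradox (under an assumed normal model with a conventional N(0, τ²) prior on the alternative, not an example from the book or the 1957 paper), the sketch below holds the two-sided p-value near 0.05 while the sample size grows; the posterior probability of the point null then climbs toward 1.

```python
# Sketch of Lindley's paradox under an assumed setup: x_1..x_n ~ N(theta, 1),
# H0: theta = 0 versus H1: theta ~ N(0, tau^2), with prior odds of 1.
import math

def posterior_prob_null(z, n, tau=1.0):
    ratio = n * tau**2                           # n * tau^2 / sigma^2 with sigma = 1
    bf01 = math.sqrt(1 + ratio) * math.exp(-0.5 * z**2 * ratio / (1 + ratio))
    return bf01 / (1 + bf01)                     # posterior probability of H0 at equal prior odds

for n in (10, 1_000, 100_000):
    print(n, round(posterior_prob_null(z=1.96, n=n), 3))
# With z fixed at 1.96 (two-sided p ~ 0.05), the posterior probability of H0
# rises from about 0.37 at n = 10 to roughly 0.98 at n = 100,000.
```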

Treading into unresolved controversies well outside his discipline, Lindley shares his simple philosophy of science and offers his opinions on how to apply Bayesianism to law, politics, and religion. He invites his readers to share his hope that if people communicated their beliefs and interests in strict Bayesian terms, they would quarrel less (1.7; 10.7), especially if they also adopted his advice to consider their own religious beliefs to be uncertain (1.4). Lindley even presents the teaching that Jesus is the Son of God as having a probability equal to each reader’s degree of belief in its truth but stops short of assessing the utilities needed to place Pascal’s Wager (1.2).

Graduate students in statistics will benefit from Lindley’s introductions to his paradox, explained in Section 14.4 to discredit frequentist hypothesis testing, and the conglomerable rule in Section 12.9. These friendly and concise introductions could effectively supplement a textbook such as [J. B. Kadane, Principles of uncertainty, Texts Statist. Sci. Ser., CRC Press, Boca Raton, FL, 2011; MR2799022 (2012g:62001)], a much more detailed appeal for strict Bayesianism.

On the other hand, simpler works such as [J. S. Hammond, R. L. Keeney and H. Raiffa, Smart choices: a practical guide to making better decisions, Harvard Bus. School Press, Boston, MA, 1999] may better serve as stand-alone guides to mundane decision making. Bridging the logical gap between decision making rules of thumb and mathematical statistics, Understanding uncertainty excels as a straightforward and sensible defense of the strict Bayesian viewpoint. Appreciating Lindley’s stance in all its theoretical simplicity and pragmatic pliability is essential for grasping both the recent history of statistics and the more complex versions of Bayesianism now used by statisticians, scientists, philosophers, and economists.

{For the original edition see [D. V. Lindley, Understanding uncertainty, Wiley, Hoboken, NJ, 2006].}

Reviewed by David R. Bickel

This review first appeared at Understanding Uncertainty (Mathematical Reviews) and is used with permission from the American Mathematical Society.

Inference after checking the prior & sampling model

1 September 2015

D. R. Bickel, “Inference after checking multiple Bayesian models for data conflict and applications to mitigating the influence of rejected priors,” International Journal of Approximate Reasoning 66, 53–72 (2015). Simple explanation | Published version | 2014 preprint | Slides


The proposed procedure combines Bayesian model checking with robust Bayes acts to guide inference whether or not the model is found to be inadequate:

  1. The first stage of the procedure checks each model within a large class of models to determine which models are in conflict with the data and which are adequate for purposes of data analysis.
  2. The second stage of the procedure applies distribution combination or decision rules developed for imprecise probability.

This proposed procedure is illustrated by the application of a class of hierarchical models to a simple data set.

The link Simple explanation was added on 6 June 2017.
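A minimal Python sketch of a two-stage analysis in this spirit follows; the conjugate normal models, the prior-predictive check used as the stage-one conflict test, the 5% threshold, and the lower/upper-probability summary in stage two are all stand-in assumptions for illustration, not the procedure of the paper.

```python
# Illustrative sketch only (not the paper's procedure): check a small class of
# conjugate normal models against the data, discard those in conflict, then
# report lower and upper posterior probabilities over the retained models.
import math
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=20)        # toy data, sigma = 1 known

# Hypothetical class of priors on the mean: theta ~ N(m0, s0^2)
models = [{"m0": 0.0, "s0": 0.5}, {"m0": 0.0, "s0": 2.0}, {"m0": 5.0, "s0": 0.5}]

def prior_predictive_pvalue(x, m0, s0, n_sim=20_000):
    """Stage 1 stand-in check: two-sided prior-predictive p-value for the sample mean."""
    n = len(x)
    theta = rng.normal(m0, s0, n_sim)
    sim_means = rng.normal(theta, 1.0 / math.sqrt(n))
    obs = x.mean()
    return 2 * min((sim_means >= obs).mean(), (sim_means <= obs).mean())

def posterior_prob_positive(x, m0, s0):
    """Posterior probability that theta > 0 under the conjugate normal update."""
    n = len(x)
    post_var = 1.0 / (1.0 / s0**2 + n)
    post_mean = post_var * (m0 / s0**2 + n * x.mean())
    return 0.5 * (1 + math.erf(post_mean / math.sqrt(2 * post_var)))

# Stage 1: retain only models not found to conflict with the data (threshold is arbitrary).
adequate = [m for m in models if prior_predictive_pvalue(data, **m) > 0.05]

# Stage 2: summarize the retained posteriors as lower and upper probabilities.
probs = [posterior_prob_positive(data, **m) for m in adequate]
print("retained models:", len(adequate), "lower/upper probability:", min(probs), max(probs))
```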

Maximum entropy over a set of posteriors

10 August 2015

D. R. Bickel, “Blending Bayesian and frequentist methods according to the precision of prior information with applications to hypothesis testing,” Statistical Methods & Applications 24, 523–546 (2015). Published article | 2012 preprint | 2011 preprint | Slides | Simple explanation


This framework of statistical inference facilitates the development of new methodology to bridge the gap between the frequentist and Bayesian theories. As an example, a simple and practical method for combining p-values with a set of possible posterior probabilities is provided.

In this general approach, Bayesian inference is used when the prior distribution is known, frequentist inference is used when nothing is known about the prior, and both types of inference are blended according to game theory when the prior is known to be a member of some set. (The robust Bayes framework represents knowledge about a prior in terms of a set of possible priors.) If the benchmark posterior that corresponds to frequentist inference lies within the set of Bayesian posteriors derived from the set of priors, then the benchmark posterior is used for inference. Otherwise, the posterior within that set that minimizes the cross entropy to the benchmark posterior is used for inference.
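For a single binary hypothesis this reduces to something easy to sketch in code. In the Python toy below the set of Bayesian posterior probabilities is assumed to be an interval and the benchmark is a placeholder number, neither taken from the paper; since the cross entropy between Bernoulli distributions is minimized over the interval at the point nearest the benchmark, the blend amounts to clamping the benchmark to the interval.

```python
import math

def cross_entropy(benchmark, candidate):
    """Cross entropy between Bernoulli(benchmark) and Bernoulli(candidate)."""
    return -(benchmark * math.log(candidate) + (1 - benchmark) * math.log(1 - candidate))

def blended_posterior(benchmark, lo, hi):
    """Blend for one binary hypothesis when the set of Bayesian posterior
    probabilities is the interval [lo, hi]: use the benchmark if it lies in the
    set, otherwise the member of the set with minimal cross entropy to it
    (for Bernoulli distributions, the nearer endpoint)."""
    if lo <= benchmark <= hi:
        return benchmark
    return lo if benchmark < lo else hi

# Toy numbers, purely illustrative: a benchmark posterior probability of the null
# hypothesis derived from frequentist inference, and an interval of posteriors
# induced by a set of priors.
print(blended_posterior(0.08, 0.20, 0.50))   # 0.20: benchmark lies below the set
print(blended_posterior(0.30, 0.20, 0.50))   # 0.30: benchmark lies inside the set
assert cross_entropy(0.08, 0.20) < cross_entropy(0.08, 0.35)  # nearer endpoint wins
```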

Assessing multiple models

1 June 2014

Bayes/non-Bayes blended inference

5 October 2012

Updated with a new multiple comparison procedure and applications on 30 June 2012 and with slides for a presentation on 5 October 2012:

D. R. Bickel, “Blending Bayesian and frequentist methods according to the precision of prior information with applications to hypothesis testing,” Working Paper, University of Ottawa, deposited in uO Research at http://hdl.handle.net/10393/23124 (2012). 2012 preprint | 2011 preprint | Slides

This framework of statistical inference facilitates the development of new methodology to bridge the gap between the frequentist and Bayesian theories. As an example, a simple and practical method for combining p-values with a set of possible posterior probabilities is provided.

In this new approach to statistics, Bayesian inference is used when the prior distribution is known, frequentist inference is used when nothing is known about the prior, and both types of inference are blended according to game theory when the prior is known to be a member of some set. (The robust Bayes framework represents knowledge about a prior in terms of a set of possible priors.) If the benchmark posterior that corresponds to frequentist inference lies within the set of Bayesian posteriors derived from the set of priors, then the benchmark posterior is used for inference. Otherwise, the posterior within that set that is closest to the benchmark posterior is used for inference.