## Maximum entropy over a set of posteriors

D. R. Bickel, "Blending Bayesian and frequentist methods according to the precision of prior information with applications to hypothesis testing," *Statistical Methods & Applications* **24**, 523–546 (2015).

This framework of statistical inference facilitates the development of new methodology to bridge the gap between the frequentist and Bayesian theories. As an example, a simple and practical method for combining *p*-values with a set of possible posterior probabilities is provided.

In this general approach, Bayesian inference is used when the prior distribution is known, frequentist inference is used when nothing is known about the prior, and both types of inference are blended according to game theory when the prior is known to be a member of some set. (The robust Bayes framework represents knowledge about a prior in terms of a set of possible priors.) If the benchmark posterior that corresponds to frequentist inference lies within the set of Bayesian posteriors derived from the set of priors, then the benchmark posterior is used for inference. Otherwise, the posterior within that set that minimizes the cross entropy to the benchmark posterior is used for inference.
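The decision rule in the paragraph above can be sketched in a few lines. In this hedged illustration, the set of posteriors is assumed to be an interval of posterior probabilities of a hypothesis, so each posterior is a Bernoulli distribution and the cross entropy has a closed form; the function name and interval representation are hypothetical, not from the paper.

```python
import math


def blended_posterior(benchmark, post_lo, post_hi):
    """Blend frequentist and Bayesian inference over a set of posteriors.

    benchmark: the benchmark posterior probability corresponding to
               frequentist inference (e.g., derived from a p-value).
    [post_lo, post_hi]: the interval of Bayesian posterior probabilities
               induced by the set of priors (an illustrative assumption).

    If the benchmark lies within the set, it is used directly; otherwise
    the member of the set minimizing the cross entropy to the benchmark
    is used, which for an interval is the nearest endpoint.
    """
    def cross_entropy(q, p):
        # Cross entropy between Bernoulli(q) and Bernoulli(p):
        # H(q, p) = -q log p - (1 - q) log(1 - p),
        # convex in p and minimized at p = q.
        return -(q * math.log(p) + (1 - q) * math.log(1 - p))

    if post_lo <= benchmark <= post_hi:
        return benchmark  # frequentist benchmark is an admissible posterior
    # Otherwise pick the endpoint closest to the benchmark in cross entropy.
    return min((post_lo, post_hi), key=lambda p: cross_entropy(benchmark, p))
```

For example, with a benchmark of 0.2 and posterior set [0.1, 0.3], the benchmark itself is returned; with a benchmark of 0.04, the lower endpoint 0.1 minimizes the cross entropy and is used instead.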
