Archive for the ‘maximum entropy’ Category

Coherent inference after checking a prior

7 January 2016 Leave a comment

Inference after checking the prior & sampling model

1 September 2015 Leave a comment

D. R. Bickel, “Inference after checking multiple Bayesian models for data conflict and applications to mitigating the influence of rejected priors,” International Journal of Approximate Reasoning 66, 53–72 (2015). Simple explanation | Published version | 2014 preprint | Slides


The proposed procedure combines Bayesian model checking with robust Bayes acts to guide inference whether or not the model is found to be inadequate:

  1. The first stage of the procedure checks each model within a large class of models to determine which models are in conflict with the data and which are adequate for purposes of data analysis.
  2. The second stage of the procedure applies distribution combination or decision rules developed for imprecise probability.

This proposed procedure is illustrated by the application of a class of hierarchical models to a simple data set.
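As a rough sketch of the two-stage idea (not the paper's exact procedure), the toy Python below checks several candidate priors against the data with a prior predictive p-value and then reports the range of posterior probabilities over the models that survive, in the spirit of imprecise probability. The N(θ, 1) sampling model, the candidate prior means, and the 0.05 adequacy threshold are all illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=20)          # toy data set

def prior_predictive_pvalue(mu0, x, tau=1.0, n_sim=5000):
    """Stage 1 (sketch): two-sided prior predictive check on the sample mean
    under a N(mu0, tau^2) prior and a N(theta, 1) sampling model."""
    n = len(x)
    # Prior predictive distribution of the sample mean: N(mu0, tau^2 + 1/n).
    sims = rng.normal(mu0, sqrt(tau**2 + 1.0 / n), size=n_sim)
    return float(np.mean(np.abs(sims - mu0) >= abs(x.mean() - mu0)))

def posterior_prob_positive(mu0, x, tau=1.0):
    """Posterior P(theta > 0) under the same conjugate normal model."""
    n = len(x)
    v = 1.0 / (1.0 / tau**2 + n)              # posterior variance
    m = v * (mu0 / tau**2 + n * x.mean())     # posterior mean
    return 0.5 * (1.0 + erf(m / sqrt(2.0 * v)))

prior_means = [-5.0, 0.0, 1.0, 5.0]           # candidate models (assumed)
kept = [m for m in prior_means
        if prior_predictive_pvalue(m, data) >= 0.05]   # adequate models
probs = [posterior_prob_positive(m, data) for m in kept]
print("adequate prior means:", kept)
print("P(theta > 0) ranges over [%.3f, %.3f]" % (min(probs), max(probs)))
```

Stage 2 here simply reports the interval of posterior probabilities over the adequate models; the paper's decision rules for imprecise probability are more refined than this min–max summary.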

The link Simple explanation was added on 6 June 2017.

Maximum entropy over a set of posteriors

10 August 2015 Leave a comment

D. R. Bickel, “Blending Bayesian and frequentist methods according to the precision of prior information with applications to hypothesis testing,” Statistical Methods & Applications 24, 523–546 (2015). Published article | 2012 preprint | 2011 preprint | Slides | Simple explanation


This framework of statistical inference facilitates the development of new methodology to bridge the gap between the frequentist and Bayesian theories. As an example, a simple and practical method for combining p-values with a set of possible posterior probabilities is provided.

In this general approach, Bayesian inference is used when the prior distribution is known, frequentist inference is used when nothing is known about the prior, and both types of inference are blended according to game theory when the prior is known to be a member of some set. (The robust Bayes framework represents knowledge about a prior in terms of a set of possible priors.) If the benchmark posterior that corresponds to frequentist inference lies within the set of Bayesian posteriors derived from the set of priors, then the benchmark posterior is used for inference. Otherwise, the posterior within that set that minimizes the cross entropy to the benchmark posterior is used for inference.
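A minimal sketch of that selection rule for discrete posteriors, represented as probability vectors: if the benchmark lies in the candidate set it is returned, and otherwise the candidate with the smallest relative entropy to the benchmark is chosen. Reading "cross entropy to the benchmark" as the Kullback–Leibler divergence D(q ‖ benchmark) is an assumption here, as are the toy vectors.

```python
import numpy as np

def blended_posterior(benchmark, candidates):
    """Return the benchmark posterior if it is among the candidates;
    otherwise the candidate minimizing the relative entropy
    D(q || benchmark) = sum_i q_i * log(q_i / benchmark_i)."""
    p = np.asarray(benchmark, dtype=float)
    cands = [np.asarray(q, dtype=float) for q in candidates]
    for q in cands:
        if np.allclose(q, p):
            return p                      # benchmark lies in the set
    def kl(q):
        mask = q > 0                      # treat 0 * log 0 as 0
        return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))
    return min(cands, key=kl)

benchmark = np.array([0.5, 0.5])          # e.g. derived from a p-value (toy)
posteriors = [np.array([0.9, 0.1]),       # posteriors from a set of priors (toy)
              np.array([0.6, 0.4])]
print(blended_posterior(benchmark, posteriors))
```

With these toy numbers the rule selects [0.6, 0.4], the candidate nearer the benchmark in the Kullback–Leibler sense.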

Assessing multiple models

1 June 2014 Comments off

Bayes/non-Bayes blended inference

5 October 2012 Leave a comment

Updated with a new multiple comparison procedure and applications on 30 June 2012 and with slides for a presentation on 5 October 2012:

D. R. Bickel, “Blending Bayesian and frequentist methods according to the precision of prior information with applications to hypothesis testing,” Working Paper, University of Ottawa, deposited in uO Research at http://hdl.handle.net/10393/23124 (2012). 2012 preprint | 2011 preprint | Slides

This framework of statistical inference facilitates the development of new methodology to bridge the gap between the frequentist and Bayesian theories. As an example, a simple and practical method for combining p-values with a set of possible posterior probabilities is provided.

In this new approach to statistics, Bayesian inference is used when the prior distribution is known, frequentist inference is used when nothing is known about the prior, and both types of inference are blended according to game theory when the prior is known to be a member of some set. (The robust Bayes framework represents knowledge about a prior in terms of a set of possible priors.) If the benchmark posterior that corresponds to frequentist inference lies within the set of Bayesian posteriors derived from the set of priors, then the benchmark posterior is used for inference. Otherwise, the posterior within that set that is closest to the benchmark posterior is used for inference.

How to use priors with caution

13 April 2012 Leave a comment

D. R. Bickel, “Controlling the degree of caution in statistical inference with the Bayesian and frequentist approaches as opposite extremes,” Electronic Journal of Statistics 6, 686–709 (2012). Full text (open access) | 2011 preprint


This paper reports a novel probability-interval framework for combining strengths of frequentist and Bayesian methods on the basis of game-theoretic first principles. It enables data analysis on the basis of the posterior distribution that is a blend between a set of plausible Bayesian posterior distributions and a parameter distribution that represents an alternative method of data analysis. This paper’s framework of statistical inference is intended to facilitate the development of new methods to bridge the gap between the frequentist and Bayesian approaches. Four concrete examples illustrate how such intermediate methods can leverage strengths of the two extreme approaches.

Degree of caution in inference

26 September 2011 Leave a comment

D. R. Bickel, “Controlling the degree of caution in statistical inference with the Bayesian and frequentist approaches as opposite extremes,” Technical Report, Ottawa Institute of Systems Biology, arXiv:1109.5278 (2011). Full preprint

This paper’s framework of statistical inference is intended to facilitate the development of new methods to bridge the gap between the frequentist and Bayesian approaches. Three concrete examples illustrate how such intermediate methods can leverage strengths of the two extreme approaches.