Archive
An idealized Cromwell’s rule
Cromwell’s principle idealized under the theory of large deviations
Seminar, Statistics and Probability Research Group, University of Ottawa
Ottawa, Ontario
April 27, 2018
David R. Bickel
University of Ottawa
Abstract. Cromwell’s principle requires that the prior probability that one’s assumptions are incorrect be greater than 0. That requirement is relevant to Bayesian model checking, since diagnostics often reveal that prior distributions require revision, which would be impossible under Bayes’s theorem if those priors were 100% probable. The idealized Cromwell’s principle instead makes the prior probability of incorrect assumptions arbitrarily small. Enforcing that principle under the theory of large deviations leads, in wide generality, to revising Bayesian models by maximum entropy.
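The maximum-entropy revision step can be sketched with a discrete toy example: minimum-relative-entropy updating of a prior subject to a moment constraint yields an exponential tilt of the prior. This is a minimal illustration under my own assumptions (a finite support, a single mean constraint, and bisection for the tilt parameter), not the construction in the talk itself:

```python
import numpy as np

def maxent_revision(p, x, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Revise a prior p on finite support x by minimizing relative entropy
    to p subject to the constraint E_q[X] = target_mean.  The solution is
    the exponential tilt q_i proportional to p_i * exp(lam * x_i); lam is
    found by bisection, since the tilted mean is increasing in lam."""
    p, x = np.asarray(p, float), np.asarray(x, float)

    def tilted(lam):
        w = p * np.exp(lam * (x - x.max()))  # shift exponent for stability
        return w / w.sum()

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if tilted(mid) @ x < target_mean:
            lo = mid
        else:
            hi = mid
    return tilted(0.5 * (lo + hi))
```

For instance, a uniform prior on {0, 1, 2, 3} revised to satisfy a diagnostic constraint of mean 2 shifts mass toward the larger support points while staying as close as possible to uniform in the relative-entropy sense.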
Inference after eliminating Bayesian models of excessive codelength
“The maximum-entropy and minimax redundancy distribution classes of sufficiently small codelength”
10th Workshop on Information Theoretic Methods in Science and Engineering
Paris, France
September 11, 2017
David R. Bickel
University of Ottawa
Inference after eliminating Bayesian models of insufficient evidence
“Inference under the entropy-maximizing Bayesian model of sufficient evidence”
The Third International Conference on Mathematical and Computational Medicine
Columbus, Ohio
18 May 2016
David R. Bickel
A Bayesian approach to informing decision makers
D. R. Bickel, “Reporting Bayes factors or probabilities to decision makers of unknown loss functions,” Working Paper, University of Ottawa, deposited in uO Research at http://hdl.handle.net/10393/35185 (2016). 2016 preprint | slides (“A Bayesian approach to informing decision makers: Comparisons to minimizing relative entropy,” WITMSE 2016, Helsinki)
False discovery rates are misleadingly low
D. R. Bickel, “Correcting false discovery rates for their bias toward false positives,” Working Paper, University of Ottawa, deposited in uO Research at https://goo.gl/GcUjJe (2016). 2016 preprint | Slides: CFDR and RFDR for SSC 2017 & CFDR and RFDR for ICMCM 2018
12 June 2017: URL updated and slides added
6 December 2018: more slides added
Inference after checking the prior & sampling model
D. R. Bickel, “Inference after checking multiple Bayesian models for data conflict and applications to mitigating the influence of rejected priors,” International Journal of Approximate Reasoning 66, 53–72 (2015). Simple explanation | Published version | 2014 preprint | Slides
The proposed procedure combines Bayesian model checking with robust Bayes acts to guide inference whether or not any of the models is found to be inadequate:
- The first stage of the procedure checks each model within a large class of models to determine which models are in conflict with the data and which are adequate for purposes of data analysis.
- The second stage of the procedure applies distribution combination or decision rules developed for imprecise probability.
This proposed procedure is illustrated by the application of a class of hierarchical models to a simple data set.
The link Simple explanation was added on 6 June 2017.
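The two-stage procedure can be sketched for the simplest conjugate setting, a normal mean with known unit variance. The screening statistic, threshold, and interval summary below are my own illustrative choices, not the paper's exact construction:

```python
import numpy as np
from math import erfc, sqrt

def norm_sf(z):
    # survival function of the standard normal via the complementary error function
    return 0.5 * erfc(z / sqrt(2.0))

def check_then_combine(data, priors, alpha=0.05):
    """Stage 1: screen each conjugate normal prior (m0, s0) by a
    prior-predictive p-value for the sample mean.  Stage 2: summarize the
    retained models' posterior means as an interval, in the spirit of
    imprecise probability.  Returns None if every model is rejected."""
    n, ybar = len(data), float(np.mean(data))
    retained = []
    for m0, s0 in priors:
        sd = sqrt(s0 ** 2 + 1.0 / n)       # prior-predictive sd of ybar
        pval = 2 * norm_sf(abs(ybar - m0) / sd)
        if pval >= alpha:                  # model not in conflict with data
            post_mean = (m0 / s0 ** 2 + n * ybar) / (1 / s0 ** 2 + n)
            retained.append(post_mean)
    return (min(retained), max(retained)) if retained else None
```

With data centered near zero, a prior such as (10, 0.1) is rejected at stage 1, so the reported interval of posterior means reflects only the models adequate for the data.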
Maximum entropy over a set of posteriors
D. R. Bickel, “Blending Bayesian and frequentist methods according to the precision of prior information with applications to hypothesis testing,” Statistical Methods & Applications 24, 523–546 (2015). Published article | 2012 preprint | 2011 preprint | Slides | Simple explanation
This framework of statistical inference facilitates the development of new methodology to bridge the gap between the frequentist and Bayesian theories. As an example, a simple and practical method for combining p-values with a set of possible posterior probabilities is provided.
In this general approach, Bayesian inference is used when the prior distribution is known, frequentist inference is used when nothing is known about the prior, and both types of inference are blended according to game theory when the prior is known to be a member of some set. (The robust Bayes framework represents knowledge about a prior in terms of a set of possible priors.) If the benchmark posterior that corresponds to frequentist inference lies within the set of Bayesian posteriors derived from the set of priors, then the benchmark posterior is used for inference. Otherwise, the posterior within that set that minimizes the cross entropy to the benchmark posterior is used for inference.
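For a single hypothesis, the rule in the last sentence has a simple form when the set of Bayesian posteriors is an interval of hypothesis probabilities. Reading the divergence as the Bernoulli relative entropy (my assumption; the interval form and function name are also mine), the minimizer is the benchmark clipped to the interval:

```python
def blended_posterior(benchmark, lo, hi):
    """Blend a frequentist benchmark posterior probability with a
    robust-Bayes set of posteriors, assumed here to be the interval
    [lo, hi] of hypothesis probabilities.  The Bernoulli relative entropy
        d(q, b) = q*log(q/b) + (1-q)*log((1-q)/(1-b))
    is convex in q with its minimum at q = b, so the set member closest to
    the benchmark b in this divergence is b clipped to the interval."""
    return min(max(benchmark, lo), hi)
```

If the benchmark lies inside the set it is used unchanged, matching the prose above; otherwise the nearest endpoint of the interval is used.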
Assessing multiple models
D. R. Bickel, “Inference after checking multiple Bayesian models for data conflict,” Working Paper, University of Ottawa, deposited in uO Research at http://hdl.handle.net/1039/31135 (2014). 2014 preprint | Slides
MLE of the local FDR
Y. Yang, F. A. Aghababazadeh, and D. R. Bickel, “Parametric estimation of the local false discovery rate for identifying genetic associations,” IEEE/ACM Transactions on Computational Biology and Bioinformatics 10, 98–108 (2013). 2010 version | Slides