Archive
Confidence levels as degrees of belief
D. R. Bickel, “A frequentist framework of inductive reasoning,” Sankhya A 74, 141-169 (2013). published version | 2009 version | relationship to a working paper | simple explanation (added 17 July 2017)
A confidence measure is a parameter distribution that encodes all confidence intervals for a given data set, model, and pivot. This article establishes some properties of the confidence measure that commend it as a viable alternative to the Bayesian posterior distribution.
Confidence (correct frequentist coverage) and coherence (compliance with Ramsey-type restrictions on rational belief) are both presented as desirable properties. The only distributions on a scalar parameter space that have both properties are confidence measures.
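As a concrete illustration of how a pivot encodes all confidence intervals in one distribution, consider the textbook normal-mean model with known standard deviation (this is a generic example, not a construction taken from the paper): the pivot Z = √n(x̄ − θ)/σ yields a confidence measure whose quantiles reproduce every central confidence interval.

```python
from statistics import NormalDist

def confidence_cdf(theta, xbar, sigma, n):
    """CDF of the confidence measure for a normal mean with known sigma,
    derived from the pivot Z = sqrt(n) * (xbar - theta) / sigma."""
    return NormalDist().cdf((theta - xbar) * n ** 0.5 / sigma)

def confidence_quantile(p, xbar, sigma, n):
    """Inverse of confidence_cdf: the p-quantile of the confidence measure."""
    return xbar + NormalDist().inv_cdf(p) * sigma / n ** 0.5

# Every central (1 - alpha) confidence interval is a pair of quantiles
# of the same parameter distribution (here with illustrative numbers):
xbar, sigma, n, alpha = 1.2, 2.0, 25, 0.05
lo = confidence_quantile(alpha / 2, xbar, sigma, n)
hi = confidence_quantile(1 - alpha / 2, xbar, sigma, n)
```

The point of the example is that `confidence_cdf` is a single distribution on the parameter space, yet it recovers the entire family of confidence intervals at once.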
Local FDR estimation for low-dimensional data
M. Padilla and D. R. Bickel, “Estimators of the local false discovery rate designed for small numbers of tests,” Statistical Applications in Genetics and Molecular Biology 11 (5), art. 4 (2012). Full article | 2010 & 2012 preprints
Confidence + coherence = fiducial shrinkage
D. R. Bickel, “A prior-free framework of coherent inference and its derivation of simple shrinkage estimators,” Working Paper, University of Ottawa, deposited in uO Research at http://hdl.handle.net/10393/23093 (2012). 2012 preprint
This paper proposes a new method of shrinking point and interval estimates on the basis of fiducial inference. Since problems with the interpretation of fiducial probability have prevented its widespread use, this manuscript first places fiducial inference within a general framework that has Bayesian and frequentist foundations.
Confidence-based decision theory
D. R. Bickel, “Coherent frequentism: A decision theory based on confidence sets,” Communications in Statistics – Theory and Methods 41, 1478-1496 (2012). Full article (open access) | 2009 version | Simple explanation (link added 27 June 2018)
To combine the self-consistency of Bayesian statistics with the objectivity of frequentist statistics, this paper formulates a framework of inference for developing novel statistical methods. The framework is based on a confidence posterior, a parameter probability distribution that does not require any prior distribution. While the Bayesian posterior is defined in terms of a conditional distribution given the observed data, the confidence posterior is instead defined such that the probability that the parameter value lies in any fixed subset of parameter space, given the observed data, is equal to the coverage rate of the corresponding confidence interval. Inferences based on the confidence posterior are reliable in the sense that the certainty level of a composite hypothesis is a weakly consistent estimate of the 0-1 indicator of hypothesis truth. At the same time, the confidence posterior is as non-contradictory as the Bayesian posterior since both satisfy the same coherence axioms. Using the theory of coherent upper and lower probabilities, the confidence posterior is generalized for situations in which no approximate or exact confidence set is available. Examples of hypothesis testing and estimation illustrate the range of applications of the proposed framework.
Additional summaries appear in the abstract and in Section 1.3 of the paper.
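The weak-consistency property can be checked numerically in the simplest setting. Under the confidence posterior for a normal mean with known σ (a textbook model, not the paper's general framework), the certainty level of the composite hypothesis θ ≤ 0 is Φ(−√n x̄/σ), which tends to the 0-1 truth indicator as the sample size grows:

```python
import random
from statistics import NormalDist

random.seed(0)

def certainty_theta_le_0(xbar, sigma, n):
    """Confidence-posterior probability of the composite hypothesis theta <= 0
    in the normal-mean model with known sigma."""
    return NormalDist().cdf((0.0 - xbar) * n ** 0.5 / sigma)

theta_true, sigma = 0.5, 1.0  # the hypothesis theta <= 0 is false here
levels = []
for n in (10, 100, 1000):
    xbar = sum(random.gauss(theta_true, sigma) for _ in range(n)) / n
    levels.append(certainty_theta_le_0(xbar, sigma, n))
# levels shrinks toward 0, the indicator of the (false) hypothesis
```

With a true mean above zero, the certainty level collapses toward 0; with a true mean below zero it would rise toward 1, matching the consistency claim.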
How to use priors with caution
D. R. Bickel, “Controlling the degree of caution in statistical inference with the Bayesian and frequentist approaches as opposite extremes,” Electronic Journal of Statistics 6, 686-709 (2012). Full text (open access) | 2011 preprint

This paper reports a novel probability-interval framework for combining strengths of frequentist and Bayesian methods on the basis of game-theoretic first principles. It enables data analysis based on a posterior distribution that blends a set of plausible Bayesian posterior distributions with a parameter distribution representing an alternative method of data analysis. This paper’s framework of statistical inference is intended to facilitate the development of new methods to bridge the gap between the frequentist and Bayesian approaches. Four concrete examples illustrate how such intermediate methods can leverage strengths of the two extreme approaches.
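One way to picture such a blend is a simple linear interpolation between the envelope of a set of plausible Bayesian posterior probabilities and the probability assigned by an alternative method. This is only a toy version with a hypothetical weight parameter, not the paper's game-theoretic rule:

```python
def blended_interval(bayes_probs, alt_prob, weight):
    """Toy linear blend between the envelope (min, max) of a set of
    plausible Bayesian posterior probabilities of a hypothesis and a
    single alternative probability.  weight = 0 returns the full
    Bayesian envelope; weight = 1 collapses to the alternative."""
    lo = (1 - weight) * min(bayes_probs) + weight * alt_prob
    hi = (1 - weight) * max(bayes_probs) + weight * alt_prob
    return lo, hi

# Three plausible priors give posterior probabilities 0.2, 0.3, 0.45
# for a hypothesis; the alternative method assigns 0.5 (all numbers
# illustrative):
interval = blended_interval([0.2, 0.3, 0.45], 0.5, 0.5)
```

Intermediate weights yield the kind of intermediate methods the paper describes: narrower than the full set of Bayesian answers, but not committed to a single one.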
Effect-size estimates from hypothesis probabilities
D. R. Bickel, “Empirical Bayes interval estimates that are conditionally equal to unadjusted confidence intervals or to default prior credibility intervals,” Statistical Applications in Genetics and Molecular Biology 11 (3), art. 7 (2012). Full article | 2010 preprint
The method contributed in this paper adjusts confidence intervals in multiple-comparison problems according to the estimated local false discovery rate. This shrinkage method performs substantially better than standard confidence intervals when the data are independent across comparisons. A special case of these interval estimates is the posterior median, which provides an improved method of ranking biological features such as genes, proteins, or genetic variants. The resulting ranks of features lead to better prioritization of which features to investigate further.
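The flavor of LFDR-based shrinkage can be conveyed with a toy two-groups posterior: a point mass at zero with weight equal to the estimated local false discovery rate, plus a normal component around the unadjusted estimate. The function names and numbers below are illustrative, not the paper's exact construction:

```python
from statistics import NormalDist

def mixture_cdf(t, est, se, lfdr):
    """CDF of a toy posterior: point mass at 0 with weight lfdr,
    plus N(est, se^2) with weight 1 - lfdr."""
    return (1 - lfdr) * NormalDist(est, se).cdf(t) + lfdr * (1.0 if t >= 0 else 0.0)

def mixture_quantile(p, est, se, lfdr, lo=-50.0, hi=50.0):
    """p-quantile of the mixture, found by bisection on the
    nondecreasing CDF."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if mixture_cdf(mid, est, se, lfdr) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Shrunken 95% interval and posterior median for one feature with a
# hypothetical effect estimate of 2.0 (SE 1.0) and estimated LFDR 0.6:
est, se, lfdr = 2.0, 1.0, 0.6
interval = (mixture_quantile(0.025, est, se, lfdr),
            mixture_quantile(0.975, est, se, lfdr))
median = mixture_quantile(0.5, est, se, lfdr)
```

With a high estimated LFDR, the interval and the posterior median are pulled toward zero relative to the unadjusted confidence interval, and the medians across features supply the improved ranking mentioned above.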
Combining inferences from different methods
D. R. Bickel, “Resolving conflicts between statistical methods by probability combination: Application to empirical Bayes analyses of genomic data,” Technical Report, Ottawa Institute of Systems Biology, arXiv:1111.6174 (2011). Full preprint
This paper proposes a solution to the problem of combining the results of differing statistical methods that may legitimately be used to analyze the same data set. The motivating application is the combination of two estimators of the probability of differential gene expression: one uses an empirical null distribution, and the other uses the theoretical null distribution. Since there is usually not any reliable way to predict which null distribution will perform better for a given data set and since the choice between them often has a large impact on the conclusions, the proposed hedging strategy addresses a pressing need in statistical genomics. Many other applications are also mentioned in the abstract and described in the introduction.
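Two standard combination rules, the linear and logarithmic opinion pools, give a feel for the problem of merging two probability estimates of the same event. The paper's proposed game-theoretic combination is more elaborate, so treat this sketch as background only:

```python
def linear_pool(p1, p2, w=0.5):
    """Linear opinion pool: a weighted average of two probability
    estimates of the same event."""
    return w * p1 + (1 - w) * p2

def logarithmic_pool(p1, p2, w=0.5):
    """Logarithmic pool: weighted geometric mean of the two estimates,
    renormalized so the result is a probability."""
    num = p1 ** w * p2 ** (1 - w)
    den = num + (1 - p1) ** w * (1 - p2) ** (1 - w)
    return num / den

# Hypothetical LFDR-type estimates of the probability that a gene is
# not differentially expressed: one from the theoretical null, one
# from an empirical null.
p_theoretical, p_empirical = 0.10, 0.40
combined_linear = linear_pool(p_theoretical, p_empirical)
combined_log = logarithmic_pool(p_theoretical, p_empirical)
```

Either pool hedges between the two null distributions when there is no reliable way to predict which will perform better for the data set at hand.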
Degree of caution in inference
D. R. Bickel, “Controlling the degree of caution in statistical inference with the Bayesian and frequentist approaches as opposite extremes,” Technical Report, Ottawa Institute of Systems Biology, arXiv:1109.5278 (2011). Full preprint
This paper’s framework of statistical inference is intended to facilitate the development of new methods to bridge the gap between the frequentist and Bayesian approaches. Three concrete examples illustrate how such intermediate methods can leverage strengths of the two extreme approaches.