## Confidence-based decision theory

D. R. Bickel, “Coherent frequentism: A decision theory based on confidence sets,” *Communications in Statistics – Theory and Methods* **41**, 1478-1496 (2012). Open access.

To combine the self-consistency of Bayesian statistics with the objectivity of frequentist statistics, this paper formulates a framework of inference for developing novel statistical methods. The framework is based on a confidence posterior, a parameter probability distribution that does not require any prior distribution. While the Bayesian posterior is defined in terms of a conditional distribution given the observed data, the confidence posterior is instead defined such that the probability that the parameter value lies in any fixed subset of parameter space, given the observed data, is equal to the coverage rate of the corresponding confidence interval. Inferences based on the confidence posterior are reliable in the sense that the certainty level of a composite hypothesis is a weakly consistent estimate of the 0-1 indicator of hypothesis truth. At the same time, the confidence posterior is as non-contradictory as the Bayesian posterior since both satisfy the same coherence axioms. Using the theory of coherent upper and lower probabilities, the confidence posterior is generalized for situations in which no approximate or exact confidence set is available. Examples of hypothesis testing and estimation illustrate the range of applications of the proposed framework.

Additional summaries appear in the abstract and in Section 1.3 of the paper.
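As a minimal sketch of the confidence-posterior idea (not taken from the paper), consider the textbook normal-mean example: for a single observation `x` from N(θ, σ²) with σ known, the confidence distribution of θ is N(x, σ²), and the confidence-posterior probability of any interval equals the coverage rate of that interval viewed as a confidence procedure. The function name below is illustrative, not from the paper.

```python
from statistics import NormalDist

def confidence_posterior_prob(x, sigma, lo, hi):
    """Confidence-posterior probability that theta lies in [lo, hi], given
    one observation x from N(theta, sigma**2) with sigma known.  By
    construction this equals the frequentist coverage rate of [lo, hi]
    regarded as a confidence interval -- no prior distribution is used."""
    cd = NormalDist(mu=x, sigma=sigma)  # confidence distribution of theta
    return cd.cdf(hi) - cd.cdf(lo)

# The central 95% interval x +/- 1.96*sigma receives confidence-posterior
# probability of about 0.95, matching its coverage:
p = confidence_posterior_prob(0.0, 1.0, -1.96, 1.96)
```

In this simple model the confidence posterior coincides with the Bayesian posterior under a flat prior, but its justification is purely frequentist: probability statements about θ inherit the coverage of the matching confidence sets.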

## How to use priors with caution

D. R. Bickel, “Controlling the degree of caution in statistical inference with the Bayesian and frequentist approaches as opposite extremes,” *Electronic Journal of Statistics* **6**, 686-709 (2012). Open access.

This paper reports a novel probability-interval framework for combining the strengths of frequentist and Bayesian methods on the basis of game-theoretic first principles. It enables data analysis based on a posterior distribution that blends a set of plausible Bayesian posterior distributions with a parameter distribution representing an alternative method of data analysis. The framework is intended to facilitate the development of new methods that bridge the gap between the frequentist and Bayesian approaches. Four concrete examples illustrate how such intermediate methods can leverage the strengths of the two extreme approaches.
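A toy illustration of a tunable degree of caution (the paper's actual blending rule is game-theoretic and operates on sets of posteriors; the linear interpolation and function name below are only assumptions made for illustration):

```python
def blended_posterior(bayes_probs, alt_probs, caution):
    """Illustrative linear blend of two discrete parameter distributions.
    caution=0 returns the Bayesian posterior, caution=1 returns the
    alternative distribution, and intermediate values interpolate,
    mimicking a dial between the two extreme approaches."""
    if not 0.0 <= caution <= 1.0:
        raise ValueError("caution must lie in [0, 1]")
    return [(1.0 - caution) * b + caution * a
            for b, a in zip(bayes_probs, alt_probs)]

# Probabilities of two hypotheses under each extreme, then a half-cautious blend:
bayes = [0.7, 0.3]   # a plausible Bayesian posterior
alt = [0.4, 0.6]     # an alternative, frequentist-style distribution
mid = blended_posterior(bayes, alt, 0.5)
```

Because the blend is a convex combination, the result remains a valid probability distribution for every caution level.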

## Combining inferences from different methods

D. R. Bickel, “Resolving conflicts between statistical methods by probability combination: Application to empirical Bayes analyses of genomic data,” Technical Report, Ottawa Institute of Systems Biology, arXiv:1111.6174 (2011).

This paper proposes a solution to the problem of combining the results of differing statistical methods that may legitimately be used to analyze the same data set. The motivating application is the combination of two estimators of the probability of differential gene expression: one uses an empirical null distribution, and the other uses the theoretical null distribution. Since there is usually no reliable way to predict which null distribution will perform better for a given data set, and since the choice between them often has a large impact on the conclusions, the proposed hedging strategy addresses a pressing need in statistical genomics. Many other applications are mentioned in the abstract and described in the introduction.
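A minimal sketch of the hedging idea (the paper develops a principled probability-combination rule; the plain weighted average, function name, and gene labels below are illustrative assumptions, not the paper's method):

```python
def combined_null_probability(p_theoretical, p_empirical, weight=0.5):
    """Hedge between two estimates of the posterior probability that a gene
    is *not* differentially expressed: one computed under the theoretical
    null distribution and one under an empirical null.  A weighted average
    avoids committing fully to either estimator when neither can be
    predicted to perform better for the data set at hand."""
    return weight * p_theoretical + (1.0 - weight) * p_empirical

# Hypothetical per-gene estimates: (theoretical-null, empirical-null)
genes = {"geneA": (0.05, 0.20), "geneB": (0.60, 0.90)}
hedged = {g: combined_null_probability(t, e) for g, (t, e) in genes.items()}
```

When the two null distributions disagree sharply, as for geneB above, the combined estimate stays between the extremes rather than inheriting the full error of whichever choice happened to be wrong.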
