
Archive for the ‘fiducial inference’ Category

Confidence levels as degrees of belief

13 February 2013

D. R. Bickel, “A frequentist framework of inductive reasoning,” Sankhya A 74, 141-169 (2013). published version | 2009 version | relationship to a working paper | simple explanation (added 17 July 2017)

A confidence measure is a parameter distribution that encodes all confidence intervals for a given data set, model, and pivot. This article establishes some properties of the confidence measure that commend it as a viable alternative to the Bayesian posterior distribution.

Confidence (correct frequentist coverage) and coherence (compliance with Ramsey-type restrictions on rational belief) are both presented as desirable properties. The only distributions on a scalar parameter space that have both properties are confidence measures.
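
As a minimal numerical sketch of the idea (the normal model and observed value here are assumed for illustration, not taken from the paper): for a single observation X ~ N(θ, 1), the pivot X − θ yields the confidence measure N(x, 1) on the parameter space, and its central intervals reproduce the usual confidence intervals.

```python
from statistics import NormalDist

x = 2.3                                  # hypothetical observed value of X ~ N(theta, 1)
level = 0.95
cm = NormalDist(mu=x, sigma=1.0)         # confidence measure for theta
lo = cm.inv_cdf((1 - level) / 2)         # central 95% interval from the measure
hi = cm.inv_cdf((1 + level) / 2)
z = NormalDist().inv_cdf((1 + level) / 2)
print((lo, hi), (x - z, x + z))          # the two intervals coincide
```

The same construction encodes the confidence interval at every level simultaneously, which is what makes the measure a single object rather than a family of intervals.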

Local FDR estimation for low-dimensional data

18 October 2012

M. Padilla and D. R. Bickel, “Estimators of the local false discovery rate designed for small numbers of tests,” Statistical Applications in Genetics and Molecular Biology 11 (5), art. 4 (2012). Full article | 2010 & 2012 preprints


This article describes estimators of local false discovery rates, compares their biases for small-scale inference, and illustrates the methods using a quantitative proteomics data set. In addition, theoretical results are presented in the appendices.

Bayes/non-Bayes blended inference

5 October 2012

Updated with a new multiple comparison procedure and applications on 30 June 2012 and with slides for a presentation on 5 October 2012:

D. R. Bickel, “Blending Bayesian and frequentist methods according to the precision of prior information with applications to hypothesis testing,” Working Paper, University of Ottawa, deposited in uO Research at http://hdl.handle.net/10393/23124 (2012). 2012 preprint | 2011 preprint | Slides

This framework of statistical inference facilitates the development of new methodology to bridge the gap between the frequentist and Bayesian theories. As an example, a simple and practical method for combining p-values with a set of possible posterior probabilities is provided.

In this new approach to statistics, Bayesian inference is used when the prior distribution is known, frequentist inference is used when nothing is known about the prior, and both types of inference are blended according to game theory when the prior is known to be a member of some set. (The robust Bayes framework represents knowledge about a prior in terms of a set of possible priors.) If the benchmark posterior that corresponds to frequentist inference lies within the set of Bayesian posteriors derived from the set of priors, then the benchmark posterior is used for inference. Otherwise, the posterior within that set that is closest to the benchmark posterior is used for inference.
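
The selection rule in the last two sentences can be sketched in a few lines (this is a hypothetical illustration for a scalar posterior probability, not the paper's implementation; the function name and interval-hull simplification are my own):

```python
# Given a set of candidate Bayesian posterior probabilities of a hypothesis
# and a benchmark probability from frequentist inference, use the benchmark
# if it lies within the set; otherwise use the closest member of the set.
def blended_probability(benchmark, candidate_set):
    lo, hi = min(candidate_set), max(candidate_set)
    if lo <= benchmark <= hi:
        return benchmark                   # benchmark lies in the Bayes set
    return lo if benchmark < lo else hi    # otherwise, the nearest boundary

print(blended_probability(0.40, [0.25, 0.35]))  # -> 0.35 (closest member)
print(blended_probability(0.30, [0.25, 0.35]))  # -> 0.30 (benchmark kept)
```

When the set of priors is precise (a single prior), the rule reduces to Bayesian inference; when the set is vacuous, it reduces to the frequentist benchmark, matching the two extremes described above.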

How to combine statistical methods

29 August 2012

D. R. Bickel, “Game-theoretic probability combination with applications to resolving conflicts between statistical methods,” International Journal of Approximate Reasoning 53, 880-891 (2012). Full article | 2011 preprint | Slides | Simple explanation


This paper proposes both a novel solution to the problem of combining probability distributions and a framework for using the new method to combine the results of differing statistical methods that may legitimately be used to analyze the same data set. While the paper emphasizes theoretical development, it is motivated by the need to combine two conflicting estimators of the probability of differential gene expression.

Confidence + coherence = fiducial shrinkage

30 June 2012

D. R. Bickel, “A prior-free framework of coherent inference and its derivation of simple shrinkage estimators,” Working Paper, University of Ottawa, deposited in uO Research at http://hdl.handle.net/10393/23093 (2012). 2012 preprint

This paper proposes a new method of shrinking point and interval estimates on the basis of fiducial inference. Since problems with the interpretation of fiducial probability have prevented its widespread use, this manuscript first places fiducial inference within a general framework that has Bayesian and frequentist foundations.

Confidence-based decision theory

1 May 2012

D. R. Bickel, “Coherent frequentism: A decision theory based on confidence sets,” Communications in Statistics – Theory and Methods 41, 1478-1496 (2012). Full article (open access) | 2009 version | Simple explanation (link added 27 June 2018)


To combine the self-consistency of Bayesian statistics with the objectivity of frequentist statistics, this paper formulates a framework of inference for developing novel statistical methods. The framework is based on a confidence posterior, a parameter probability distribution that does not require any prior distribution. While the Bayesian posterior is defined in terms of a conditional distribution given the observed data, the confidence posterior is instead defined such that the probability that the parameter value lies in any fixed subset of parameter space, given the observed data, is equal to the coverage rate of the corresponding confidence interval.

Inferences based on the confidence posterior are reliable in the sense that the certainty level of a composite hypothesis is a weakly consistent estimate of the 0-1 indicator of hypothesis truth. At the same time, the confidence posterior is as non-contradictory as the Bayesian posterior since both satisfy the same coherence axioms. Using the theory of coherent upper and lower probabilities, the confidence posterior is generalized for situations in which no approximate or exact confidence set is available. Examples of hypothesis testing and estimation illustrate the range of applications of the proposed framework.
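
The consistency property can be seen in a small simulation (the normal model, sample sizes, and true parameter value below are assumed for illustration; this is not an example from the paper): with X₁, …, Xₙ iid N(θ, 1), the confidence posterior for θ is N(x̄, 1/n), and the certainty level of the hypothesis θ ≤ 0 approaches the 0-1 indicator of its truth as n grows.

```python
from statistics import NormalDist
import random

random.seed(0)
theta_true = -0.2            # the hypothesis "theta <= 0" is true here
for n in (10, 1000, 100_000):
    xbar = sum(random.gauss(theta_true, 1) for _ in range(n)) / n
    # Certainty level of the composite hypothesis under the confidence posterior:
    certainty = NormalDist(mu=xbar, sigma=(1 / n) ** 0.5).cdf(0.0)
    print(n, certainty)      # tends to 1 as n increases
```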

Additional summaries appear in the abstract and in Section 1.3 of the paper.

How to use priors with caution

13 April 2012

D. R. Bickel, “Controlling the degree of caution in statistical inference with the Bayesian and frequentist approaches as opposite extremes,” Electronic Journal of Statistics 6, 686-709 (2012). Full text (open access) | 2011 preprint


This paper reports a novel probability-interval framework for combining strengths of frequentist and Bayesian methods on the basis of game-theoretic first principles. It enables data analysis based on a posterior distribution that blends a set of plausible Bayesian posterior distributions with a parameter distribution representing an alternative method of data analysis. This framework of statistical inference is intended to facilitate the development of new methods to bridge the gap between the frequentist and Bayesian approaches. Four concrete examples illustrate how such intermediate methods can leverage strengths of the two extreme approaches.

Effect-size estimates from hypothesis probabilities

25 February 2012

D. R. Bickel, “Empirical Bayes interval estimates that are conditionally equal to unadjusted confidence intervals or to default prior credibility intervals,” Statistical Applications in Genetics and Molecular Biology 11 (3), art. 7 (2012). Full article | 2010 preprint

The method contributed in this paper adjusts confidence intervals in multiple-comparison problems according to the estimated local false discovery rate. This shrinkage method performs substantially better than standard confidence intervals when the data are independent across comparisons. A special case of the confidence intervals is the posterior median, which provides an improved method of ranking biological features such as genes, proteins, or genetic variants. The resulting ranks lead to better prioritization of which features to investigate further.
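
As a toy illustration of interval shrinkage driven by the local false discovery rate (this linear rule is my own simplification, not the paper's estimator, which instead interpolates between an unadjusted confidence interval and a default-prior credibility interval): an estimated LFDR of 0 leaves the unadjusted interval unchanged, while an estimate of 1 collapses it to the null value.

```python
# Shrink an unadjusted confidence interval (lo, hi) toward the null value
# by a weight determined by the estimated local false discovery rate.
def shrink_interval(lo, hi, lfdr, null=0.0):
    w = 1.0 - lfdr                         # weight on the unadjusted interval
    return (null + w * (lo - null), null + w * (hi - null))

print(shrink_interval(1.0, 3.0, lfdr=0.0))   # -> (1.0, 3.0), no shrinkage
print(shrink_interval(1.0, 3.0, lfdr=0.5))   # -> (0.5, 1.5), pulled toward 0
```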

Combining inferences from different methods

28 November 2011

D. R. Bickel, “Resolving conflicts between statistical methods by probability combination: Application to empirical Bayes analyses of genomic data,” Technical Report, Ottawa Institute of Systems Biology, arXiv:1111.6174 (2011). Full preprint

This paper proposes a solution to the problem of combining the results of differing statistical methods that may legitimately be used to analyze the same data set. The motivating application is the combination of two estimators of the probability of differential gene expression: one uses an empirical null distribution, and the other uses the theoretical null distribution. Since there is usually not any reliable way to predict which null distribution will perform better for a given data set and since the choice between them often has a large impact on the conclusions, the proposed hedging strategy addresses a pressing need in statistical genomics. Many other applications are also mentioned in the abstract and described in the introduction.
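
One simple way to hedge between two such probability estimates is a minimax rule over the logarithmic score (this grid-search sketch is an illustration in the spirit of hedging, not the paper's game-theoretic combination method): choose the combined probability whose worst-case regret against either candidate estimate, over both possible outcomes, is smallest.

```python
import math

def log_score(p, outcome):
    """Logarithmic score of probability p for a binary outcome."""
    return math.log(p if outcome else 1.0 - p)

def minimax_combination(p1, p2, grid=1000):
    """Grid search for the probability minimizing worst-case log-score regret."""
    best_p, best_regret = None, float("inf")
    for i in range(1, grid):
        q = i / grid
        regret = max(
            log_score(p, o) - log_score(q, o)
            for p in (p1, p2) for o in (True, False)
        )
        if regret < best_regret:
            best_p, best_regret = q, regret
    return best_p

print(minimax_combination(0.2, 0.8))   # -> 0.5 by symmetry
```

In the motivating application, p1 and p2 would play the roles of the probabilities of differential expression computed under the empirical and theoretical null distributions.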

Degree of caution in inference

26 September 2011

D. R. Bickel, “Controlling the degree of caution in statistical inference with the Bayesian and frequentist approaches as opposite extremes,” Technical Report, Ottawa Institute of Systems Biology, arXiv:1109.5278 (2011). Full preprint

This paper’s framework of statistical inference is intended to facilitate the development of new methods to bridge the gap between the frequentist and Bayesian approaches. Three concrete examples illustrate how such intermediate methods can leverage strengths of the two extreme approaches.