Archive

Archive for the ‘simply explained’ Category

Empirical Bayes single-comparison procedure

1 July 2016 Comments off

D. R. Bickel, “Small-scale inference: Empirical Bayes and confidence methods for as few as a single comparison,” International Statistical Review 82, 457-476 (2014). Published version | 2011 preprint | Simple explanation (link added 21 June 2017)

Parametric empirical Bayes methods of estimating the local false discovery rate by maximum likelihood apply not only to the large-scale settings for which they were developed, but, with a simple modification, also to small numbers of comparisons. In fact, data for a single comparison are sufficient under broad conditions, as seen from applications to measurements of the abundance levels of 20 proteins and from simulation studies with confidence-based inference as the competitor.
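To make the idea concrete, here is a minimal sketch of the standard two-groups model behind maximum-likelihood estimation of the local false discovery rate. The N(0,1) null, the single normal alternative, the parameter names, and the simulated z-statistics are all assumptions for illustration; the paper's small-scale modification is not shown.

```python
# Illustrative two-groups model (assumed for this sketch, not the
# paper's small-scale modification): z_i ~ pi0*N(0,1) + (1-pi0)*N(delta,1).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_lfdr(z):
    """Fit the mixture by maximum likelihood; return the estimated
    local false discovery rate at each observed z."""
    def nll(params):
        pi0 = 1.0 / (1.0 + np.exp(-params[0]))   # logit keeps pi0 in (0,1)
        delta = params[1]
        f = pi0 * norm.pdf(z) + (1.0 - pi0) * norm.pdf(z, loc=delta)
        return -np.sum(np.log(f))
    res = minimize(nll, x0=[1.0, 2.0])           # starting values are arbitrary
    pi0 = 1.0 / (1.0 + np.exp(-res.x[0]))
    f0 = norm.pdf(z)
    f = pi0 * f0 + (1.0 - pi0) * norm.pdf(z, loc=res.x[1])
    return pi0 * f0 / f                          # estimated P(null | z)

# Simulated stand-in for 20 protein-level z-statistics
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0.0, 1.0, 15), rng.normal(3.0, 1.0, 5)])
print(np.round(fit_lfdr(z), 2))
```

Each returned value estimates the posterior probability that the corresponding comparison is null; the paper's contribution is making such estimates viable with as few as one comparison.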

Inference after checking the prior & sampling model

1 September 2015 Comments off

D. R. Bickel, “Inference after checking multiple Bayesian models for data conflict and applications to mitigating the influence of rejected priors,” International Journal of Approximate Reasoning 66, 53–72 (2015). Simple explanation | Published version | 2014 preprint | Slides

The proposed procedure combines Bayesian model checking with robust Bayes acts to guide inference whether or not any of the models is found to be inadequate:

  1. The first stage of the procedure checks each model within a large class of models to determine which models are in conflict with the data and which are adequate for purposes of data analysis.
  2. The second stage of the procedure applies distribution combination or decision rules developed for imprecise probability to the models found adequate.

The proposed procedure is illustrated by applying a class of hierarchical models to a simple data set; a toy numerical sketch of the two stages follows.
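In this sketch, every specific is assumed for illustration rather than taken from the paper: the candidate priors, the prior-predictive check on the sample mean, and the 0.05 adequacy threshold. Stage 1 screens each model; stage 2 summarizes the surviving posteriors by bounds, in the spirit of imprecise probability.

```python
# Hypothetical sketch of the two-stage structure: (1) screen each
# Bayesian model with a prior-predictive check, (2) report bounds over
# the posteriors of the models that survive. All settings are assumptions.
import numpy as np
from scipy.stats import norm

x = np.array([0.8, 1.1, 0.9, 1.3])             # toy data, x_i ~ N(theta, sigma^2)
sigma = 1.0                                     # assumed known sampling sd
priors = [(0.0, 1.0), (1.0, 0.5), (5.0, 0.5)]   # candidate (mean, sd) priors on theta

n, xbar = len(x), x.mean()
surviving = []
for m0, s0 in priors:
    # Prior predictive of the sample mean: N(m0, s0^2 + sigma^2/n)
    sd_pred = np.sqrt(s0**2 + sigma**2 / n)
    p = 2 * norm.sf(abs(xbar - m0) / sd_pred)   # two-sided check
    if p > 0.05:                                # assumed adequacy threshold
        # Conjugate normal posterior for theta under this prior
        post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)
        post_mean = post_var * (m0 / s0**2 + n * xbar / sigma**2)
        surviving.append((post_mean, np.sqrt(post_var)))

# Stage 2: report the range of P(theta > 0) over the surviving models
probs = [norm.sf(0.0, loc=m, scale=s) for m, s in surviving]
print("lower =", min(probs), "upper =", max(probs))
```

Reporting the range of posterior probabilities over the surviving models, rather than a single number, is the imprecise-probability flavor of the second stage.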

The link Simple explanation was added on 6 June 2017.

Optimal strength of evidence

13 February 2013 Comments off

D. R. Bickel, “Minimax-optimal strength of statistical evidence for a composite alternative hypothesis,” International Statistical Review 81, 188-206 (2013). 2011 version | Simple explanation (added 2 July 2017)

This publication generalizes the likelihood measure of evidential support for a hypothesis with the help of tools originally developed by information theorists for minimizing the number of letters in a message. The approach is illustrated with an application to proteomics data.
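One standard tool of that kind is the normalized maximum likelihood, which charges a composite hypothesis a penalty equal to the log of its normalizing (Shtarkov) sum. Whether or not it matches the paper's exact construction, the following toy sketch for a binomial model with a point null shows the flavor; the setup is illustrative, not the paper's proteomics application.

```python
# Toy normalized-maximum-likelihood (NML) evidence for a composite
# binomial alternative vs the point null p = p0. Illustrative only.
from math import comb, log10

def nml_log10_evidence(k, n, p0=0.5):
    """log10 of the NML density of k successes in n trials divided by
    the likelihood of k under the null p = p0."""
    def ml(j):
        q = j / n                           # MLE of p given j successes
        return q**j * (1.0 - q)**(n - j)    # note 0**0 == 1 in Python
    shtarkov = sum(comb(n, j) * ml(j) for j in range(n + 1))  # normalizer
    null = p0**k * (1.0 - p0)**(n - k)
    return log10(ml(k) / shtarkov) - log10(null)

print(nml_log10_evidence(9, 10))   # ~0.9: roughly 9-fold evidence vs p = 1/2
```

Positive values favor the composite alternative; maximizing the likelihood over the alternative without the normalizer would overstate the evidence.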

Confidence levels as degrees of belief

13 February 2013 Comments off

D. R. Bickel, “A frequentist framework of inductive reasoning,” Sankhya A 74, 141-169 (2013). Published version | 2009 version | Relationship to a working paper | Simple explanation (added 17 July 2017)

A confidence measure is a parameter distribution that encodes all confidence intervals for a given data set, model, and pivot. This article establishes some properties of the confidence measure that commend it as a viable alternative to the Bayesian posterior distribution.

Confidence (correct frequentist coverage) and coherence (compliance with Ramsey-type restrictions on rational belief) are both presented as desirable properties. The only distributions on a scalar parameter space that have both properties are confidence measures.
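For a normal mean with known variance, the confidence measure is explicit: it is the N(x̄, σ²/n) distribution over the parameter, and its quantiles reproduce every standard confidence interval while also supporting posterior-like probability statements. A minimal sketch (the data and the 95% level are illustrative):

```python
# Confidence measure for a normal mean with known variance:
# the N(xbar, sigma^2/n) distribution over theta.
import numpy as np
from scipy.stats import norm

x = np.array([4.8, 5.3, 5.1, 4.9, 5.4])    # illustrative data
sigma = 1.0                                # assumed known sampling sd
n, xbar = len(x), x.mean()
confidence = norm(loc=xbar, scale=sigma / np.sqrt(n))  # confidence measure

# Its quantiles are exactly the usual confidence-interval endpoints:
print(confidence.ppf([0.025, 0.975]))                        # 95% CI
print(xbar + norm.ppf([0.025, 0.975]) * sigma / np.sqrt(n))  # textbook form

# And it supports posterior-like statements with frequentist calibration:
print("P(theta > 5) =", confidence.sf(5.0))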

How to combine statistical methods

29 August 2012 1 comment

D. R. Bickel, “Game-theoretic probability combination with applications to resolving conflicts between statistical methods,” International Journal of Approximate Reasoning 53, 880-891 (2012). Full article | 2011 preprint | Slides | Simple explanation

This paper proposes both a novel solution to the problem of combining probability distributions and a framework for using the new method to combine the results of differing statistical methods that may legitimately be used to analyze the same data set. While the paper emphasizes theoretical development, it is motivated by the need to combine two conflicting estimators of the probability of differential gene expression.
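As a generic illustration of the combination problem, not of the paper's game-theoretic rule, suppose two methods report conflicting probabilities of differential expression for the same genes; linear and logarithmic pooling are two classical ways to reconcile them (the probabilities below are made up):

```python
# Two classical pooling rules for conflicting probability estimates.
# The numbers are illustrative; this is not the paper's method.
import numpy as np

p1 = np.array([0.90, 0.40, 0.05])   # method 1's probabilities per gene
p2 = np.array([0.60, 0.70, 0.20])   # method 2's probabilities per gene

linear = 0.5 * p1 + 0.5 * p2        # linear opinion pool

# Logarithmic pool: normalized geometric mean of the two Bernoulli laws
num = np.sqrt(p1 * p2)
log_pool = num / (num + np.sqrt((1 - p1) * (1 - p2)))

print("linear:", np.round(linear, 3))
print("log   :", np.round(log_pool, 3))
```

The paper's game-theoretic method addresses how to perform such a combination in a principled way when both methods may legitimately be applied to the same data.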