Archive for the ‘Methods’ Category

Should the default significance level be changed from 0.05 to 0.005?

1 July 2018

My comments in this discussion of “Redefine statistical significance”:

The call for smaller significance levels cannot be based only on mathematical arguments that p values tend to be much lower than posterior probabilities, as Andrew Gelman and Christian Robert pointed out in their comment (“Revised evidence for statistical standards”).

In the rejoinder, Valen Johnson made it clear that the call is also based on empirical findings of non-reproducible research results. How many of those findings are significant at the 0.005 level? Should meta-analysis have a less stringent standard?

“Irreplicable results can’t possibly add empirical clout to the mathematical argument unless it is already known or assumed to be caused by a given cut-off, and further, that lowering it would diminish those problems.”

The preprint cites empirical results to support its use of the 1:10 prior odds. If that is in fact a reliable estimate of the prior odds for the reference class of previous studies, then, in the absence of other relevant information, it would be reasonable to use as input for Bayes’s theorem.

John Byrd asks, “Is 1:10 replicable?” Is it important to ask whether a 1:1 prior odds can be rejected at the 0.005 significance level?
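For concreteness, the odds form of Bayes's theorem turns a prior odds and a Bayes factor into a posterior probability. A minimal sketch with the 1:10 prior odds discussed above; the Bayes factor of 3 is purely illustrative, not a value taken from the preprint:

```python
def posterior_prob(prior_odds, bayes_factor):
    """Posterior probability of H1 via Bayes's theorem in odds form:
    posterior odds = prior odds * Bayes factor."""
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

# 1:10 prior odds combined with an illustrative Bayes factor of 3
print(posterior_prob(0.1, 3.0))  # ≈ 0.231
```

Even a Bayes factor of 3 in favor of the alternative leaves the posterior probability below one quarter when the prior odds are 1:10, which is the arithmetic behind the call for smaller significance levels.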


An idealized Cromwell’s principle

1 June 2018

Cromwell’s principle idealized under the theory of large deviations

Seminar, Statistics and Probability Research Group, University of Ottawa

Ottawa, Ontario

April 27, 2018

David R. Bickel

University of Ottawa

Abstract. Cromwell’s principle requires that the prior probability that one’s assumptions are incorrect be greater than 0. That is relevant to Bayesian model checking since diagnostics often reveal that prior distributions require revision, which would be impossible under Bayes’s theorem if those priors were 100% probable. The idealized Cromwell’s principle makes the probability of making incorrect assumptions arbitrarily small. Enforcing that principle under large deviations theory leads to revising Bayesian models by maximum entropy in wide generality.
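The revision step named in the abstract can be illustrated generically: find the distribution closest to the current prior (in relative entropy) that satisfies a new constraint. The sketch below is a toy exponential-tilting example under an assumed mean constraint, not the large-deviations construction of the talk:

```python
import numpy as np
from scipy.optimize import brentq

def revise_prior(p, x, target_mean):
    """Tilt prior p on support x to the closest distribution (in relative
    entropy) whose mean of x equals target_mean: q_i proportional to
    p_i * exp(lam * x_i), with lam solved for numerically."""
    def mean_gap(lam):
        w = p * np.exp(lam * (x - x.mean()))  # centered for numerical stability
        q = w / w.sum()
        return q @ x - target_mean
    lam = brentq(mean_gap, -50.0, 50.0)
    w = p * np.exp(lam * (x - x.mean()))
    return w / w.sum()

x = np.linspace(0.0, 1.0, 5)
p = np.full(5, 0.2)                 # initial uniform prior
q = revise_prior(p, x, 0.7)         # revised prior with mean constrained to 0.7
```

Because the tilted distribution never assigns zero probability where the original prior was positive, the revision respects Cromwell’s principle: no assumption is ruled out with certainty.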

An R package to transform false discovery rates to posterior probability estimates

1 May 2018

There are many estimators of the false discovery rate. This package computes the nonlocal false discovery rate (NFDR) and three estimators of the local false discovery rate: the corrected false discovery rate (CFDR), the re-ranked false discovery rate (RFDR), and a blended estimator.

Source: CRAN – Package CorrectedFDR
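As a hedged illustration of reading a false discovery rate as a posterior probability (a generic two-groups sketch, not the package’s own API or estimators), a Storey-type tail-area estimate of Pr(null | p ≤ t) can be computed as:

```python
import numpy as np

def tail_area_fdr(pvals, threshold, pi0=1.0):
    """Estimate Pr(null | p <= threshold) under the two-groups model as
    pi0 * threshold / (empirical proportion of p-values <= threshold)."""
    prop_sig = np.mean(pvals <= threshold)
    return min(1.0, pi0 * threshold / max(prop_sig, 1 / len(pvals)))

rng = np.random.default_rng(0)
# Simulated mixture: 90% null p-values (uniform), 10% non-null (near 0)
p = np.concatenate([rng.uniform(size=900), rng.beta(0.1, 5, size=100)])
print(tail_area_fdr(p, 0.05))
```

Setting `pi0 = 1` gives a conservative estimate; a smaller estimated proportion of true nulls lowers the estimated posterior probability proportionally.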

LFDR.MLE-package function | R Documentation

1 March 2018

Suite of R functions for the estimation of the local false discovery rate (LFDR) using Type II maximum likelihood estimation (MLE):

LFDR.MLE-package function | R Documentation

Categories: empirical Bayes, software
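A hedged sketch of what Type II (marginal) maximum likelihood looks like for a two-groups model, using a simple normal mixture on z-scores rather than the package’s own parameterization: fit the mixture by ML, then report pi0·f0(z)/f(z) as the LFDR.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_lfdr(z):
    """Fit f(z) = pi0*N(0,1) + (1-pi0)*N(0,s1^2) by maximum likelihood
    and return the local FDR pi0*f0(z)/f(z) at each z."""
    def nll(theta):
        pi0 = 1 / (1 + np.exp(-theta[0]))   # pi0 kept in (0, 1)
        s1 = 1 + np.exp(theta[1])           # alternative sd kept above 1
        f = pi0 * norm.pdf(z) + (1 - pi0) * norm.pdf(z, scale=s1)
        return -np.log(f).sum()
    res = minimize(nll, x0=[2.0, 1.0], method="Nelder-Mead")
    pi0 = 1 / (1 + np.exp(-res.x[0]))
    s1 = 1 + np.exp(res.x[1])
    f = pi0 * norm.pdf(z) + (1 - pi0) * norm.pdf(z, scale=s1)
    return pi0 * norm.pdf(z) / f

rng = np.random.default_rng(1)
# Simulated z-scores: 90% null N(0,1), 10% non-null N(0,9)
z = np.concatenate([rng.normal(size=900), rng.normal(scale=3, size=100)])
lfdr = fit_lfdr(z)
```

The fitted LFDR decreases as |z| grows, so extreme statistics receive low estimated posterior probabilities of being null.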

Inference to the best explanation of the evidence

1 February 2018

The p value and Bayesian methods have well-known drawbacks when it comes to measuring the strength of the evidence supporting one hypothesis over another. To overcome those drawbacks, this paper proposes an alternative method of quantifying how much support a hypothesis has from evidence consisting of data.


D. R. Bickel, “The strength of statistical evidence for composite hypotheses: Inference to the best explanation,” Statistica Sinica 22, 1147-1198 (2012). Full article | 2010 version

The special law of likelihood has many advantages over more commonly used approaches to measuring the strength of statistical evidence. However, it can only measure the support of a hypothesis that corresponds to a single distribution. The proposed general law of likelihood can also measure the extent to which the data support a hypothesis that corresponds to multiple distributions. That is accomplished by formalizing inference to the best explanation.

Read more…
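The general law of likelihood measures support for a composite hypothesis by the likelihood maximized over that hypothesis. A minimal sketch for H1: μ > 0 versus H0: μ = 0 with unit-variance normal data (an illustrative toy model, not an example from the paper):

```python
import numpy as np
from scipy.stats import norm

def evidence_ratio(x):
    """Strength of evidence for H1: mu > 0 over H0: mu = 0 under the
    general law of likelihood: ratio of likelihoods maximized over
    each hypothesis, for x ~ N(mu, 1)."""
    mle_h1 = max(x.mean(), 0.0)   # MLE of mu restricted to the closure of H1
    loglik = lambda mu: norm.logpdf(x, loc=mu).sum()
    return np.exp(loglik(mle_h1) - loglik(0.0))

x = np.array([0.8, 1.3, 0.2, 1.1])
print(evidence_ratio(x))  # ≈ 4.24
```

When the sample mean is negative, the maximum over H1 is attained at the boundary and the ratio is 1, i.e., the data lend no support to H1 over H0.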

How to make decisions using somewhat reliable posterior distributions

15 January 2018
Categories: model checking, preprints

Uncertainty propagation for empirical Bayes interval estimates: A fiducial approach

1 December 2017

D. R. Bickel, “Confidence distributions applied to propagating uncertainty to inference based on estimating the local false discovery rate: A fiducial continuum from confidence sets to empirical Bayes set estimates as the number of comparisons increases,” Communications in Statistics – Theory and Methods 46, 10788-10799 (2017). Published article | Free access (limited time) | 2014 preprint


Two problems confronting the eclectic approach to statistics result from its lack of a unifying theoretical foundation. First, there is typically no continuity between a p-value, reported as a level of evidence for a hypothesis when the information needed to estimate a relevant prior is absent, and an estimated posterior probability of a hypothesis, reported when such information is present. Second, the empirical Bayes methods recommended do not propagate the uncertainty due to estimating the prior.

The latter problem is addressed by applying a coherent form of fiducial inference to hierarchical models, yielding empirical Bayes set estimates that reflect uncertainty in estimating the prior. Plugging in the maximum likelihood estimator, while not propagating that uncertainty, provides continuity from single comparisons to large numbers of comparisons.
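The plug-in issue can be made concrete in a toy normal-normal model: estimate the prior variance by marginal maximum likelihood, form the usual plug-in posterior interval, and use a parametric bootstrap (a generic device here, not the paper’s fiducial approach) to expose the prior-estimation uncertainty that the plug-in interval ignores.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
theta = rng.normal(scale=1.0, size=n)   # true effects, theta_i ~ N(0, tau^2), tau^2 = 1
x = theta + rng.normal(size=n)          # observations, x_i ~ N(theta_i, 1)

def tau2_mle(x):
    # Marginally x_i ~ N(0, tau^2 + 1), so the MLE of tau^2 is
    # mean(x^2) - 1, floored at 0.
    return max(np.mean(x ** 2) - 1.0, 0.0)

t2 = tau2_mle(x)
b = t2 / (t2 + 1.0)                     # shrinkage factor from the estimated prior
# Plug-in 95% posterior interval for theta_1 (treats t2 as known)
lo, hi = b * x[0] - 1.96 * np.sqrt(b), b * x[0] + 1.96 * np.sqrt(b)

# Parametric bootstrap of the shrinkage factor: its spread is the
# prior-estimation uncertainty that the plug-in interval ignores.
boot_b = []
for _ in range(1000):
    xb = rng.normal(scale=np.sqrt(t2 + 1.0), size=n)
    t2b = tau2_mle(xb)
    boot_b.append(t2b / (t2b + 1.0))
```

With many comparisons the bootstrap spread shrinks and the plug-in interval becomes adequate, which is one way to see the continuum from single comparisons to large numbers of comparisons described above.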