
Author Archive

R functions for combining probabilities using game theory

15 November 2018

Why adjust priors for the simplicity of data distributions?

1 November 2018
Categories: complexity, preprints

R package for estimating local false discovery rates using empirical Bayes methods

15 October 2018

Lower the statistical significance threshold to 0.005—or 0.001?

1 October 2018

“The Fiducialist Papers” archived in favor of “sIBEe”

20 September 2018
Categories: Fragments

Pre-data insights update priors via Bayes’s theorem

1 September 2018

How to adjust statistical inferences for the simplicity of distributions

1 August 2018

Should the default significance level be changed from 0.05 to 0.005?

1 July 2018

My comments in this discussion of “Redefine statistical significance”:

The call for smaller significance levels cannot be based only on mathematical arguments that p values tend to be much lower than posterior probabilities, as Andrew Gelman and Christian Robert pointed out in their comment (“Revised evidence for statistical standards”).

In the rejoinder, Valen Johnson made it clear that the call is also based on empirical findings of non-reproducible research results. How many of those findings are significant at the 0.005 level? Should meta-analysis have a less stringent standard?

“Irreplicable results can’t possibly add empirical clout to the mathematical argument unless it is already known or assumed that the irreplicability is caused by the current cut-off and, further, that lowering the cut-off would diminish those problems.”

The preprint cites empirical results to support its use of the 1:10 prior odds. If that is in fact a reliable estimate of the prior odds for the reference class of previous studies, then, in the absence of other relevant information, it would be reasonable to use it as input to Bayes’s theorem.
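
As a rough illustration of that use of Bayes’s theorem (my own sketch, not taken from the preprint or the discussion), the R code below combines assumed prior odds of 1:10 with the -1/(e p log p) upper bound on the Bayes factor due to Sellke, Bayarri, and Berger to turn a p value into an upper bound on the posterior probability of a real effect:

# Sketch only: the 1:10 prior odds and the -1/(e * p * log(p)) Bayes factor
# bound (valid for p < 1/e) are illustrative assumptions.
posterior_prob_bound <- function(p, prior_odds = 1 / 10) {
  stopifnot(p > 0, p < exp(-1))
  bf_bound <- -1 / (exp(1) * p * log(p))  # upper bound on the Bayes factor
  post_odds <- prior_odds * bf_bound      # Bayes's theorem in odds form
  post_odds / (1 + post_odds)             # odds converted to a probability
}
posterior_prob_bound(0.05)   # roughly 0.20
posterior_prob_bound(0.005)  # roughly 0.58

On those assumptions, a result just significant at 0.05 leaves the posterior probability of a real effect near 0.2, whereas one just significant at 0.005 brings it to better than even odds, which is the contrast the proposal trades on.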

John Byrd asks, “Is 1:10 replicable?” Is it important to ask whether a 1:1 prior odds can be rejected at the 0.005 significance level?


An idealized Cromwell’s rule

1 June 2018

Cromwell’s principle idealized under the theory of large deviations

Seminar, Statistics and Probability Research Group, University of Ottawa

Ottawa, Ontario

April 27, 2018

David R. Bickel

University of Ottawa

Abstract. Cromwell’s principle requires that the prior probability that one’s assumptions are incorrect be greater than 0. That is relevant to Bayesian model checking since diagnostics often reveal that prior distributions require revision, which would be impossible under Bayes’s theorem if those priors were 100% probable. The idealized Cromwell’s principle makes the prior probability of making incorrect assumptions arbitrarily small. Enforcing that principle under large deviations theory leads to revising Bayesian models by maximum entropy in wide generality.
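
In symbols (the notation here is mine, not from the talk), the maximum-entropy revision mentioned at the end of the abstract can be written as the information projection of the working prior $P$ onto the set $\Gamma$ of models consistent with the diagnostics:

P^{*} = \arg\min_{Q \in \Gamma} D(Q \,\|\, P), \qquad D(Q \,\|\, P) = \int \log \frac{\mathrm{d}Q}{\mathrm{d}P} \, \mathrm{d}Q .

The large-deviations connection is the usual conditional-limit reading: given that the constraint defining $\Gamma$ holds, probability concentrates around the minimizer $P^{*}$, so revising by minimum relative entropy is the limiting form of conditioning; the talk’s precise formulation may of course differ.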

An R package to transform false discovery rates to posterior probability estimates

1 May 2018

There are many estimators of the false discovery rate. This package computes the nonlocal false discovery rate (NFDR) and three estimators of the local false discovery rate: the corrected false discovery rate (CFDR), the re-ranked false discovery rate (RFDR), and a blended estimator.

Source: CRAN – Package CorrectedFDR
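
A minimal way to try the package from R (the sketch below only installs it and lists what it exports, since the individual function names are not spelled out in this summary):

install.packages("CorrectedFDR")  # install from CRAN
library(CorrectedFDR)             # load the package
ls("package:CorrectedFDR")        # list the exported estimator functions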