The call for smaller significance levels cannot be based only on mathematical arguments that p-values tend to be much lower than posterior probabilities, as Andrew Gelman and Christian Robert pointed out in their comment (“Revised evidence for statistical standards”).

In the rejoinder, Valen Johnson made it clear that the call is also based on empirical findings of non-reproducible research results. How many of those findings are significant at the 0.005 level? Should meta-analysis have a less stringent standard?

…

“Irreplicable results can’t possibly add empirical clout to the mathematical argument unless it is already known or assumed to be caused by a given cut-off, and further, that lowering it would diminish those problems.”

The preprint cites empirical results to support its use of the 1:10 prior odds. If that is in fact a reliable estimate of the prior odds for the reference class of previous studies, then, in the absence of other relevant information, it would be reasonable to use it as input for Bayes’s theorem.
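For concreteness, a minimal sketch of that use of Bayes’s theorem in odds form, with a hypothetical Bayes factor of 20 standing in for strong evidence from a single study:

```r
# Odds form of Bayes's theorem: posterior odds = prior odds * Bayes factor.
prior_odds <- 1 / 10   # the 1:10 prior odds cited in the preprint
bayes_factor <- 20     # hypothetical: strong evidence from a single study
posterior_odds <- prior_odds * bayes_factor
posterior_prob <- posterior_odds / (1 + posterior_odds)
posterior_prob  # about 0.67
```

At 1:10 prior odds, even a Bayes factor of 20 yields a posterior probability of only about 2/3, which is the sort of arithmetic behind the call for smaller significance levels.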

John Byrd asks, “Is 1:10 replicable?” Is it important to ask whether a 1:1 prior odds can be rejected at the 0.005 significance level?


Seminar, Statistics and Probability Research Group, University of Ottawa

Ottawa, Ontario

April 27, 2018

David R. Bickel

University of Ottawa

**Abstract.** Cromwell’s principle requires that the prior probability that one’s assumptions are incorrect be greater than 0. That principle is relevant to Bayesian model checking since diagnostics often reveal that prior distributions require revision, which would be impossible under Bayes’s theorem if those priors were 100% probable. The idealized Cromwell’s principle makes the probability of incorrect assumptions arbitrarily small. Enforcing that principle under large deviations theory leads to revising Bayesian models by maximum entropy in wide generality.
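A toy sketch of that kind of revision in the discrete case: among all distributions satisfying a new constraint, take the one closest to the prior in relative entropy, which amounts to an exponential tilt. The moment constraint and the die example below are illustrative, not from the talk:

```r
# Minimum-relative-entropy (maximum-entropy) revision of a discrete prior:
# among all q satisfying the constraint sum(q * x) = c, take the q closest
# to the prior p in Kullback-Leibler divergence. The solution is the
# exponential tilt q[i] proportional to p[i] * exp(lambda * x[i]), with
# lambda chosen to satisfy the moment constraint.
maxent_revise <- function(p, x, c) {
  tilt <- function(lambda) {
    q <- p * exp(lambda * x)
    q / sum(q)
  }
  moment_gap <- function(lambda) sum(tilt(lambda) * x) - c
  lambda <- uniroot(moment_gap, interval = c(-50, 50))$root
  tilt(lambda)
}

p <- rep(1 / 6, 6)                       # uniform prior over the die faces
q <- maxent_revise(p, x = 1:6, c = 4.5)  # revise so the mean face is 4.5
round(q, 3)
```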

There are many estimators of the false discovery rate. In this package we compute the nonlocal false discovery rate (NFDR) and three estimators of the local false discovery rate: the corrected false discovery rate (CFDR), the re-ranked false discovery rate (RFDR), and a blended estimator.

Source: CRAN – Package CorrectedFDR
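As a rough illustration of what such estimators target, here is a generic textbook plug-in estimator of the false discovery rate at a p-value threshold; it is a sketch of the idea only, not the CorrectedFDR package’s interface:

```r
# Generic plug-in estimator of the false discovery rate at threshold t:
# under the null, p-values are uniform, so about m * t of m null p-values
# fall below t; dividing by the observed count of p-values below t gives
# a (conservative) estimate of the fraction of discoveries that are false.
fdr_hat <- function(p, t) length(p) * t / max(sum(p <= t), 1)

set.seed(1)
p <- c(runif(900), rbeta(100, 1, 20))  # 900 simulated nulls, 100 non-nulls
fdr_hat(p, t = 0.05)
```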

D. R. Bickel, “The strength of statistical evidence for composite hypotheses: Inference to the best explanation,” *Statistica Sinica* **22**, 1147-1198 (2012).

The special law of likelihood has many advantages over more commonly used approaches to measuring the strength of statistical evidence. However, it can only measure the support of a hypothesis that corresponds to a single distribution. The proposed general law of likelihood can also measure the extent to which the data support a hypothesis that corresponds to multiple distributions. That is accomplished by formalizing inference to the best explanation.
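As a rough sketch of the idea, the support for a composite hypothesis can be taken to be the likelihood of its best-fitting member, so the strength of evidence for one composite hypothesis over another is a ratio of maximized likelihoods. The normal model, hypotheses, and data below are illustrative assumptions, not taken from the paper:

```r
# Ratio of maximized likelihoods as a measure of support for a composite
# hypothesis: each hypothesis is represented by its best-fitting member.
# Illustrative setup: x ~ N(mu, 1), H1: mu > 0 versus H0: mu <= 0.
general_lr <- function(x) {
  loglik <- function(mu) sum(dnorm(x, mean = mu, sd = 1, log = TRUE))
  mle <- mean(x)               # unrestricted MLE of mu
  sup1 <- loglik(max(mle, 0))  # best explanation within (the closure of) H1
  sup0 <- loglik(min(mle, 0))  # best explanation within H0
  exp(sup1 - sup0)
}

x <- c(0.8, 1.4, 0.3, 1.1)
general_lr(x)  # about 5: H1's best member explains the data 5 times better
```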

The general law of likelihood, as a method of inference, differs from measures of evidence that quantify changes in probability. For example, the Bayes factor is the posterior odds divided by the prior odds.

Two problems confronting the eclectic approach to statistics result from its lack of a unifying theoretical foundation. First, there is typically no continuity between a p-value, reported as a level of evidence for a hypothesis when the information needed to estimate a relevant prior is absent, and an estimated posterior probability of a hypothesis, reported when such information is present. Second, the recommended empirical Bayes methods do not propagate the uncertainty due to estimating the prior.

The latter problem is addressed by applying a coherent form of fiducial inference to hierarchical models, yielding empirical Bayes set estimates that reflect the uncertainty in estimating the prior. Plugging in the maximum likelihood estimator, while not propagating that uncertainty, addresses the former problem by providing continuity from single comparisons to large numbers of comparisons.
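A minimal sketch of the plug-in step under an assumed two-groups normal model (the model, data, and starting values are illustrative, and the fiducial propagation of prior uncertainty is not shown):

```r
# Two-groups model z ~ pi0 * N(0, 1) + (1 - pi0) * N(mu, 1), fit by maximum
# likelihood; the estimates are then plugged into Bayes's theorem to give
# each comparison's posterior probability of being null (the local FDR).
neg_loglik <- function(par, z) {
  pi0 <- plogis(par[1])  # logistic transform keeps pi0 in (0, 1)
  mu  <- par[2]
  -sum(log(pi0 * dnorm(z) + (1 - pi0) * dnorm(z, mean = mu)))
}

set.seed(2)
z <- c(rnorm(80), rnorm(20, mean = 3))    # simulated test statistics
fit <- optim(c(0, 1), neg_loglik, z = z)  # maximum likelihood fit
pi0_hat <- plogis(fit$par[1])
mu_hat  <- fit$par[2]

# Plug-in posterior probability that each comparison is null:
lfdr <- pi0_hat * dnorm(z) /
  (pi0_hat * dnorm(z) + (1 - pi0_hat) * dnorm(z, mean = mu_hat))
head(round(lfdr, 2))
```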

10th Workshop on Information Theoretic Methods in Science and Engineering

Paris, France

September 11, 2017

David R. Bickel

University of Ottawa

the goal [of statistical inference in science] is not to infer highly probable claims (in the formal sense)* but claims which have been highly probed and have passed severe probes

Source: Deborah G. Mayo’s Performance or Probativeness? E.S. Pearson’s Statistical Philosophy | Error Statistics Philosophy
