## Pre-data insights update priors via Bayes’s theorem

D. R. Bickel, “Bayesian revision of a prior given prior-data conflict, expert opinion, or a similar insight: A large-deviation approach,” *Statistics* **52**, 552-570 (2018). Full text | 2015 preprint | Simple explanation
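The mechanics can be illustrated with a toy calculation (my sketch, not the paper's large-deviation machinery): treat the pre-data insight as an event E, elicit its probability under each candidate prior, and let Bayes's theorem reweight the candidates.

```r
# Toy illustration (not the paper's large-deviation method): revise the
# weights on two candidate priors given an insight, here an event E such
# as recognized prior-data conflict or a new expert opinion.
w   <- c(optimistic = 0.5, skeptical = 0.5)  # initial weights on candidate priors
p_E <- c(optimistic = 0.2, skeptical = 0.8)  # elicited P(E | candidate prior)
w_post <- w * p_E / sum(w * p_E)             # Bayes's theorem
w_post                                       # the skeptical prior now carries weight 0.8
```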

## How to adjust statistical inferences for the simplicity of distributions

D. R. Bickel, “Confidence intervals, significance values, maximum likelihood estimates, etc. sharpened into Occam’s razors,” Working Paper, University of Ottawa, <hal-01799519> https://hal.archives-ouvertes.fr/hal-01799519 (2018). 2018 preprint | Slides

## Should the default significance level be changed from 0.05 to 0.005?

My comments in this discussion of “Redefine statistical significance”:

The call for smaller significance levels cannot rest only on the mathematical argument that p values tend to be much lower than the corresponding posterior probabilities of the null hypothesis, as Andrew Gelman and Christian Robert pointed out in their comment (“Revised evidence for statistical standards”).

In the rejoinder, Valen Johnson made it clear that the call is also based on empirical findings of non-reproducible research results. How many of those findings are significant at the 0.005 level? Should meta-analysis have a less stringent standard?

…

“Irreplicable results cannot possibly add empirical clout to the mathematical argument unless the irreplicability is already known or assumed to be caused by a given cutoff and, further, unless lowering that cutoff would diminish those problems.”

The preprint cites empirical results to support its use of the 1:10 prior odds. If that is in fact a reliable estimate of the prior odds for the reference class of previous studies, then, in the absence of other relevant information, it would be reasonable to use it as input for Bayes’s theorem.

John Byrd asks, “Is 1:10 replicable?” Is it important to ask whether a 1:1 prior odds can be rejected at the 0.005 significance level?

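For concreteness, here is the arithmetic of feeding the 1:10 prior odds into Bayes's theorem, assuming a Bayes factor of 25, a round value of the order the proposal associates with p ≈ 0.005:

```r
# Posterior odds = Bayes factor x prior odds (values assumed for illustration)
prior_odds <- 1 / 10   # the 1:10 prior odds cited in the preprint
bf         <- 25       # assumed Bayes factor, of the order tied to p ~ 0.005
post_odds  <- bf * prior_odds
post_odds / (1 + post_odds)  # posterior probability of the alternative, ~0.71
```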

## An R package to transform false discovery rates to posterior probability estimates

There are many estimators of the false discovery rate. This package computes the nonlocal false discovery rate (NFDR) and three estimators of the local false discovery rate: the corrected false discovery rate (CFDR), the re-ranked false discovery rate (RFDR), and a blended estimator.

Source: CRAN – Package CorrectedFDR
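The package's own interface is not reproduced here; the base-R sketch below only illustrates what a nonlocal (tail-area) false discovery rate estimate is, using the familiar Benjamini-Hochberg form:

```r
# Generic tail-area ("nonlocal") FDR estimate from p-values, in the
# Benjamini-Hochberg spirit; see the CorrectedFDR manual for the package's
# actual functions, which are not reproduced here.
estimate_nfdr <- function(p) {
  m <- length(p)
  ord <- order(p)
  nfdr <- numeric(m)
  nfdr[ord] <- pmin(1, m * p[ord] / seq_len(m))   # m * p_(i) / i, capped at 1
  nfdr[ord] <- rev(cummin(rev(nfdr[ord])))        # enforce monotonicity in p
  nfdr
}

set.seed(1)
p <- c(runif(90), rbeta(10, 1, 20))  # 90 null and 10 non-null p-values
head(sort(estimate_nfdr(p)))
```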

## LFDR.MLE-package function | R Documentation

Suite of R functions for the estimation of the local false discovery rate (LFDR) using Type II maximum likelihood estimation (MLE).
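Consult the package manual for its actual API; the sketch below only conveys the idea of Type II (marginal) maximum likelihood in a two-group model, where the mixture form is my assumption for illustration:

```r
# Type II MLE idea: z ~ p0 * N(0,1) + (1 - p0) * N(0, sigma^2).
# The mixture form is assumed for illustration, not taken from LFDR.MLE.
neg_loglik <- function(par, z) {
  p0 <- plogis(par[1]); sigma <- exp(par[2])       # keep parameters in range
  -sum(log(p0 * dnorm(z) + (1 - p0) * dnorm(z, sd = sigma)))
}

lfdr_mle <- function(z) {
  fit <- optim(c(qlogis(0.9), log(2)), neg_loglik, z = z)
  p0 <- plogis(fit$par[1]); sigma <- exp(fit$par[2])
  # local FDR: posterior probability of the null given z, at the fitted values
  p0 * dnorm(z) / (p0 * dnorm(z) + (1 - p0) * dnorm(z, sd = sigma))
}

set.seed(1)
z <- c(rnorm(900), rnorm(100, sd = 3))  # 10% non-null
summary(lfdr_mle(z))
```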

## Inference to the best explanation of the evidence

The *p* value and Bayesian methods have well-known drawbacks when it comes to measuring the strength of the evidence supporting one hypothesis over another. To overcome those drawbacks, this paper proposes an alternative method of quantifying how much support a hypothesis has from evidence consisting of data.

D. R. Bickel, “The strength of statistical evidence for composite hypotheses: Inference to the best explanation,” *Statistica Sinica* **22**, 1147-1198 (2012). Full article | 2010 version

The special law of likelihood has many advantages over more commonly used approaches to measuring the strength of statistical evidence. However, it can only measure the support of a hypothesis that corresponds to a single distribution. The proposed general law of likelihood can also measure the extent to which the data support a hypothesis that corresponds to multiple distributions. That is accomplished by formalizing inference to the best explanation.
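As a minimal sketch, reading the general law as the ratio of likelihoods maximized over each composite hypothesis (each hypothesis's best explanation):

```r
# Sketch of that reading of the general law of likelihood: support for H1
# over H2 is the ratio of maximized likelihoods. Normal-mean example with
# H1: theta in (0, 10], H2: theta in [-10, 0] (finite bounds for optimize()).
support <- function(x, H1 = c(0, 10), H2 = c(-10, 0)) {
  loglik <- function(theta) sum(dnorm(x, mean = theta, log = TRUE))
  best <- function(H) optimize(loglik, interval = H, maximum = TRUE)$objective
  exp(best(H1) - best(H2))
}

set.seed(1)
x <- rnorm(20, mean = 0.5)
support(x)  # > 1 means the data support H1 over H2
```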

## How to make decisions using somewhat reliable posterior distributions

D. R. Bickel, “Departing from Bayesian inference toward minimaxity to the extent that the posterior distribution is unreliable,” Working Paper, University of Ottawa, <hal-01673783> https://hal.archives-ouvertes.fr/hal-01673783 (2017). 2017 preprint
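An illustrative decision rule in that spirit (my simplification, not the paper's exact formulation): minimize a convex combination of posterior expected loss and worst-case loss, with the weight grading how unreliable the posterior is judged to be.

```r
# Illustration only: blend Bayes and minimax according to a reliability
# weight w in [0, 1]; w = 0 is fully Bayesian, w = 1 is fully minimax.
blended_action <- function(loss, posterior, w) {
  bayes_risk   <- as.vector(loss %*% posterior)  # posterior expected loss per action
  minimax_risk <- apply(loss, 1, max)            # worst-case loss per action
  which.min((1 - w) * bayes_risk + w * minimax_risk)
}

loss <- rbind(a1 = c(0, 10), a2 = c(4, 4))  # two actions, two states
post <- c(0.8, 0.2)                         # a posterior of doubtful reliability
blended_action(loss, post, w = 0)  # trusts the posterior: picks a1
blended_action(loss, post, w = 1)  # ignores it: picks the safe a2
```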

## Do models have probabilities or just possibilities?

Andrew says: David: I don’t think it makes sense to talk of the probability of a model. See this paper with Shalizi for much discussion of this point.

David Bickel says: If models do not have probabilities, perhaps they have possibilities in the sense of possibility theory. For example, the possibility of a model might be a function of its adequacy according to a model checking procedure: Appendix B of https://goo.gl/5s7bS3
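One way to make that suggestion concrete (my illustration, not the cited appendix): take each model's adequacy to be a model-checking p-value and rescale so that the best-checking model has possibility 1, as possibility theory requires of at least one alternative.

```r
# Possibility of each model from model-checking p-values (illustration):
# rescaling by the maximum makes the most adequate model fully possible.
possibility <- function(adequacy) adequacy / max(adequacy)

possibility(c(modelA = 0.40, modelB = 0.10, modelC = 0.02))
```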

## Uncertainty propagation for empirical Bayes interval estimates: A fiducial approach

D. R. Bickel, “Confidence distributions applied to propagating uncertainty to inference based on estimating the local false discovery rate: A fiducial continuum from confidence sets to empirical Bayes set estimates as the number of comparisons increases,” *Communications in Statistics – Theory and Methods* **46**, 10788-10799 (2017). Published article | Free access (limited time) | 2014 preprint

Two problems confronting the eclectic approach to statistics result from its lack of a unifying theoretical foundation. First, there is typically no continuity between a p-value reported as a level of evidence for a hypothesis when the information needed to estimate a relevant prior is lacking and an estimated posterior probability of the hypothesis reported when that information is available. Second, the empirical Bayes methods recommended do not propagate the uncertainty due to estimating the prior.

The latter problem is addressed by applying a coherent form of fiducial inference to hierarchical models, yielding empirical Bayes set estimates that reflect uncertainty in estimating the prior. Plugging in the maximum likelihood estimator, while not propagating that uncertainty, provides continuity from single comparisons to large numbers of comparisons.
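A rough picture of what propagating that uncertainty does (a parametric-bootstrap stand-in here; the paper itself works with a confidence, i.e. fiducial, distribution): let the estimated null proportion vary over a plausible sampling distribution and carry that variation into the local false discovery rate.

```r
# Illustration only: propagate uncertainty in the estimated null proportion
# p0 into an interval for the local FDR at z = 2.5. The paper uses a
# confidence (fiducial) distribution; a beta stand-in is assumed here.
lfdr_given_p0 <- function(z, p0, sigma = 3) {
  p0 * dnorm(z) / (p0 * dnorm(z) + (1 - p0) * dnorm(z, sd = sigma))
}

set.seed(1)
p0_draws <- rbeta(2000, 180, 20)  # assumed sampling distribution, centered near 0.9
round(quantile(lfdr_given_p0(2.5, p0_draws), c(0.025, 0.5, 0.975)), 3)
lfdr_given_p0(2.5, 0.9)  # the plug-in point estimate ignores this spread
```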