Archive
Recent preprints by David Bickel
Evidential unification of confidence and empirical Bayes methods
D. R. Bickel, “Confidence distributions and empirical Bayes posterior distributions unified as distributions of evidential support,” Working Paper, DOI: 10.5281/zenodo.2529438, http://doi.org/10.5281/zenodo.2529438 (2018). 2018 preprint
How to choose features or p values for empirical Bayes estimation of the local false discovery rate
F. Abbas-Aghababazadeh, M. Alvo, and D. R. Bickel, “Estimating the local false discovery rate via a bootstrap solution to the reference class problem,” PLoS ONE 13, e0206902 (2018) | full text | 2016 preprint
R package for estimating local false discovery rates using empirical Bayes methods
LFDREmpiricalBayes — Estimating Local False Discovery Rates Using Empirical Bayes Methods
Lower the statistical significance threshold to 0.005—or 0.001?
D. R. Bickel, “Sharpen statistical significance: Evidence thresholds and Bayes factors sharpened into Occam’s razors,” Working Paper, University of Ottawa, hal-01851322, https://hal.archives-ouvertes.fr/hal-01851322 (2018). 2018 preprint
Should the default significance level be changed from 0.05 to 0.005?
My comments in this discussion of “Redefine statistical significance”:
The call for smaller significance levels cannot be based only on mathematical arguments that p values tend to be much lower than posterior probabilities, as Andrew Gelman and Christian Robert pointed out in their comment (“Revised evidence for statistical standards”).
In the rejoinder, Valen Johnson made it clear that the call is also based on empirical findings of non-reproducible research results. How many of those findings are significant at the 0.005 level? Should meta-analysis have a less stringent standard?
…
“Irreplicable results can’t possibly add empirical clout to the mathematical argument unless it is already known or assumed to be caused by a given cut-off, and further, that lowering it would diminish those problems.”
The preprint cites empirical results to support its use of the 1:10 prior odds. If that is in fact a reliable estimate of the prior odds for the reference class of previous studies, then, in the absence of other relevant information, it would be reasonable to use it as input for Bayes’s theorem.
John Byrd asks, “Is 1:10 replicable?” Is it important to ask whether a 1:1 prior odds can be rejected at the 0.005 significance level?
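To make the arithmetic concrete, here is a minimal R sketch of how a 1:10 prior odds combines with a Bayes factor via Bayes’s theorem in odds form. The Sellke–Bayarri–Berger bound 1/(-e p log p) is used here only as a convenient upper bound on the Bayes factor implied by a p value; the preprint’s own calibration may differ.

    # Posterior probability of the alternative from prior odds and a p value,
    # using the Sellke-Bayarri-Berger bound (valid for p < 1/e) in place of
    # an exact Bayes factor.
    bf_bound <- function(p) 1 / (-exp(1) * p * log(p))
    posterior_prob <- function(p, prior_odds) {
      post_odds <- prior_odds * bf_bound(p)  # Bayes's theorem in odds form
      post_odds / (1 + post_odds)
    }
    posterior_prob(0.005, prior_odds = 1 / 10)  # about 0.58
    posterior_prob(0.05,  prior_odds = 1 / 10)  # about 0.20

Even at the 0.005 threshold, a 1:10 prior odds leaves the posterior probability of a true effect near 0.58, which is why the reliability of that prior odds estimate matters.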
An R package to transform false discovery rates to posterior probability estimates
There are many estimators of the false discovery rate. This package computes the nonlocal false discovery rate (NFDR) and three estimators of the local false discovery rate: the corrected false discovery rate (CFDR), the re-ranked false discovery rate (RFDR), and a blended estimator.
Source: CRAN – Package CorrectedFDR
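As a rough illustration of the kind of transform the package performs (using a generic device, not the package’s CFDR, RFDR, or NFDR definitions), tail-area FDR estimates such as Benjamini–Hochberg q-values can be differenced to yield crude estimates of the local false discovery rate, which is the posterior probability that a given null hypothesis is true:

    # Simulated p values: 10% of tests have true effects.
    set.seed(1)
    z <- c(rnorm(900), rnorm(100, mean = 3))
    p <- 2 * pnorm(-abs(z))

    # i * q_(i) estimates the expected number of false discoveries among the
    # top i tests, so its first difference crudely estimates the posterior
    # null probability (local FDR) of the i-th most significant test.
    q <- p.adjust(p, method = "BH")
    o <- order(p)
    efd <- seq_along(p) * q[o]
    lfdr <- numeric(length(p))
    lfdr[o] <- pmin(1, pmax(0, diff(c(0, efd))))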
LFDR.MLE package | R Documentation
Suite of R functions for the estimation of the local false discovery rate (LFDR) using Type II maximum likelihood estimation (MLE).
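For intuition, here is a minimal sketch of Type II (marginal) maximum likelihood estimation of the LFDR under a simple two-groups model for z statistics; the model, starting values, and data are assumptions for illustration, and LFDR.MLE’s own model and interface differ.

    # Two-groups model: f(z) = p0*N(0,1) + (1-p0)*N(mu,1).
    set.seed(1)
    z <- c(rnorm(900), rnorm(100, mean = 3))
    negloglik <- function(par) {
      p0 <- plogis(par[1])  # logit parameterization keeps p0 in (0, 1)
      mu <- par[2]
      -sum(log(p0 * dnorm(z) + (1 - p0) * dnorm(z, mean = mu)))
    }
    fit <- optim(c(qlogis(0.9), 2), negloglik)  # Type II (marginal) MLE
    p0 <- plogis(fit$par[1]); mu <- fit$par[2]
    # Estimated LFDR: posterior probability that each null hypothesis is true.
    lfdr <- p0 * dnorm(z) / (p0 * dnorm(z) + (1 - p0) * dnorm(z, mean = mu))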
Inference to the best explanation of the evidence
P values and Bayesian methods have well-known drawbacks when it comes to measuring the strength of the evidence supporting one hypothesis over another. To overcome those drawbacks, this paper proposes an alternative method of quantifying how much support a hypothesis has from evidence consisting of data.
D. R. Bickel, “The strength of statistical evidence for composite hypotheses: Inference to the best explanation,” Statistica Sinica 22, 1147-1198 (2012). Full article | 2010 version
The special law of likelihood has many advantages over more commonly used approaches to measuring the strength of statistical evidence. However, it can only measure the support of a hypothesis that corresponds to a single distribution. The proposed general law of likelihood can also measure the extent to which the data support a hypothesis that corresponds to multiple distributions. That is accomplished by formalizing inference to the best explanation.
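On one simple reading of the general law of likelihood, the support for one composite hypothesis over another is the ratio of the likelihoods maximized over each hypothesis, i.e., the best explanation each side can offer. A sketch in R, with an assumed normal model and illustrative data:

    # Illustrative data, assumed N(theta, 1); hypotheses H1: theta > 1
    # and H2: theta <= 0 are composite (each contains many distributions).
    x <- c(0.8, 1.3, 2.1, 0.5, 1.7)
    loglik <- function(theta) sum(dnorm(x, mean = theta, log = TRUE))
    sup1 <- optimize(loglik, c(1, 10), maximum = TRUE)$objective
    sup2 <- optimize(loglik, c(-10, 0), maximum = TRUE)$objective
    exp(sup1 - sup2)  # strength of evidence for H1 over H2

When each hypothesis contains a single distribution, this reduces to the ordinary likelihood ratio of the special law of likelihood.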