Archive
R package for estimating local false discovery rates using empirical Bayes methods
LFDREmpiricalBayes — Estimating Local False Discovery Rates Using Empirical Bayes Methods
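As background for what the package estimates, the sketch below illustrates the two-group model that underlies local false discovery rate (LFDR) estimation, in base R. The null proportion p0 and the alternative mean mu1 are treated as known here for simplicity; this is an illustration of the general idea, not the LFDREmpiricalBayes interface.

```r
# Minimal sketch of the two-group model behind local false discovery rate
# (LFDR) estimation; illustrative only, not the LFDREmpiricalBayes API.
# Assumes z-statistics that are N(0, 1) under the null and N(mu1, 1) under
# the alternative, with mu1 and the null proportion p0 taken as known.
lfdr_two_group <- function(z, p0 = 0.9, mu1 = 2.5) {
  f0 <- dnorm(z)                 # null density
  f1 <- dnorm(z, mean = mu1)     # assumed alternative density
  f  <- p0 * f0 + (1 - p0) * f1  # two-group mixture density
  p0 * f0 / f                    # LFDR = posterior probability of the null
}

set.seed(1)
z <- c(rnorm(90), rnorm(10, mean = 2.5))  # 90 null, 10 alternative features
head(round(lfdr_two_group(z), 3))
```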
Lower the statistical significance threshold to 0.005—or 0.001?
D. R. Bickel, “Sharpen statistical significance: Evidence thresholds and Bayes factors sharpened into Occam’s razors,” Working Paper, University of Ottawa, <hal-01851322> https://hal.archives-ouvertes.fr/hal-01851322 (2018). 2018 preprint
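For context on thresholds such as 0.005 and 0.001, the standard -e p log(p) bound of Sellke, Bayarri, and Berger (2001) caps the Bayes factor against the null that a given p-value can support. The quick calculation below is that textbook calibration, not a result from the working paper above.

```r
# Upper bound on the Bayes factor against the null implied by a p-value,
# using the -e * p * log(p) bound (valid for p < 1/e); standard calibration,
# not specific to the paper cited above.
bf_bound <- function(p) 1 / (-exp(1) * p * log(p))

round(sapply(c(0.05, 0.005, 0.001), bf_bound), 1)
# roughly 2.5, 13.9, 53.3: a p-value of 0.05 caps the evidence against the
# null at modest odds, while 0.005 or 0.001 allows much stronger odds.
```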
Pre-data insights update priors via Bayes’s theorem
D. R. Bickel, “Bayesian revision of a prior given prior-data conflict, expert opinion, or a similar insight: A large-deviation approach,” Statistics 52, 552-570 (2018). Full text | 2015 preprint | Simple explanation
How to adjust statistical inferences for the simplicity of distributions
D. R. Bickel, “Confidence intervals, significance values, maximum likelihood estimates, etc. sharpened into Occam’s razors,” Working Paper, University of Ottawa, <hal-01799519> https://hal.archives-ouvertes.fr/hal-01799519 (2018). 2018 preprint | Slides
An R package to transform false discovery rates to posterior probability estimates
There are many estimators of the false discovery rate. In this package we compute the Nonlocal False Discovery Rate (NFDR) and the following estimators of the local false discovery rate: the Corrected False Discovery Rate (CFDR), the Re-ranked False Discovery Rate (RFDR), and the blended estimator.
Source: CRAN – Package CorrectedFDR
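As a point of reference for the estimators named above, the sketch below computes a simple tail-area (nonlocal) false discovery rate estimate from p-values in base R. The CFDR, RFDR, and blended estimators are not reproduced here, and this is not the CorrectedFDR interface.

```r
# Sketch of a nonlocal (tail-area) FDR estimate from p-values; illustrative
# only, not the CorrectedFDR package API. For the threshold set at the i-th
# smallest p-value, the estimated FDR is min(1, m * p_(i) / i), assuming all
# hypotheses could be null (p0 = 1).
nfdr_estimate <- function(p) {
  m <- length(p)
  ord <- order(p)
  fdr <- pmin(1, m * p[ord] / seq_len(m))  # estimate at each sorted p-value
  fdr[order(ord)]                          # return in the original order
}

set.seed(2)
p <- c(runif(90), rbeta(10, 0.2, 1))  # mostly null, a few small p-values
head(round(nfdr_estimate(p), 3))
```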
LFDR.MLE-package | R Documentation
Suite of R functions for the estimation of the local false discovery rate (LFDR) using Type II maximum likelihood estimation (MLE).
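The sketch below illustrates what Type II (marginal) maximum likelihood estimation of the LFDR looks like under a simple two-group normal model, with the null proportion and alternative mean estimated by optim. The LFDR.MLE functions use their own parameterization and options, so treat this as a conceptual sketch rather than the package interface.

```r
# Sketch of Type II (marginal) maximum likelihood LFDR estimation under a
# two-group model: z ~ p0 * N(0, 1) + (1 - p0) * N(mu1, 1). The null
# proportion p0 and alternative mean mu1 are estimated by maximizing the
# marginal likelihood; illustrative only, not the LFDR.MLE interface.
negloglik <- function(par, z) {
  p0  <- plogis(par[1])   # keep p0 in (0, 1)
  mu1 <- par[2]
  -sum(log(p0 * dnorm(z) + (1 - p0) * dnorm(z, mean = mu1)))
}

set.seed(3)
z <- c(rnorm(900), rnorm(100, mean = 3))
fit <- optim(c(qlogis(0.9), 2), negloglik, z = z)
p0_hat  <- plogis(fit$par[1])
mu1_hat <- fit$par[2]

# Plug-in LFDR estimates: posterior probability that each feature is null.
lfdr_hat <- p0_hat * dnorm(z) /
  (p0_hat * dnorm(z) + (1 - p0_hat) * dnorm(z, mean = mu1_hat))
round(c(p0 = p0_hat, mu1 = mu1_hat), 3)
```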
Inference to the best explanation of the evidence
The p-value and Bayesian methods have well-known drawbacks when it comes to measuring the strength of the evidence supporting one hypothesis over another. To overcome those drawbacks, this paper proposes an alternative method of quantifying how much support a hypothesis has from evidence consisting of data.
D. R. Bickel, “The strength of statistical evidence for composite hypotheses: Inference to the best explanation,” Statistica Sinica 22, 1147-1198 (2012). Full article | 2010 version
The special law of likelihood has many advantages over more commonly used approaches to measuring the strength of statistical evidence. However, it only can measure the support of a hypothesis that corresponds to a single distribution. The proposed general law of likelihood also can measure the extent to which the data support a hypothesis that corresponds to multiple distributions. That is accomplished by formalizing inference to the best explanation.
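Under the general law of likelihood, the support for a composite hypothesis is measured by the likelihood of its best-explaining member. The toy sketch below, for normal data with known variance, compares H0: mu = 0 with the composite H1: mu > 0 by taking the ratio of the maximized likelihood over H1 to the likelihood under H0; it is an illustration of the idea, not code from the paper.

```r
# Sketch of the general law of likelihood: the support for a composite
# hypothesis is the likelihood of its best-explaining member, so the strength
# of evidence for H1: mu > 0 over H0: mu = 0 is the ratio of maximized
# likelihoods. Normal model with known unit variance; illustrative only.
loglik <- function(mu, x) sum(dnorm(x, mean = mu, log = TRUE))

evidence_ratio <- function(x) {
  best_h1 <- optimize(loglik, interval = c(1e-6, 10), x = x, maximum = TRUE)
  exp(best_h1$objective - loglik(0, x))  # likelihood of the best explanation
}                                        # in H1 relative to the H0 density

set.seed(4)
x <- rnorm(20, mean = 0.6)
evidence_ratio(x)  # values well above 1 favor H1 over H0
```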
How to make decisions using somewhat reliable posterior distributions
D. R. Bickel, “Departing from Bayesian inference toward minimaxity to the extent that the posterior distribution is unreliable,” Working Paper, University of Ottawa, <hal-01673783> https://hal.archives-ouvertes.fr/hal-01673783 (2017). 2017 preprint
Uncertainty propagation for empirical Bayes interval estimates: A fiducial approach
D. R. Bickel, “Confidence distributions applied to propagating uncertainty to inference based on estimating the local false discovery rate: A fiducial continuum from confidence sets to empirical Bayes set estimates as the number of comparisons increases,” Communications in Statistics – Theory and Methods 46, 10788-10799 (2017). Published article | Free access (limited time) | 2014 preprint
Two problems confronting the eclectic approach to statistics result from its lack of a unifying theoretical foundation. First, there is typically no continuity between, on one hand, a p-value reported as a level of evidence for a hypothesis when the information needed to estimate a relevant prior is lacking and, on the other hand, an estimated posterior probability of the hypothesis reported when such information is available. Second, the empirical Bayes methods recommended do not propagate the uncertainty due to estimating the prior.
The latter problem is addressed by applying a coherent form of fiducial inference to hierarchical models, yielding empirical Bayes set estimates that reflect uncertainty in estimating the prior. Plugging in the maximum likelihood estimator, while not propagating that uncertainty, provides continuity from single comparisons to large numbers of comparisons.
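As a rough illustration of what propagating the uncertainty in the estimated prior means in practice, the sketch below contrasts a plug-in LFDR estimate at a single test statistic with a parametric-bootstrap interval that carries the uncertainty in the estimated null proportion through to the LFDR. The bootstrap here is a stand-in for, not a reproduction of, the paper's confidence-distribution approach, and the alternative mean is treated as known.

```r
# Contrast between a plug-in LFDR estimate and an interval that propagates
# the uncertainty in the estimated null proportion p0 (parametric bootstrap
# used as a stand-in for the paper's confidence-distribution approach).
lfdr_at <- function(z0, p0, mu1 = 3) {
  p0 * dnorm(z0) / (p0 * dnorm(z0) + (1 - p0) * dnorm(z0, mean = mu1))
}

set.seed(5)
m <- 200; p0_true <- 0.9
z <- c(rnorm(m * p0_true), rnorm(m * (1 - p0_true), mean = 3))

# Crude estimator of p0 (proportion of central z-values, rescaled by the
# null probability of the central region); then the plug-in LFDR at z0.
p0_hat <- min(1, mean(abs(z) < 1) / (pnorm(1) - pnorm(-1)))
z0 <- 2.5
plugin <- lfdr_at(z0, p0_hat)

# Parametric bootstrap: regenerate data, re-estimate p0, recompute the LFDR.
boot <- replicate(500, {
  zb <- c(rnorm(round(m * p0_hat)), rnorm(m - round(m * p0_hat), mean = 3))
  p0b <- min(1, mean(abs(zb) < 1) / (pnorm(1) - pnorm(-1)))
  lfdr_at(z0, p0b)
})
c(plugin = plugin, quantile(boot, c(0.025, 0.975)))
```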