## LFDR.MLE-package | R Documentation

Suite of R functions for estimating the local false discovery rate (LFDR) by Type II maximum likelihood estimation (MLE).
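The idea behind the package can be sketched outside R. The following Python sketch (not the LFDR.MLE package's own API — its function names and signatures are assumptions here) illustrates Type II MLE for the standard two-group model: test statistics are modeled as a mixture of a null density and an alternative density, the mixture parameters are estimated by maximizing the marginal likelihood, and the LFDR of each statistic is the estimated posterior probability of the null.

```python
# Hedged sketch of two-group LFDR estimation by Type II (marginal) MLE.
# Model assumption (not from the package): z_i ~ p0*N(0,1) + (1-p0)*N(mu, sigma^2).
# LFDR(z) = p0*f0(z)/f(z) is the estimated posterior null probability.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_two_group(z):
    """Estimate (p0, mu, sigma) by maximizing the marginal likelihood."""
    def neg_loglik(params):
        logit_p0, mu, log_sigma = params
        p0 = 1.0 / (1.0 + np.exp(-logit_p0))   # keeps p0 in (0, 1)
        sigma = np.exp(log_sigma)               # keeps sigma > 0
        f = p0 * norm.pdf(z) + (1 - p0) * norm.pdf(z, mu, sigma)
        return -np.sum(np.log(f))
    res = minimize(neg_loglik, x0=[2.0, 1.0, 0.0], method="Nelder-Mead")
    logit_p0, mu, log_sigma = res.x
    return 1.0 / (1.0 + np.exp(-logit_p0)), mu, np.exp(log_sigma)

def lfdr(z, p0, mu, sigma):
    """Local false discovery rate: P(null | z) under the fitted mixture."""
    f0 = norm.pdf(z)
    f = p0 * f0 + (1 - p0) * norm.pdf(z, mu, sigma)
    return np.clip(p0 * f0 / f, 0.0, 1.0)

# Simulated example: 90% null statistics, 10% shifted alternatives.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])
p0_hat, mu_hat, sigma_hat = fit_two_group(z)
scores = lfdr(z, p0_hat, mu_hat, sigma_hat)
```

Statistics near zero get LFDR estimates close to one (consistent with the null), while extreme statistics get estimates close to zero. Plugging the MLE into the LFDR formula, rather than averaging over parameter uncertainty, is what makes this "Type II" maximum likelihood.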

## Inference to the best explanation of the evidence

The *p* value and Bayesian methods have well-known drawbacks as measures of the strength of the evidence supporting one hypothesis over another. To overcome those drawbacks, this paper proposes an alternative method of quantifying how much support a hypothesis receives from evidence consisting of data.

D. R. Bickel, “The strength of statistical evidence for composite hypotheses: Inference to the best explanation,” *Statistica Sinica* **22**, 1147-1198 (2012).

The special law of likelihood has many advantages over more commonly used approaches to measuring the strength of statistical evidence. However, it can only measure the support of a hypothesis that corresponds to a single distribution. The proposed general law of likelihood can also measure the extent to which the data support a hypothesis that corresponds to multiple distributions. That is accomplished by formalizing inference to the best explanation.
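The contrast the abstract draws can be written out. In generic notation (the symbols below are illustrative, not necessarily the paper's own), the special law compares two simple hypotheses by their likelihood ratio, while the general law measures support for a composite hypothesis by the likelihood of its best explanation, i.e. its best-fitting member:

```latex
% Special law: support for the simple hypothesis \theta_1 over \theta_0
S(\theta_1; \theta_0) = \frac{L(\theta_1)}{L(\theta_0)}

% General law: composite hypotheses H_1 \subseteq \Theta and H_0 \subseteq \Theta
% are each represented by their best explanation, the likelihood-maximizing member
S(H_1; H_0) = \frac{\sup_{\theta \in H_1} L(\theta)}{\sup_{\theta \in H_0} L(\theta)}
```

When each hypothesis contains a single distribution, the general law reduces to the special law, which is why the former is presented as an extension rather than a rival.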

## How to make decisions using somewhat reliable posterior distributions

D. R. Bickel, “Departing from Bayesian inference toward minimaxity to the extent that the posterior distribution is unreliable,” Working Paper, University of Ottawa, hal-01673783, https://hal.archives-ouvertes.fr/hal-01673783 (2017).

## Uncertainty propagation for empirical Bayes interval estimates: A fiducial approach

D. R. Bickel, “Confidence distributions applied to propagating uncertainty to inference based on estimating the local false discovery rate: A fiducial continuum from confidence sets to empirical Bayes set estimates as the number of comparisons increases,” *Communications in Statistics – Theory and Methods* **46**, 10788-10799 (2017).

Two problems confronting the eclectic approach to statistics result from its lack of a unifying theoretical foundation. First, there is typically no continuity between a p-value reported as a level of evidence for a hypothesis when the information needed to estimate a relevant prior is absent and an estimated posterior probability of the hypothesis reported when such information is present. Second, the recommended empirical Bayes methods do not propagate the uncertainty due to estimating the prior.

The latter problem is addressed by applying a coherent form of fiducial inference to hierarchical models, yielding empirical Bayes set estimates that reflect uncertainty in estimating the prior. Plugging in the maximum likelihood estimator, while not propagating that uncertainty, provides continuity from single comparisons to large numbers of comparisons.

## What’s the goal of statistics in scientific applications?

the goal [of statistical inference in science] is not to infer highly probable claims (in the formal sense) but claims which have been highly probed and have passed severe probes

Source: Deborah G. Mayo, “Performance or Probativeness? E.S. Pearson’s Statistical Philosophy” | Error Statistics Philosophy

## “a list of possibly predatory publishers” based on Beall’s List

This is a list of possibly predatory publishers. The kernel for this list was extracted from the archive of Beall’s List at web.archive.org. It will be updated as new information or suggested edits are submitted or found by the maintainers of this site.

Source: List of Predatory Publishers | Stop Predatory Journals (accessed 24 August 2017)