Archive
“Tips for presenting at a scientific conference”
Introduction: I served as a judge for some of the student presentations at the 2016 Canadian Statistics Student Conference (CSSC). The conference was both a learning opportunity and a networking op…
Source: Tips for presenting at a scientific conference | The Chemical Statistician
“A Litany of Problems With p-values”
Bayesian, likelihoodist, and frequentist views appear in the comments in Statistical Thinking: A Litany of Problems With p-values.
“The Fiducialist Papers” archived in favor of “sIBEe”
The name of the website The Fiducialist Papers: Evidence and Likelihood was just broadened to Statistical Inference to the Best Explanation of the Evidence (sIBEe).
Should the default significance level be changed from 0.05 to 0.005?
My comments in this discussion of “Redefine statistical significance”:
The call for smaller significance levels cannot be based only on mathematical arguments that p-values tend to be much lower than posterior probabilities, as Andrew Gelman and Christian Robert pointed out in their comment (“Revised evidence for statistical standards”).
In the rejoinder, Valen Johnson made it clear that the call is also based on empirical findings of non-reproducible research results. How many of those findings are significant at the 0.005 level? Should meta-analysis have a less stringent standard?
…
“Irreplicable results can’t possibly add empirical clout to the mathematical argument unless it is already known or assumed to be caused by a given cut-off, and further, that lowering it would diminish those problems.”
The preprint cites empirical results to support its use of the 1:10 prior odds. If that is in fact a reliable estimate of the prior odds for the reference class of previous studies, then, in the absence of other relevant information, it would be reasonable to use it as input for Bayes’s theorem, as in the sketch below.
John Byrd asks, “Is 1:10 replicable?” Is it important to ask whether a 1:1 prior odds can be rejected at the 0.005 significance level?
END
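As a rough numerical sketch of the Bayes’s-theorem point above (not the preprint’s own calculation), the snippet below combines the assumed 1:10 prior odds with the Sellke–Bayarri–Berger upper bound on the Bayes factor, BF(p) ≤ 1/(−e p ln p), to bound the posterior probability of the alternative at the two candidate significance levels; the choice of that bound is an illustrative assumption.

# A minimal sketch, not the preprint's calculation: combine an assumed 1:10
# prior odds with the Sellke-Bayarri-Berger bound on the Bayes factor,
# BF(p) <= 1 / (-e * p * ln(p)) for p < 1/e, to bound the posterior odds.
from math import e, log

def bayes_factor_bound(p):
    """Upper bound on the Bayes factor favoring H1 at a p-value p < 1/e."""
    return 1.0 / (-e * p * log(p))

def posterior_prob_bound(p, prior_odds=1 / 10):
    """Upper bound on P(H1 | data): posterior odds = Bayes factor * prior odds."""
    post_odds = bayes_factor_bound(p) * prior_odds
    return post_odds / (1.0 + post_odds)

for alpha in (0.05, 0.005):
    print(f"p = {alpha}: BF <= {bayes_factor_bound(alpha):.1f}, "
          f"P(H1 | data) <= {posterior_prob_bound(alpha):.2f}")

Under those assumptions, a p-value right at 0.05 caps the posterior probability of the alternative near 0.20, while one at 0.005 caps it near 0.58, which is roughly the kind of arithmetic the mathematical side of the argument relies on.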
The Fiducialist Papers: Evidence and Likelihood
“The Fiducialist Papers” was just added to the name of the Evidence and Likelihood website.
Do models have probabilities or just possibilities?
Andrew says: David: I don’t think it makes sense to talk of the probability of a model. See this paper with Shalizi for much discussion of this point.
David Bickel says: If models do not have probabilities, perhaps they have possibilities in the sense of possibility theory. For example, the possibility of a model might be a function of its adequacy according to a model checking procedure: Appendix B of https://goo.gl/5s7bS3
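A hypothetical way to make that concrete, not necessarily the construction in the linked appendix: treat each candidate model’s adequacy from a model checking procedure, say a goodness-of-fit p-value, as a plausibility score and rescale so the best-checking model has possibility 1, as a possibility distribution requires.

# Hypothetical illustration only: turn model-checking adequacy scores
# (here, goodness-of-fit p-values) into a possibility distribution by
# normalizing so the best-checking model has possibility 1.

def possibility_distribution(adequacy):
    """Map {model: adequacy in [0, 1]} to {model: possibility}, max = 1."""
    top = max(adequacy.values())
    if top == 0:
        raise ValueError("every model failed the check; no possibilities assigned")
    return {model: score / top for model, score in adequacy.items()}

# Illustrative adequacy scores for three candidate models.
adequacy = {"normal": 0.40, "lognormal": 0.10, "exponential": 0.01}
print(possibility_distribution(adequacy))
# {'normal': 1.0, 'lognormal': 0.25, 'exponential': 0.025}

Unlike probabilities, these possibilities need not sum to one; they only grade how tenable each model remains after checking.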
What’s the goal of statistics in scientific applications?
the goal [of statistical inference in science] is not to infer highly probable claims (in the formal sense)* but claims which have been highly probed and have passed severe probes
Source: Deborah G. Mayo’s Performance or Probativeness? E.S. Pearson’s Statistical Philosophy | Error Statistics Philosophy
“a list of possibly predatory publishers” based on Beall’s List
This is a list of possibly predatory publishers. The kernel for this list was extracted from the archive of Beall’s List at web.archive.org. It will be updated as new information or suggested edits are submitted or found by the maintainers of this site.
Source: List of Predatory Publishers | Stop Predatory Journals (accessed 24 August 2017)
“Can You Change Your Bayesian Prior?”
Sometimes. A subjective Bayesian encountering completely unexpected data changes the prior. In the philosophy literature, that has been compared to changing the premises of a deductive argument. It has been argued that just as one may revise a premise without abandoning deductive logic as a tool, one may revise a prior without abandoning Bayesian updating as a tool.
SSC 2017 talk on the misleading nature of false discovery rates
Planned for today’s SSC 2017 session Statistical Methods for Omics Data (Room E3 270):
“Correcting false discovery rates for their bias toward false positives”