Empirical Bayes single-comparison procedure

1 July 2016

D. R. Bickel, “Small-scale inference: Empirical Bayes and confidence methods for as few as a single comparison,” International Statistical Review 82, 457-476 (2014). Published version | 2011 preprint

Parametric empirical Bayes methods of estimating the local false discovery rate by maximum likelihood apply not only to the large-scale settings for which they were developed, but, with a simple modification, also to small numbers of comparisons. In fact, data for a single comparison are sufficient under broad conditions, as seen from applications to measurements of the abundance levels of 20 proteins and from simulation studies with confidence-based inference as the competitor.
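How such maximum likelihood estimation works can be sketched in a few lines of R. The following is a minimal illustration under an assumed two-group normal mixture model, not the paper's single-comparison method; the model, starting values, and toy data are assumptions made for the example.

# A minimal sketch (assumptions mine, not the paper's method):
# maximum likelihood estimation of the local false discovery rate
# under the two-group model z ~ pi0 * N(0, 1) + (1 - pi0) * N(mu, 1),
# so that lfdr(z) = pi0 * f0(z) / f(z).
lfdr_ml <- function(z) {
  nll <- function(par) {                      # negative log-likelihood
    pi0 <- plogis(par[1])                     # keep pi0 in (0, 1)
    mu  <- par[2]
    -sum(log(pi0 * dnorm(z) + (1 - pi0) * dnorm(z, mu)))
  }
  fit <- optim(c(0, 2), nll)                  # Nelder-Mead by default
  pi0 <- plogis(fit$par[1]); mu <- fit$par[2]
  pi0 * dnorm(z) / (pi0 * dnorm(z) + (1 - pi0) * dnorm(z, mu))
}

# Toy example: z-statistics for 20 comparisons, 5 of them affected
set.seed(1)
z <- c(rnorm(15), rnorm(5, mean = 3))
round(lfdr_ml(z), 2)

Comparisons with large z-statistics receive low estimated local false discovery rates; the paper's modification makes the same idea usable when as little as a single comparison is available.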

Adaptively selecting an empirical Bayes reference class

1 June 2016

F. A. Aghababazadeh, M. Alvo, and D. R. Bickel, “Estimating the local false discovery rate via a bootstrap solution to the reference class problem,” Working Paper, University of Ottawa, deposited in uO Research at http://hdl.handle.net/10393/34295 (2016). 2016 preprint

Categories: empirical Bayes, preprints

Empirical Bayes software (R packages)

1 May 2016
Categories: empirical Bayes, software

Frequentist inference principles

2 April 2016
Reid, Nancy; Cox, David R.
On some principles of statistical inference.
Int. Stat. Rev. 83 (2015), no. 2, 293–308.
62A01 (62F05 62F15 62F25)

Reid and Cox bear the standard of a broad Fisherian school of frequentist statistics embracing not only time-tested confidence intervals and p values derived from parametric models, perfected by higher-order asymptotics, but also such developments as false discovery rates and modern versions of the fiducial argument [see S. Nadarajah, S. I. Bityukov and N. V. Krasnikov, Stat. Methodol. 22 (2015), 23–46; MR3261595]. To defend this confederation, they wield inference principles against rival visions of frequentism as well as against Bayesianism.
While agreeing with other frequentists on the necessity of guaranteeing good performance over repeated sampling, Reid and Cox also value neglected rules of inference such as the conditionality principle. Against the steady advance of nonparametric methods, they point to the interpretive power of parametric models.

Frequentist decision theory is mentioned only in passing. Glimpses of the authors’ perspectives on it appear in [D. R. Cox, Principles of statistical inference, Cambridge Univ. Press, Cambridge, 2006 (8.2); MR2278763 (2007g:62007)] and [N. M. Reid, Statist. Sci. 9 (1994), no. 3, 439–455; MR1325436 (95m:01020)].

On the Bayes front, Reid and Cox highlight the success frequentist methods have enjoyed in scientific applications as a decisive victory over those Bayesian methods that are most consistent with their subjectivist foundations. Indeed, no one can deny what C. Howson and P. Urbach call the “social success” of frequentist methods [Scientific reasoning: the Bayesian approach, third edition, Open Court, Chicago, IL, 2005 (p. 9)]. Reid and Cox do not attribute that widespread use in scientific practice to political factors.

Rather, for scientific inference as opposed to individual decision making, they find frequentist methods more suitable in principle than fully Bayesian methods. For while the need for an agent to reach a decision recognizes no line between models of the phenomena under study and models of an agent’s thought, science requires clear reporting on the basis of the former without introducing biases from the latter. Although subjective considerations admittedly come into play in interpreting reports of statistical analyses, a dependence of the reports themselves on such considerations conflicts with scientific methodology. In short, the Bayesian theories supporting personal inference are irrelevant as far as science is concerned, even if they are useful in personal decision making. This viewpoint stops short of that of Philip Stark, who went so far as to question the practicality of even that private application of Bayesian inference [SIAM/ASA J. Uncertain. Quantif. 3 (2015), no. 1, 586–598; MR3372107].

On reference priors designed to minimize subjective input, Reid and Cox point out that those that perform well with low-dimensional parameters can fail in high dimensions. Eliminating the prior entirely leads to the pure likelihood approach, which, based on the strong likelihood principle, limits the scope even further, to problems with a scalar parameter of interest and no nuisance parameters [A. W. F. Edwards, Likelihood. An account of the statistical concept of likelihood and its application to scientific inference, Cambridge Univ. Press, London, 1972; MR0348869 (50 #1363)]. More recent developments of that approach were explained by R. M. Royall [Statistical evidence, Monogr. Statist. Appl. Probab., 71, Chapman & Hall, London, 1997; MR1629481 (99f:62012)] and C. A. Rohde [Introductory statistical inference with the likelihood function, Springer, Cham, 2014 (Chapter 18); MR3243684].
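To make the limited scope concrete, here is a small R sketch of a pure likelihood analysis for a scalar parameter with no nuisance parameters; the data and model are toy assumptions of mine, not examples from the review.

# Relative likelihood for a normal mean with known sd = 1, summarized
# by the 1/8 likelihood interval Royall uses as a benchmark.
z <- c(-0.4, 1.2, 0.7, 0.3, 1.5)                       # toy data
loglik <- function(mu) sum(dnorm(z, mu, 1, log = TRUE))
mu_grid <- seq(-1, 3, by = 0.001)
rel <- exp(sapply(mu_grid, loglik) - loglik(mean(z)))  # MLE is mean(z)
range(mu_grid[rel >= 1/8])                             # 1/8 likelihood interval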

Reid and Cox see some utility in Bayesian methods that have good performance by frequentist standards, noting that such performance can require the prior to depend on which parameter happens to be of interest and, through model checking, on the data. Such dependence raises the question, “Is this, then, Bayesian? The prior distribution will then not represent prior knowledge of the parameter in [that] case, but an understanding of the model” [T. Schweder and N. L. Hjort, Scand. J. Statist. 29 (2002), no. 2, 309–332; MR1909788 (2003d:62085)].
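What judging a Bayesian procedure by frequentist standards looks like can be illustrated by a brief simulation (my own, not from the review): checking the repeated-sampling coverage of a 95% credible interval under the Jeffreys prior Beta(1/2, 1/2), a standard probability-matching prior for a binomial proportion.

# Coverage of the equal-tailed 95% credible interval under the
# Jeffreys prior; the posterior given x successes in n trials is
# Beta(x + 1/2, n - x + 1/2).
set.seed(1)
n <- 30; p <- 0.3; reps <- 10000
x <- rbinom(reps, n, p)                        # simulated experiments
lower <- qbeta(0.025, x + 0.5, n - x + 0.5)
upper <- qbeta(0.975, x + 0.5, n - x + 0.5)
mean(lower < p & p < upper)                    # close to the nominal 0.95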

Reviewed by David R. Bickel

This review first appeared at “On some principles of statistical inference” (Mathematical Reviews) and is used with permission from the American Mathematical Society.

False discovery rates are misleadingly low

2 March 2016

D. R. Bickel, “Correcting false discovery rates for their bias toward false positives,” Working Paper, University of Ottawa, deposited in uO Research at http://hdl.handle.net/10393/34277 (2016). 2016 preprint

Statistics & biostatistics graduate student stipends

1 February 2016

Reliable interpretation of genomic information makes unprecedented demands for innovations in statistical methodology and its application to biological systems. This unique opportunity drives research at the Statomics Lab of the Ottawa Institute of Systems Biology (http://www.davidbickel.com/). The Statomics Lab seeks students who will conduct original research involving the application of novel statistical methods to the analysis of transcriptomics, proteomics, metabolomics, and/or genome-wide association data while earning a graduate degree in Mathematics and Statistics. The page https://davidbickel.com/career/ has information on careers in statistics and biostatistics.

Intellectual curiosity and high mathematical aptitude are essential, as is the ability to quickly code and debug computer programs. Strong self-motivation, good communication skills, and a degree in bioinformatics, computer science, mathematics, physics, statistics, any field of engineering, or an equally quantitative field are also required. The following qualities are desirable but not required: coursework in computer science, numerical methods, numerical analysis, software engineering, statistics, and/or biology; familiarity with BUGS, R, S-PLUS, C, Fortran, and/or LaTeX; experience with UNIX or Linux.

To be considered, send a PDF CV that includes your GPA and contact information for two references to dbickel@uOttawa.ca, with the degree sought (MSc or PhD) in the Subject line of the message and a cover letter in the body of the message. Only those students selected for further consideration will receive a response.

Categories: applications welcome

Coherent inference after checking a prior

7 January 2016