Archive for the ‘Types of data’ Category

How to choose features or p values for empirical Bayes estimation of the local false discovery rate

1 December 2018

Inference to the best explanation of the evidence

1 February 2018

The p value and Bayesian methods have well known drawbacks when it comes to measuring the strength of the evidence supporting one hypothesis over another. To overcome those drawbacks, this paper proposes an alternative method of quantifying how much support a hypothesis has from evidence consisting of data.

D. R. Bickel, “The strength of statistical evidence for composite hypotheses: Inference to the best explanation,” Statistica Sinica 22, 1147-1198 (2012). Full article | 2010 version

The special law of likelihood has many advantages over more commonly used approaches to measuring the strength of statistical evidence. However, it can only measure the support for a hypothesis that corresponds to a single distribution. The proposed general law of likelihood can also measure the extent to which the data support a hypothesis that corresponds to multiple distributions. That is accomplished by formalizing inference to the best explanation.
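One way to read the proposal, as a minimal sketch rather than the paper’s own notation: the support for a composite hypothesis is measured by the likelihood of its best explaining member, that is, the likelihood maximized over the distributions making up the hypothesis, and two composite hypotheses are compared by the ratio of those maximized likelihoods. The snippet below illustrates that reading for hypothetical normal data and two illustrative one-sided hypotheses about the mean; the model, data, and cutoff are assumptions made only for the example.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Hypothetical data: observations assumed to be N(mu, 1) with unknown mean mu.
data = np.array([0.8, 1.4, 0.3, 1.1, 0.9])

def max_log_likelihood(data, lower, upper):
    """Maximize the normal(mu, 1) log-likelihood over mu in [lower, upper]."""
    res = minimize_scalar(
        lambda mu: -norm.logpdf(data, loc=mu, scale=1.0).sum(),
        bounds=(lower, upper),
        method="bounded",
    )
    return -res.fun

# Illustrative composite hypotheses (not from the paper):
# H1: mu >= 0.5   versus   H2: mu < 0.5 (approximated by a bounded search).
logL_H1 = max_log_likelihood(data, 0.5, 10.0)
logL_H2 = max_log_likelihood(data, -10.0, 0.5)

# Strength of evidence for H1 over H2 as a ratio of maximized likelihoods.
evidence_ratio = np.exp(logL_H1 - logL_H2)
print(f"maximized-likelihood ratio (H1 vs. H2): {evidence_ratio:.2f}")
```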

Estimates of the local false discovery rate based on prior information: Application to GWAS

1 August 2016

Empirical Bayes single-comparison procedure

1 July 2016

D. R. Bickel, “Small-scale inference: Empirical Bayes and confidence methods for as few as a single comparison,” International Statistical Review 82, 457-476 (2014). Published version | 2011 preprint | Simple explanation (link added 21 June 2017)

Parametric empirical Bayes methods of estimating the local false discovery rate by maximum likelihood apply not only to the large-scale settings for which they were developed, but, with a simple modification, also to small numbers of comparisons. In fact, data for a single comparison are sufficient under broad conditions, as seen from applications to measurements of the abundance levels of 20 proteins and from simulation studies with confidence-based inference as the competitor.
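As a rough sketch of this kind of parametric empirical Bayes fit, assuming a simple two-group model with a standard normal null density and a unit-variance normal alternative (an illustrative choice, not necessarily the model used in the paper), the local false discovery rate can be estimated by maximizing the mixture likelihood and then taking the estimated posterior probability of the null at each observed statistic:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Hypothetical z-values, one per comparison.
z = np.array([0.2, 2.8, -0.5, 3.1, 1.9, 0.1])

def neg_log_likelihood(params, z):
    """Two-group mixture: pi0 * N(0, 1) + (1 - pi0) * N(mu, 1)."""
    pi0, mu = params
    mix = pi0 * norm.pdf(z, 0.0, 1.0) + (1.0 - pi0) * norm.pdf(z, mu, 1.0)
    return -np.sum(np.log(mix))

# Maximum-likelihood fit of (pi0, mu), with pi0 constrained to (0, 1].
fit = minimize(
    neg_log_likelihood,
    x0=[0.8, 2.0],
    args=(z,),
    bounds=[(1e-6, 1.0), (-10.0, 10.0)],
)
pi0_hat, mu_hat = fit.x

# Estimated local false discovery rate: posterior probability of the null.
f_null = norm.pdf(z, 0.0, 1.0)
f_mix = pi0_hat * f_null + (1.0 - pi0_hat) * norm.pdf(z, mu_hat, 1.0)
lfdr = pi0_hat * f_null / f_mix
print(np.round(lfdr, 3))
```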

Adaptively selecting an empirical Bayes reference class

1 June 2016

False discovery rates are misleadingly low

2 March 2016

Meaningful constraints and meaningless priors

4 December 2015
Stark, Philip B.
Constraints versus priors.
SIAM/ASA J. Uncertain. Quantif. 3 (2015), no. 1, 586–598.
62A01 (62C10 62C20 62G15)

In this lucid expository paper, Stark advances several arguments for using frequentist methods instead of Bayesian methods in statistical inference and decision problems. The main examples involve restricted-parameter problems, those of inferring the value of a parameter of interest that is constrained to lie in an unusually restrictive set. When the parameter is restricted, frequentist methods can lead to solutions markedly different from those of Bayesian methods. For even when the prior distribution is a default intended to be weakly informative, it actually carries substantial information.
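One standard illustration of that last point, not necessarily an example from the paper under review: a prior chosen to be uniform over a ball {θ ∈ R^d : ‖θ‖ ≤ τ}, merely to respect the constraint, nonetheless implies that θ almost certainly lies near the boundary once the dimension d is large, a commitment the constraint itself never makes.

```python
# Under a prior uniform on the ball {theta in R^d : ||theta|| <= tau},
# P(||theta|| <= r) = (r / tau)**d, so the prior mass in the outer shell
# ||theta|| > 0.9 * tau approaches 1 as the dimension d grows.
for d in (1, 10, 100, 1000):
    prob_outer_shell = 1.0 - 0.9 ** d
    print(f"d = {d:4d}: P(||theta|| > 0.9 tau) = {prob_outer_shell:.4f}")
```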

Stark calls routine Bayesian practice into question since priors are not selected according to the analyst’s beliefs but rather for reasons that have no apparent support from the Dutch book argument, the featured rationale for Bayesianism as a rational norm (pp. 589–590; [see D. V. Lindley, Understanding uncertainty, revised edition, Wiley Ser. Probab. Stat., Wiley, Hoboken, NJ, 2014; MR3236718]). Uses of the prior beyond the scope of the paper include those encoding (1) empirical Bayes estimates of parameter variability [e.g., B. Efron, Large-scale inference, Inst. Math. Stat. Monogr., 1, Cambridge Univ. Press, Cambridge, 2010; MR2724758 (2012a:62006)], (2) the beliefs of subject-matter experts [e.g., A. O’Hagan et al., Uncertain judgements: eliciting experts’ probabilities, Wiley, West Sussex, 2006, doi:10.1002/0470033312], or (3) the beliefs of archetypical agents of wide scientific interest [e.g., D. J. Spiegelhalter, K. R. Abrams and J. P. Myles, Bayesian approaches to clinical trials and health-care evaluation, Wiley, West Sussex, 2004 (Section 5.5), doi:10.1002/0470092602].

Stark finds Bayesianism to lack not only normative force but also descriptive power. He stresses that he does not know anyone who updates personal beliefs according to Bayes’s theorem in everyday life (pp. 588, 590).

In the conclusions section, Stark asks, “Which is the more interesting question: what would happen if Nature generated a new value of the parameter and the data happened to remain the same, or what would happen for the same value of the parameter if the measurement were repeated?” For the Bayesian who sees parameter distributions more in terms of beliefs than random events, the missing question is, “What should one believe about the value of a parameter given what happened and the information encoded in the prior and other model specifications?” That question would interest Stark only to the extent that the prior encodes meaningful information (p. 589).

Reviewed by David R. Bickel

This review first appeared at “Constraints versus priors” (Mathematical Reviews) and is used with permission from the American Mathematical Society.

Maximum entropy over a set of posteriors

10 August 2015

D. R. Bickel, “Blending Bayesian and frequentist methods according to the precision of prior information with applications to hypothesis testing,” Statistical Methods & Applications 24, 523-546 (2015). Published article | 2012 preprint | 2011 preprint | Slides | Simple explanation

This framework of statistical inference facilitates the development of new methodology to bridge the gap between the frequentist and Bayesian theories. As an example, a simple and practical method for combining p-values with a set of possible posterior probabilities is provided.

In this general approach, Bayesian inference is used when the prior distribution is known, frequentist inference is used when nothing is known about the prior, and both types of inference are blended according to game theory when the prior is known to be a member of some set. (The robust Bayes framework represents knowledge about a prior in terms of a set of possible priors.) If the benchmark posterior that corresponds to frequentist inference lies within the set of Bayesian posteriors derived from the set of priors, then the benchmark posterior is used for inference. Otherwise, the posterior within that set that minimizes the cross entropy to the benchmark posterior is used for inference.
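For a single null hypothesis, the rule can be sketched very simply (a simplification under assumed conditions, not the paper’s general development): if the set of priors induces an interval of posterior probabilities of the null and the benchmark is a posterior probability derived from the p-value, then minimizing the cross entropy between the two Bernoulli distributions over that interval amounts to clipping the benchmark to the interval’s endpoints.

```python
import numpy as np

def blended_posterior(benchmark, lower, upper):
    """
    Hypothetical illustration: use the benchmark (frequentist-style) posterior
    probability of the null when it falls inside the interval of Bayesian
    posterior probabilities induced by the set of priors; otherwise use the
    member of the interval closest in cross entropy to the benchmark, which
    for Bernoulli distributions reduces to clipping to the nearest endpoint.
    """
    return float(np.clip(benchmark, lower, upper))

# Example: the set of priors gives null-hypothesis posterior probabilities
# between 0.10 and 0.40, while the benchmark derived from the p-value is 0.05,
# which lies outside the interval, so the blend is the lower endpoint.
print(blended_posterior(0.05, 0.10, 0.40))   # 0.10
print(blended_posterior(0.25, 0.10, 0.40))   # 0.25 (benchmark kept as is)
```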

Small-scale empirical Bayes & fiducial estimators

22 March 2015

Self-consistent frequentism without fiducialism

3 September 2014