Archive for the ‘proteomics’ Category

Small-scale inference

5 April 2011 Leave a comment

D. R. Bickel, “Small-scale inference: Empirical Bayes and confidence methods for as few as a single comparison,” Technical Report, Ottawa Institute of Systems Biology, arXiv:1104.0341 (2011). Full preprint

Parametric empirical Bayes methods of estimating the local false discovery rate by maximum likelihood apply not only to the multiple comparison settings for which they were developed, but, with a simple modification, also to small numbers of comparisons. In fact, data for a single comparison are sufficient under broad conditions, as seen from applications to measurements of the abundance levels of 20 proteins and from simulation studies with confidence-based inference as the competitor.
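To illustrate the general idea behind parametric empirical Bayes estimation of the local false discovery rate, here is a minimal sketch, not the paper's exact method: test statistics are modeled as a two-component mixture of a null density and an alternative density, the mixture parameters are fit by maximum likelihood, and the local false discovery rate at each statistic is the estimated posterior probability of the null. The standard-normal null, the normal alternative with free mean, and the starting values are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_lfdr(z, p0_init=0.8, mu1_init=2.0):
    """Fit the two-component mixture p0*N(0,1) + (1-p0)*N(mu1,1)
    to test statistics z by maximum likelihood, then return the
    estimated local false discovery rate at each z.

    A toy sketch: the null density N(0,1) and the single-parameter
    normal alternative are illustrative assumptions."""
    z = np.asarray(z, dtype=float)

    def neg_loglik(theta):
        p0, mu1 = theta
        f = p0 * norm.pdf(z) + (1 - p0) * norm.pdf(z, loc=mu1)
        return -np.sum(np.log(f))

    res = minimize(neg_loglik, x0=[p0_init, mu1_init],
                   bounds=[(1e-3, 1 - 1e-3), (0.1, 10.0)])
    p0, mu1 = res.x
    f = p0 * norm.pdf(z) + (1 - p0) * norm.pdf(z, loc=mu1)
    lfdr = p0 * norm.pdf(z) / f  # posterior probability of the null
    return p0, mu1, lfdr
```

Nothing in the likelihood requires many features, which is the point of the abstract: the same fit can be attempted with a handful of statistics (e.g. 20 proteins), although the paper's modification for very few comparisons is not reproduced here.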

Normalized maximum weighted likelihood

8 October 2010 1 comment

D. R. Bickel, “Statistical inference optimized with respect to the observed sample for single or multiple comparisons,” Technical Report, Ottawa Institute of Systems Biology, arXiv:1010.0694 (2010). Full preprint

Medium-scale simultaneous inference

14 August 2010 3 comments

D. R. Bickel, “Minimum description length methods of medium-scale simultaneous inference,” Technical Report, Ottawa Institute of Systems Biology, available at (2010). Full preprint

Abstract — Nonparametric statistical methods developed for analyzing data on large numbers of genes, SNPs, or other biological features tend to have low efficiency for data with smaller numbers of features, such as proteins, metabolites, or, when expression is measured with conventional instruments, genes. For this medium-scale inference problem, the minimum description length (MDL) framework quantifies the amount of information in the data supporting a null or alternative hypothesis for each feature in terms of parametric model selection. Two new MDL techniques are proposed. First, using test statistics that are highly informative about the parameter of interest, the data are reduced to a single statistic per feature; this simplifying step is already implicit in conventional hypothesis testing and has been found effective in empirical Bayes applications to genomics data. Second, the codelength difference between the alternative and null hypotheses of any given feature can take advantage of information in the measurements from all other features by using those measurements to find the overall code of minimum length summed over those features. The techniques are applied to protein abundance data, demonstrating that a computationally efficient approximation, which is accurate for a sufficiently large number of features, works well even when the number of features is as low as 20.
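The codelength comparison at the heart of the MDL framework can be sketched with a toy two-part code, which is far simpler than the techniques the abstract proposes: each feature is reduced to a single test statistic, the null hypothesis encodes it with a fixed parameter, and the alternative encodes it with a fitted parameter plus a crude fixed price (in bits) for transmitting that parameter. The normal model and the `precision_bits` penalty are illustrative assumptions, not the paper's code.

```python
import numpy as np

def codelength_difference(z, precision_bits=8.0):
    """Toy two-part-code MDL comparison per feature.

    For each test statistic z_i (modeled as N(theta_i, 1)):
      null codelength: -log2 N(z_i; 0, 1)
      alt  codelength: -log2 N(z_i; z_i, 1) + precision_bits,
    where precision_bits crudely prices encoding the fitted theta_i.
    Positive differences mean the alternative hypothesis compresses
    the data better than the null."""
    z = np.asarray(z, dtype=float)
    log2_pdf = lambda x, mu: (-0.5 * (x - mu) ** 2
                              - 0.5 * np.log(2 * np.pi)) / np.log(2)
    null_len = -log2_pdf(z, 0.0)
    alt_len = -log2_pdf(z, z) + precision_bits
    return null_len - alt_len  # bits saved by the alternative model
```

In this sketch each feature is coded in isolation; the abstract's second technique instead lets the code for one feature borrow strength from the measurements of all the others, which this example does not attempt.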

Keywords: information criteria; minimum description length; model selection; reduced likelihood