
Pheno(4)

arXiv:2002.09713

Connections between statistical practice in elementary particle physics and the severity concept as discussed in Mayo's Statistical Inference as Severe Testing

Robert Cousins

For many years, philosopher-of-statistics Deborah Mayo has been advocating the concept of severe testing as a key part of hypothesis testing. Her recent book, Statistical Inference as Severe Testing, is a comprehensive exposition of her arguments in the context of a historical study of many threads of statistical inference, both frequentist and Bayesian. Her foundational point of view, called error statistics, emphasizes the frequentist evaluation of Type I and Type II errors in the Neyman-Pearson theory of hypothesis testing. Since the field of elementary particle physics (also known as high-energy physics) has strong traditions in frequentist inference, one might expect that something like the severity concept was independently developed in the field. Indeed, I find that, at least operationally (numerically), we high-energy physicists have long interpreted data in ways that map directly onto severity. Whether or not we subscribe to Mayo's philosophical interpretations of severity is a more complicated story that I do not address here.
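
To make the "operational (numerical)" mapping concrete, the sketch below computes severity for the textbook one-sided Gaussian test (H0: mu <= mu0 vs H1: mu > mu0 with known sigma), the standard example used to introduce the concept. This is an illustrative assumption, not code or numbers from the abstract or paper; the function name, parameters, and example values are hypothetical.

from math import sqrt
from scipy.stats import norm

def severity(xbar_obs, mu1, sigma, n, reject):
    """Severity with which the observed data probe a claim about mu1.

    If the test rejected H0, the claim assessed is 'mu > mu1':
        SEV = P(Xbar <= xbar_obs; mu = mu1)
    If the test did not reject, the claim assessed is 'mu <= mu1':
        SEV = P(Xbar >  xbar_obs; mu = mu1)
    """
    se = sigma / sqrt(n)           # standard error of the sample mean
    z = (xbar_obs - mu1) / se      # standardized distance of the data from mu1
    return norm.cdf(z) if reject else 1.0 - norm.cdf(z)

# Hypothetical example: mu0 = 0, sigma = 1, n = 25, observed mean 0.4
# (2 standard errors above mu0, so H0 is rejected at alpha = 0.025).
# The claim 'mu > 0.2' passes with severity ~0.84; 'mu > 0.5' only ~0.31.
for mu1 in (0.0, 0.2, 0.5):
    print(mu1, round(severity(0.4, mu1, 1.0, 25, reject=True), 3))

Numerically, this is the same kind of tail-probability calculation (a p-value evaluated against alternatives to the null) that underlies common particle-physics practice such as one-sided confidence limits, which is the sense in which the operational mapping holds.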