Sufficiency Revisited: Rethinking Statistical Algorithms in the Big Data Era

Publication Type:
Journal Article
Citation:
American Statistician, 2017, 71 (3), pp. 202 - 208
Issue Date:
2017-07-03
Abstract:
© 2017 American Statistical Association. The big data era demands new statistical analysis paradigms, since traditional methods often break down when datasets are too large to fit on a single desktop computer. Divide and Recombine (D&R) is becoming a popular approach for big data analysis, where results are combined over subanalyses performed on separate data subsets. In this article, we consider situations where unit record data cannot be made available by data custodians due to privacy concerns, and explore the concept of statistical sufficiency and summary statistics for model fitting. The resulting approach represents a type of D&R strategy, which we refer to as summary statistics D&R, as opposed to the standard approach, which we refer to as horizontal D&R. We demonstrate the concept via an extended Gamma–Poisson model, where summary statistics are extracted from different databases and incorporated directly into the fitting algorithm without having to combine unit record data. By exploiting the natural hierarchy of the data, our approach has major benefits in terms of privacy protection. Incorporating the proposed modelling framework into data extraction tools such as TableBuilder by the Australian Bureau of Statistics allows for potential analysis at a finer geographical level, which we illustrate with a multilevel analysis of Australian unemployment data. Supplementary materials for this article are available online.
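The paper's extended Gamma–Poisson model is not reproduced here, but the basic conjugate Gamma–Poisson case illustrates the summary statistics D&R idea: the total event count and total exposure are sufficient statistics, so each custodian can release only those two numbers and the combined fit is exact. The function names and the prior values below are illustrative assumptions, not from the article.

```python
def local_summaries(counts, exposures):
    # Each data custodian releases only these two sufficient statistics
    # (total count, total exposure), never the unit records themselves.
    return sum(counts), sum(exposures)

def combine_and_fit(summaries, a0=1.0, b0=1.0):
    # Summary statistics D&R: pool the sufficient statistics across
    # databases, then perform a single conjugate Gamma-Poisson update.
    # Prior Gamma(a0, b0) is an illustrative choice.
    total_y = sum(s[0] for s in summaries)
    total_e = sum(s[1] for s in summaries)
    return a0 + total_y, b0 + total_e  # posterior Gamma shape, rate

# Two hypothetical custodians' private unit records
db1 = ([3, 5, 2], [1.0, 1.0, 1.0])   # counts, exposures
db2 = ([4, 1], [2.0, 1.0])
summaries = [local_summaries(*db1), local_summaries(*db2)]
a, b = combine_and_fit(summaries)
rate_estimate = a / b  # posterior mean of the Poisson rate
```

Because the pooled statistics are exactly sufficient, this summary statistics D&R fit matches what a horizontal analysis of the combined unit records would give, while only aggregates cross institutional boundaries.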