Abstract
This paper studies hypothesis testing and parameter estimation in the context of the divide-and-conquer algorithm. In a unified likelihood-based framework, we propose new test statistics and point estimators obtained by aggregating various statistics from k subsamples of size n/k, where n is the sample size. In both low dimensional and sparse high dimensional settings, we address the important question of how large k can be, as n grows large, so that the loss of efficiency due to the divide-and-conquer algorithm is negligible. In other words, we determine when the resulting estimators have the same inferential efficiencies and estimation rates as an oracle with access to the full sample. Thorough numerical results are provided to support the theory.
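As a concrete illustration, the sketch below shows the generic divide-and-conquer averaging scheme the abstract describes: split n observations into k disjoint subsamples, compute an estimate on each, and aggregate by simple averaging. The linear model and OLS subsample estimator are illustrative assumptions only; the paper's framework covers general likelihood-based estimators and test statistics, and this is not the authors' implementation.

```python
# Minimal sketch (assumed, not from the paper): divide-and-conquer
# estimation by averaging subsample OLS estimates.
import numpy as np

def divide_and_conquer_estimate(X, y, k):
    """Average OLS estimates computed on k disjoint subsamples of (X, y)."""
    n = X.shape[0]
    subsamples = np.array_split(np.arange(n), k)  # k blocks of size ~ n/k
    estimates = []
    for rows in subsamples:
        Xs, ys = X[rows], y[rows]
        beta_s, *_ = np.linalg.lstsq(Xs, ys, rcond=None)  # subsample fit
        estimates.append(beta_s)
    return np.mean(estimates, axis=0)  # aggregate by averaging

# Usage on simulated data: the averaged estimator matches the full-sample
# rate provided k grows slowly enough in n -- the question the paper
# quantifies in low and sparse high dimensional settings.
rng = np.random.default_rng(0)
n, p = 10_000, 5
X = rng.standard_normal((n, p))
beta = np.arange(1, p + 1, dtype=float)
y = X @ beta + rng.standard_normal(n)
print(divide_and_conquer_estimate(X, y, k=20))
```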
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1352-1382 |
| Number of pages | 31 |
| Journal | Annals of Statistics |
| Volume | 46 |
| Issue number | 3 |
| State | Published - Jun 2018 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Statistics and Probability
- Statistics, Probability and Uncertainty
Keywords
- Debiasing
- Divide and conquer
- Massive data
- Thresholding