Abstract
Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This paper gives an overview of the salient features of Big Data and of how these features drive paradigm changes in statistical and computational methods as well as in computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that the exogeneity assumptions underlying most statistical methods cannot be validated for Big Data because of incidental endogeneity; violations of these assumptions can lead to wrong statistical inferences and, consequently, wrong scientific conclusions.
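The spurious-correlation phenomenon named in the abstract can be seen in a small simulation. The sketch below is an illustration added here, not code from the paper; the sample size n = 60, the random seed, and the choice of dimensions are arbitrary assumptions. It draws a response that is independent of every predictor, yet the largest sample correlation with the predictors grows noticeably as the dimensionality increases.

```python
# Illustrative sketch (not from the paper): with many independent noise
# predictors, the maximum spurious sample correlation with an unrelated
# response grows with the dimension d, even though every population
# correlation is exactly zero.
import numpy as np

rng = np.random.default_rng(0)
n = 60  # assumed sample size for the illustration

for d in (100, 1_000, 10_000):
    X = rng.standard_normal((n, d))   # d mutually independent predictors
    y = rng.standard_normal(n)        # response independent of all predictors

    # Pearson correlation of y with each column of X
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = (y - y.mean()) / y.std()
    corr = Xc.T @ yc / n

    print(f"d = {d:6d}: max |sample correlation| = {np.abs(corr).max():.3f}")
```

Running the sketch shows the maximum absolute sample correlation climbing as d grows from hundreds to tens of thousands, which is the sense in which high dimensionality manufactures apparent associations from pure noise.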
Original language | English (US) |
---|---|
Pages (from-to) | 293-314 |
Number of pages | 22 |
Journal | National Science Review |
Volume | 1 |
Issue number | 2 |
DOIs | |
State | Published - Jun 1 2014 |
All Science Journal Classification (ASJC) codes
- General
Keywords
- Big Data
- Data storage
- Incidental endogeneity
- Noise accumulation
- Scalability
- Spurious correlation