Abstract
The rapidly growing importance of machine learning (ML) applications, coupled with their ever-increasing model size and inference energy footprint, has created a strong need for specialized ML hardware architectures. Numerous ML accelerators have been explored and implemented, primarily to increase task-level throughput per unit area and reduce task-level energy consumption. This article surveys key trends toward these objectives for more efficient ML accelerators and provides a unifying framework for understanding how compute and memory technologies/architectures interact to enhance system-level efficiency and performance. To this end, the article introduces an enhanced version of the roofline model and applies it to ML accelerators as an effective tool for understanding where various execution regimes fall within roofline bounds and how to maximize performance and efficiency under the roofline. Key concepts are illustrated with examples from state-of-the-art (SOTA) designs, with a view toward open research opportunities to further advance accelerator performance.
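For context, the classic roofline bound referenced by the abstract models attainable performance as the minimum of peak compute throughput and memory bandwidth times arithmetic intensity. The sketch below illustrates that baseline formulation only, not the article's enhanced model; the accelerator figures (100 GFLOP/s peak, 25 GB/s bandwidth) are hypothetical placeholders.

```python
# Minimal sketch of the classic roofline bound (not the article's
# enhanced version). All hardware figures are hypothetical.

def roofline_gflops(arith_intensity_flops_per_byte: float,
                    peak_gflops: float,
                    mem_bw_gbytes_per_s: float) -> float:
    """Attainable performance = min(peak compute, bandwidth * intensity)."""
    return min(peak_gflops,
               mem_bw_gbytes_per_s * arith_intensity_flops_per_byte)

# Hypothetical accelerator: 100 GFLOP/s peak, 25 GB/s DRAM bandwidth.
# The ridge point, where the bound switches from memory-limited to
# compute-limited, sits at 100 / 25 = 4 FLOPs/byte.
for intensity in (0.5, 2.0, 4.0, 16.0):
    print(f"I = {intensity:5.1f} FLOP/B -> "
          f"{roofline_gflops(intensity, 100.0, 25.0):6.1f} GFLOP/s")
```

Kernels whose arithmetic intensity falls left of the ridge point are bandwidth-bound, which is why the memory-hierarchy and quantization techniques surveyed in the article (see the Keywords below) directly shift where a workload lands under the roofline.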
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1888-1905 |
| Number of pages | 18 |
| Journal | IEEE Journal of Solid-State Circuits |
| Volume | 60 |
| Issue number | 6 |
| DOIs | |
| State | Published - 2025 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Electrical and Electronic Engineering
Keywords
- Chip design
- energy efficiency
- machine learning (ML) accelerators
- memory hierarchy
- processor architectures
- quantization
- roofline model
- sparsity
- throughput