How to Keep Pushing ML Accelerator Performance? Know Your Rooflines!

Marian Verhelst, Luca Benini, Naveen Verma

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

The rapidly growing importance of machine learning (ML) applications, coupled with their ever-increasing model size and inference energy footprint, has created a strong need for specialized ML hardware architectures. Numerous ML accelerators have been explored and implemented, primarily to increase task-level throughput per unit area and reduce task-level energy consumption. This article surveys key trends toward these objectives and provides a unifying framework for understanding how compute and memory technologies and architectures interact to enhance system-level efficiency and performance. To this end, the article introduces an enhanced version of the roofline model and applies it to ML accelerators as a tool for understanding where various execution regimes fall within the roofline bounds and how to maximize performance and efficiency under the roofline. Key concepts are illustrated with examples from state-of-the-art (SOTA) designs, with a view toward open research opportunities to further advance accelerator performance.
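
For readers unfamiliar with the classical roofline model that the abstract builds on: it bounds attainable throughput by the lesser of the accelerator's compute ceiling and the product of memory bandwidth and operational intensity (operations performed per byte moved). The Python sketch below illustrates this basic form only; the hardware numbers are purely illustrative and are not taken from the paper, which develops an enhanced, multi-ceiling version of the model.

    # A minimal sketch of the classical roofline model that the article extends.
    # All names and numbers here are illustrative, not taken from the paper.

    def attainable_performance(operational_intensity, peak_ops_per_s, peak_bandwidth):
        """Return the roofline bound in ops/s.

        operational_intensity: operations per byte moved (ops/byte).
        peak_ops_per_s: compute ceiling of the accelerator (ops/s).
        peak_bandwidth: memory ceiling (bytes/s).
        """
        # A kernel is limited either by memory traffic (the slanted roof)
        # or by raw compute (the flat roof), whichever is lower.
        return min(peak_ops_per_s, peak_bandwidth * operational_intensity)

    # Example: a hypothetical 10 TOPS accelerator with 100 GB/s of DRAM bandwidth.
    peak_ops = 10e12    # ops/s
    bandwidth = 100e9   # bytes/s
    ridge_point = peak_ops / bandwidth  # ops/byte where the two roofs meet

    for oi in (1.0, 10.0, ridge_point, 1000.0):
        perf = attainable_performance(oi, peak_ops, bandwidth)
        regime = "memory-bound" if oi < ridge_point else "compute-bound"
        print(f"OI = {oi:7.1f} ops/byte -> {perf / 1e12:5.2f} TOPS ({regime})")

Kernels with an operational intensity left of the ridge point are memory-bound and benefit from techniques that raise intensity or effective bandwidth (e.g., the quantization and sparsity themes in the keywords); kernels to the right are compute-bound and hit the flat compute roof.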

Original language: English (US)
Pages (from-to): 1888-1905
Number of pages: 18
Journal: IEEE Journal of Solid-State Circuits
Volume: 60
Issue number: 6
DOIs
State: Published - 2025
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

Keywords

  • Chip design
  • energy efficiency
  • machine learning (ML) accelerators
  • memory hierarchy
  • processor architectures
  • quantization
  • roofline model
  • sparsity
  • throughput
