LLMCompass: Enabling Efficient Hardware Design for Large Language Model Inference

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

31 Scopus citations

Abstract

The past year has witnessed the increasing popularity of Large Language Models (LLMs). Their unprecedented scale and associated high hardware cost have impeded their broader adoption, calling for efficient hardware designs. Given the scale of hardware needed simply to run LLM inference, evaluating different hardware designs becomes a new bottleneck. This work introduces LLMCompass¹, a hardware evaluation framework for LLM inference workloads. LLMCompass is fast, accurate, and versatile, and is able to describe and evaluate different hardware designs. LLMCompass includes a mapper to automatically find performance-optimal mapping and scheduling. It also incorporates an area-based cost model to help architects reason about their design choices. Compared to real-world hardware, LLMCompass's estimated latency achieves an average 10.9% error rate across various operators with various input sizes and an average 4.1% error rate for LLM inference. With LLMCompass, simulating a 4-NVIDIA A100 GPU node running GPT-3 175B inference can be done within 16 minutes on commodity hardware, including 26,400 rounds of the mapper's parameter search. With the aid of LLMCompass, this work draws architectural implications and explores new cost-effective hardware designs. By reducing the compute capability or replacing High Bandwidth Memory (HBM) with traditional DRAM, these new designs can achieve as much as a 3.41x improvement in performance/cost compared to an NVIDIA A100, making them promising choices for democratizing LLMs.

¹ Available at https://github.com/PrincetonUniversity/LLMCompass.
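To illustrate the kind of analytical latency reasoning the abstract describes, here is a minimal roofline-style sketch for a single matrix-multiply operator. The model, the function name, and the A100-class hardware numbers are simplifying assumptions for illustration only; they are not LLMCompass's actual cost model or API.

```python
# Roofline-style lower-bound latency for one GEMM operator.
# This is an illustrative sketch, NOT the LLMCompass cost model:
# real frameworks also model mapping, scheduling, and on-chip buffers.

def gemm_latency(m: int, n: int, k: int,
                 peak_flops: float, mem_bw: float,
                 bytes_per_elem: int = 2) -> float:
    """Latency (seconds) as the max of compute time and memory time."""
    flops = 2 * m * n * k                               # multiply-accumulate count
    traffic = bytes_per_elem * (m * k + k * n + m * n)  # ideal data movement
    return max(flops / peak_flops, traffic / mem_bw)

# Rough A100-class numbers (assumed): ~312 TFLOP/s FP16, ~2.0 TB/s HBM bandwidth.
lat = gemm_latency(4096, 4096, 4096, peak_flops=312e12, mem_bw=2.0e12)
print(f"{lat * 1e3:.3f} ms")  # this GEMM is compute-bound at these sizes
```

Sketches like this make the abstract's design question concrete: lowering `peak_flops` (reduced compute capability) or `mem_bw` (DRAM instead of HBM) shifts which operators become the bottleneck, which is the trade-off the performance/cost exploration examines.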

Original language: English (US)
Title of host publication: Proceedings - 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture, ISCA 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1080-1096
Number of pages: 17
ISBN (Electronic): 9798350326581
DOIs
State: Published - 2024
Event: 51st ACM/IEEE Annual International Symposium on Computer Architecture, ISCA 2024 - Buenos Aires, Argentina
Duration: Jun 29, 2024 - Jul 3, 2024

Publication series

Name: Proceedings - International Symposium on Computer Architecture
ISSN (Print): 1063-6897
ISSN (Electronic): 2575-713X

Conference

Conference: 51st ACM/IEEE Annual International Symposium on Computer Architecture, ISCA 2024
Country/Territory: Argentina
City: Buenos Aires
Period: 6/29/24 - 7/3/24

All Science Journal Classification (ASJC) codes

  • Hardware and Architecture

Keywords

  • Large language model
  • accelerator
  • area model
  • cost model
  • performance model

