Architectural Support for Optimizing Huge Page Selection Within the OS

Aninda Manocha, Zi Yan, Esin Tureci, Juan L. Aragón, David Nellans, Margaret Martonosi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Irregular, memory-intensive applications often incur high translation lookaside buffer (TLB) miss rates, resulting in significant address translation overheads. Employing huge pages is an effective way to reduce these overheads; however, in real systems the number of available huge pages can be limited when system memory is nearly full and/or fragmented. Thus, huge pages must be used selectively to back application memory. This work demonstrates that choosing the memory regions that incur the most TLB misses for huge page promotion best reduces address translation overheads. We call these regions High reUse TLB-sensitive data (HUBs). Unlike prior work, which relies on expensive per-page software counters to identify promotion regions, we propose new architectural support to identify these regions dynamically at application runtime. We propose a promotion candidate cache (PCC) that identifies HUB candidates based on hardware page table walks after a last-level TLB miss. This small, fixed-size structure tracks huge page-aligned regions (each consisting of N base pages), ranks them by observed page table walk frequency, and keeps only the most frequently accessed ones. Evaluated on applications of varying memory intensity, our approach successfully identifies the application pages incurring the highest address translation overheads. With the help of a PCC, the OS needs to promote only a fraction of the application footprint to achieve most of the peak achievable performance, yielding 1.19-1.33× speedups over 4KB base pages alone. In real systems, where memory is typically fragmented, the PCC outperforms Linux's page promotion policy both when 50% of total memory is fragmented and when 90% of total memory is fragmented.
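The abstract describes the PCC as a small, fixed-size structure that counts page table walks per huge page-aligned region and retains only the hottest regions. A minimal behavioral sketch of that idea is below; the entry count, 2 MiB region size, and least-frequently-walked eviction policy are illustrative assumptions, not the paper's exact hardware parameters.

```python
HUGE_PAGE_SIZE = 2 * 1024 * 1024  # assumed 2 MiB region = N = 512 base 4 KiB pages

class PromotionCandidateCache:
    """Behavioral sketch of a PCC: track huge page-aligned regions by
    observed page table walk frequency, keeping only the hottest ones."""

    def __init__(self, num_entries=16):
        self.num_entries = num_entries      # fixed hardware capacity (assumed)
        self.counts = {}                    # region base address -> walk count

    def record_walk(self, vaddr):
        """Invoked on a hardware page table walk after a last-level TLB miss."""
        region = vaddr & ~(HUGE_PAGE_SIZE - 1)   # huge page-aligned region base
        if region in self.counts:
            self.counts[region] += 1
        elif len(self.counts) < self.num_entries:
            self.counts[region] = 1
        else:
            # Fixed capacity: consider evicting the least frequently walked region.
            victim = min(self.counts, key=self.counts.get)
            if self.counts[victim] <= 1:
                del self.counts[victim]
                self.counts[region] = 1
            # Otherwise the new region is not yet hot enough to enter.

    def candidates(self, top_k=4):
        """Regions the OS could promote to huge pages, hottest first."""
        return sorted(self.counts, key=self.counts.get, reverse=True)[:top_k]
```

In this sketch, a region accessed repeatedly (a HUB) accumulates a high walk count and surfaces at the top of `candidates()`, while streams of cold regions cannot displace it, which is the ranking behavior the abstract attributes to the PCC.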

Original language: English (US)
Title of host publication: Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2023
Publisher: Association for Computing Machinery, Inc
Pages: 1213-1226
Number of pages: 14
ISBN (Electronic): 9798400703294
State: Published - Oct 28 2023
Event: 56th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2023 - Toronto, Canada
Duration: Oct 28 2023 - Nov 1 2023

Publication series

Name: Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2023

Conference

Conference: 56th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2023
Country/Territory: Canada
City: Toronto
Period: 10/28/23 - 11/1/23

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Hardware and Architecture
  • Renewable Energy, Sustainability and the Environment

Keywords

  • cache architectures
  • graph processing
  • hardware-software co-design
  • memory management
  • operating systems
  • virtual memory
