TY - GEN
T1 - Shared last-level TLBs for chip multiprocessors
AU - Bhattacharjee, Abhishek
AU - Lustig, Daniel
AU - Martonosi, Margaret
PY - 2011
Y1 - 2011
N2 - Translation Lookaside Buffers (TLBs) are critical to processor performance. Much past research has addressed uniprocessor TLBs, lowering access times and miss rates. However, as chip multiprocessors (CMPs) become ubiquitous, TLB design must be re-evaluated. This paper is the first to propose and evaluate shared last-level (SLL) TLBs as an alternative to the commercial norm of private, per-core L2 TLBs. SLL TLBs eliminate 7-79% of system-wide misses for parallel workloads. This is an average of 27% better than conventional private, per-core L2 TLBs, translating to notable runtime gains. SLL TLBs also provide benefits comparable to recently-proposed Inter-Core Cooperative (ICC) TLB prefetchers, but with considerably simpler hardware. Furthermore, unlike these prefetchers, SLL TLBs can aid sequential applications, eliminating 35-95% of the TLB misses for various multiprogrammed combinations of sequential applications. This corresponds to a 21% average increase in TLB miss eliminations compared to private, per-core L2 TLBs. Because of their benefits for parallel and sequential applications, and their readily-implementable hardware, SLL TLBs hold great promise for CMPs.
UR - http://www.scopus.com/inward/record.url?scp=79955889568&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79955889568&partnerID=8YFLogxK
U2 - 10.1109/HPCA.2011.5749717
DO - 10.1109/HPCA.2011.5749717
M3 - Conference contribution
AN - SCOPUS:79955889568
SN - 9781424494323
T3 - Proceedings - International Symposium on High-Performance Computer Architecture
SP - 62
EP - 73
BT - Proceedings - 17th International Symposium on High-Performance Computer Architecture, HPCA 2011
T2 - 17th International Symposium on High-Performance Computer Architecture, HPCA 2011
Y2 - 12 February 2011 through 16 February 2011
ER -