TY - GEN
T1 - ComputeDRAM: In-Memory Compute Using Off-the-Shelf DRAMs
T2 - 52nd Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2019
AU - Gao, Fei
AU - Tziantzioulis, Georgios
AU - Wentzlaff, David
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/10/12
Y1 - 2019/10/12
N2 - In-memory computing has long been promised as a solution to the Memory Wall problem. Recent work has proposed using charge sharing on the bit-lines of a memory in order to compute in-place and with massive parallelism, all without having to move data across the memory bus. Unfortunately, prior work has required modification to RAM designs (e.g., adding multiple row decoders) in order to open multiple rows simultaneously. So far, the competitive and low-margin nature of the DRAM industry has made commercial DRAM manufacturers resist adding any additional logic into DRAM. This paper addresses the need for in-memory computation with little to no change to DRAM designs. It is the first work to demonstrate in-memory computation with off-the-shelf, unmodified, commercial DRAM. This is accomplished by violating the nominal timing specification and activating multiple rows in rapid succession, which happens to leave multiple rows open simultaneously, thereby enabling bit-line charge sharing. We use a constraint-violating command sequence to implement and demonstrate row copy, logical OR, and logical AND in unmodified, commodity DRAM. Subsequently, we employ these primitives to develop an architecture for arbitrary, massively parallel computation. Utilizing a customized DRAM controller in an FPGA and commodity DRAM modules, we characterize this opportunity in hardware for all major DRAM vendors. This work stands as a proof of concept that in-memory computation is possible with unmodified DRAM modules and that there exists a financially feasible way for DRAM manufacturers to support in-memory compute.
KW - Bit-serial
KW - DRAM
KW - In-memory computing
KW - Main memory
UR - http://www.scopus.com/inward/record.url?scp=85074454771&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85074454771&partnerID=8YFLogxK
U2 - 10.1145/3352460.3358260
DO - 10.1145/3352460.3358260
M3 - Conference contribution
AN - SCOPUS:85074454771
T3 - Proceedings of the Annual International Symposium on Microarchitecture, MICRO
SP - 100
EP - 113
BT - MICRO 2019 - 52nd Annual IEEE/ACM International Symposium on Microarchitecture, Proceedings
PB - IEEE Computer Society
Y2 - 12 October 2019 through 16 October 2019
ER -