Computational caches

Amos Waterland, Elaine Angelino, Ekin D. Cubuk, Efthimios Kaxiras, Ryan P. Adams, Jonathan Appavoo, Margo Seltzer

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Scopus citations


Caching is a well-known technique for speeding up computation. We cache data from file systems and databases; we cache dynamically generated code blocks; we cache page translations in TLBs. We propose to cache the act of computation, so that we can apply it later and in different contexts. We use a state-space model of computation to support such caching, involving two interrelated parts: speculatively memoized predicted/resultant state pairs that we use to accelerate sequential computation, and trained probabilistic models that we use to generate predicted states from which to speculatively execute. The key techniques that make this approach feasible are designing probabilistic models that automatically focus on regions of program execution state space in which prediction is tractable and identifying state space equivalence classes so that predictions need not be exact.
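As a rough illustration of the idea (not the paper's implementation), the sketch below memoizes pairs of program states keyed by an equivalence class, so that two states differing only in irrelevant fields hit the same cache entry. All names here (`canonicalize`, `ComputationalCache`, the `scratch` field) are hypothetical.

```python
# Sketch of a computational cache over program states, assuming a toy
# "state" represented as a dict of hashable values.

def canonicalize(state):
    """Map a state to its equivalence class by dropping fields that do
    not affect the computation (here, a hypothetical 'scratch' field)."""
    return tuple(sorted((k, v) for k, v in state.items() if k != "scratch"))

class ComputationalCache:
    def __init__(self, step_fn):
        self.step_fn = step_fn  # the expensive computation being cached
        self.pairs = {}         # equivalence class -> resultant state

    def execute(self, state):
        key = canonicalize(state)
        if key in self.pairs:            # hit: reuse the cached act of computation
            return self.pairs[key]
        result = self.step_fn(state)     # miss: compute and memoize the state pair
        self.pairs[key] = result
        return result

# Toy computation: sum the values held in the state.
def step(state):
    return {"total": sum(state["values"])}

cache = ComputationalCache(step)
a = cache.execute({"values": (1, 2, 3), "scratch": 7})
b = cache.execute({"values": (1, 2, 3), "scratch": 99})  # same class: cache hit
```

The equivalence-class keying is what lets a cached result apply "later and in different contexts": the second call differs in its `scratch` field yet reuses the first call's result without re-executing `step`.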

Original language: English (US)
Title of host publication: Proceedings of the 6th International Systems and Storage Conference, SYSTOR 2013
State: Published - 2013
Externally published: Yes
Event: 6th Annual International Systems and Storage Conference, SYSTOR 2013 - Haifa, Israel
Duration: Jun 30 2013 - Jul 2 2013

Publication series

Name: ACM International Conference Proceeding Series


Other: 6th Annual International Systems and Storage Conference, SYSTOR 2013

All Science Journal Classification (ASJC) codes

  • Software
  • Human-Computer Interaction
  • Computer Vision and Pattern Recognition
  • Computer Networks and Communications


Keywords

  • Performance
  • Theory


