With the widening performance gap between processors and main memory, efficient memory access behavior is necessary for good program performance. Both hand-tuning and compiler optimization techniques are often used to transform code to improve memory performance. Effective transformations require detailed knowledge about the frequency and causes of cache misses in the code. This paper describes methods for generating and solving Cache Miss Equations, which give a detailed representation of the cache misses in loop-oriented scientific code. Implemented within the SUIF compiler framework, our approach extends traditional compiler reuse analysis to generate linear Diophantine equations that summarize each loop's memory behavior. Mathematical techniques for manipulating Diophantine equations allow us to compute the number of possible solutions, where each solution corresponds to a potential cache miss. These equations provide a general framework to guide code optimizations for improving cache performance. The paper gives examples of their use to determine array padding and offset amounts that minimize cache misses, and also to determine optimal blocking factors for tiled code. Overall, these equations represent an analysis framework that is more precise than traditional memory behavior heuristics, and is also potentially faster than simulation.
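As a rough illustration of the idea (a hypothetical sketch, not the paper's actual formulation), consider two references A[i] and B[i] in a loop over a direct-mapped cache. The accesses conflict in iteration i exactly when their addresses fall on the same cache set, which is a linear congruence in i; counting the integer solutions within the loop bounds estimates the conflict misses, and changing the base-address offset (padding) changes the solution count. The cache parameters, base addresses, and function name below are all assumptions for the example:

```python
LINE = 32      # cache line size in bytes (assumed)
CACHE = 4096   # total direct-mapped cache size in bytes (assumed)

def conflict_iterations(base_a, base_b, n, elsize=4):
    """Count iterations i in [0, n) where A[i] and B[i] map to the
    same set of a direct-mapped cache, i.e. integer solutions of
    (base_a + i*elsize) // LINE == (base_b + i*elsize) // LINE  (mod CACHE // LINE).
    Brute-force enumeration stands in for the paper's closed-form
    Diophantine solution counting."""
    sets = CACHE // LINE
    misses = 0
    for i in range(n):
        set_a = ((base_a + i * elsize) // LINE) % sets
        set_b = ((base_b + i * elsize) // LINE) % sets
        if set_a == set_b:
            misses += 1
    return misses

# Arrays whose bases differ by exactly the cache size conflict on
# every iteration; padding one base by a single line removes every
# solution of the congruence.
print(conflict_iterations(0, 4096, 1024))        # -> 1024
print(conflict_iterations(0, 4096 + 32, 1024))   # -> 0
```

This mirrors the padding use case in the abstract: the analysis picks an offset for which the miss equations have as few solutions as possible, without ever simulating the loop.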

Original language: English (US)
Number of pages: 8
State: Published - 1997
Event: Proceedings of the 1997 International Conference on Supercomputing - Vienna, Austria
Duration: Jul 7, 1997 – Jul 11, 1997



All Science Journal Classification (ASJC) codes:

  • General Computer Science


Title: Cache miss equations: an analytical representation of cache misses
