The Performance Advantages of Integrating Block Data Transfer in Cache-Coherent Multiprocessors

Steven Cameron Woo, Jaswinder Pal Singh, John L. Hennessy

Research output: Contribution to journal › Article › peer-review

25 Scopus citations


Integrating support for block data transfer has become an important emphasis in recent cache-coherent shared address space multiprocessors. This paper examines the potential performance benefits of adding this support. A set of ambitious hardware mechanisms is used to study performance gains in five important scientific computations that appear to be good candidates for using block transfer. Our conclusion is that the benefits of block transfer are not substantial for hardware cache-coherent multiprocessors. The main reasons for this are (i) the relatively modest fraction of time applications spend in communication amenable to block transfer, (ii) the difficulty of finding enough independent computation to overlap with the communication latency that remains after block transfer, and (iii) the fact that long cache lines often capture many of the benefits of block transfer in efficient cache-coherent machines. In the cases where block transfer improves performance, prefetching can often provide comparable, if not superior, performance benefits. We also examine the impact of varying important communication parameters and processor speed on the effectiveness of block transfer, and comment on useful features that a block transfer facility should support for real applications.

Original language: English (US)
Pages (from-to): 219-229
Number of pages: 11
Journal: SIGPLAN Notices (ACM Special Interest Group on Programming Languages)
Issue number: 11
State: Published - Jan 11 1994
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design


