TY - GEN
T1 - Message passing vs. shared address space on a cluster of SMPs
AU - Shan, H.
AU - Singh, Jaswinder Pal
AU - Oliker, L.
AU - Biswas, R.
N1 - Funding Information:
The work of the first two authors is supported by NSF under grant ESS-9806751. The second author is also supported by PECASE and a Sloan Research Fellowship. The work of the third author is supported by the U.S. Department of Energy under contract DE-AC03-76SF00098.
Publisher Copyright:
© 2001 IEEE.
PY - 2001
Y1 - 2001
N2 - The emergence of scalable computer architectures using clusters of PCs (or PC-SMPs) with commodity networking has made them attractive platforms for high-end scientific computing. Currently, message passing (MP) and shared address space (SAS) are the two leading programming paradigms for these systems. MP has been standardized with MPI, and is the most common and mature parallel programming approach. However, MP code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of, and the programming effort required for, six applications under both programming models on a 32-CPU PC-SMP cluster. Our application suite consists of codes that typically do not exhibit scalable performance under shared-memory programming due to their high communication-to-computation ratios and complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications; however, on certain classes of problems, SAS performance is competitive with MPI.
AB - The emergence of scalable computer architectures using clusters of PCs (or PC-SMPs) with commodity networking has made them attractive platforms for high-end scientific computing. Currently, message passing (MP) and shared address space (SAS) are the two leading programming paradigms for these systems. MP has been standardized with MPI, and is the most common and mature parallel programming approach. However, MP code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of, and the programming effort required for, six applications under both programming models on a 32-CPU PC-SMP cluster. Our application suite consists of codes that typically do not exhibit scalable performance under shared-memory programming due to their high communication-to-computation ratios and complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications; however, on certain classes of problems, SAS performance is competitive with MPI.
UR - http://www.scopus.com/inward/record.url?scp=33746683169&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33746683169&partnerID=8YFLogxK
U2 - 10.1109/IPDPS.2001.925009
DO - 10.1109/IPDPS.2001.925009
M3 - Conference contribution
AN - SCOPUS:33746683169
T3 - Proceedings - 15th International Parallel and Distributed Processing Symposium, IPDPS 2001
BT - Proceedings - 15th International Parallel and Distributed Processing Symposium, IPDPS 2001
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 15th International Parallel and Distributed Processing Symposium, IPDPS 2001
Y2 - 23 April 2001 through 27 April 2001
ER -