Abstract
Automatic parallelization has shown promise in producing scalable multi-threaded programs for multi-core architectures. Most existing automatic techniques parallelize independent loops and insert global synchronization between loop invocations. For programs with many loop invocations, frequent synchronization often becomes the performance bottleneck. Some techniques exploit cross-invocation parallelism to overcome this problem. Using static analysis, they partition iterations among threads to avoid cross-thread dependences. However, this approach may fail if dependence pattern information is not available at compile time. To address this limitation, this work proposes SpecCross, the first automatic parallelization technique to exploit cross-invocation parallelism using speculation. With speculation, iterations from different loop invocations can execute concurrently, and the program synchronizes only on misspeculation. This allows SpecCross to adapt to dependence patterns that only manifest on particular inputs at runtime. Evaluation on eight programs shows that SpecCross achieves a geomean speedup of 3.43x over parallel execution without cross-invocation parallelization.
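To make the baseline concrete, the sketch below (a hypothetical illustration, not code from the paper, using OpenMP purely for exposition) shows the pattern the abstract describes: each parallel inner-loop invocation ends with an implicit barrier, so threads synchronize once per outer iteration even when iterations from adjacent invocations rarely conflict. The function and array names (`simulate`, `a`, `b`) are invented for this example.

```c
/* Hypothetical baseline: repeated parallel loop invocations separated
 * by global synchronization. The implicit barrier after each
 * "#pragma omp parallel for" is the cross-invocation synchronization
 * that SpecCross speculatively elides, re-synchronizing only on
 * misspeculation. */
#include <omp.h>

#define N     1024
#define STEPS 100

static double a[N], b[N];

void simulate(void) {
    for (int t = 0; t < STEPS; t++) {      /* many loop invocations      */
        #pragma omp parallel for            /* independent iterations     */
        for (int i = 0; i < N; i++) {
            b[i] = 0.5 * (a[i] + a[(i + 1) % N]);
        }                                   /* implicit barrier here      */

        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            a[i] = b[i];
        }                                   /* and another barrier here   */
    }
}
```

In the abstract's terms, the 3.43x geomean speedup is measured against this kind of parallel execution, where every invocation boundary forces all threads to wait at a barrier.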
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 207-219 |
| Number of pages | 13 |
| Journal | Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT |
| State | Published - 2016 |
| Event | 25th International Conference on Parallel Architectures and Compilation Techniques, PACT 2016 - Haifa, Israel |
| Duration | Sep 11 2016 → Sep 15 2016 |
All Science Journal Classification (ASJC) codes
- Software
- Theoretical Computer Science
- Hardware and Architecture
Keywords
- Automatic parallelization
- Barrier speculation
- Code generation