AHEAD: A Tool for Projecting Next-Generation Hardware Enhancements on GPU-Accelerated Systems

Abstract

Starting with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF) in 2012, top supercomputers have increasingly leveraged the performance of GPUs to support large-scale computational science. The current No. 1 machine, the 200-petaflop Summit system at OLCF, is a GPU-based machine. Accelerator-based architectures, however, add complexity due to node heterogeneity. To inform procurement decisions, supercomputing centers need tools to quickly model the impact of node-architecture changes on application performance. We present AHEAD, a profiling and modeling tool that quantifies the impact of intra-node communication mechanisms (e.g., PCIe or NVLink) on application performance. Our experiments show average weighted relative errors of ~19% and ~23% for five CORAL-2 benchmarks (CORAL-2 is a collaboration between multiple US Department of Energy, DOE, labs to procure exascale systems) and 12 Rodinia benchmarks, respectively, without running the applications on the target future node.
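To illustrate the kind of projection AHEAD targets, the sketch below uses a simple latency-plus-bandwidth cost model to estimate how a host-device transfer time changes when moving from a PCIe 3.0 x16 link to an NVLink 2.0 link. This is a hypothetical illustration, not AHEAD's actual model; the function names and the approximate peak-bandwidth figures are assumptions for the example.

```python
# Hedged sketch: project the change in host-device transfer time under a
# linear cost model t = latency + size / bandwidth. Illustrative only;
# AHEAD's real model is described in the paper, not reproduced here.

def transfer_time_s(bytes_moved: int, bandwidth_gbs: float, latency_us: float) -> float:
    """Estimated one-way transfer time (seconds) under a linear cost model."""
    return latency_us * 1e-6 + bytes_moved / (bandwidth_gbs * 1e9)

def projected_speedup(bytes_moved: int,
                      old_bw_gbs: float, new_bw_gbs: float,
                      old_lat_us: float, new_lat_us: float) -> float:
    """Ratio of old to new transfer time: >1 means the new link is faster."""
    return (transfer_time_s(bytes_moved, old_bw_gbs, old_lat_us) /
            transfer_time_s(bytes_moved, new_bw_gbs, new_lat_us))

# Approximate, illustrative per-direction peak bandwidths (assumptions):
PCIE3_X16_GBS = 16.0   # PCIe 3.0 x16, ~16 GB/s per direction
NVLINK2_GBS = 50.0     # NVLink 2.0 CPU-GPU link on Summit, ~50 GB/s per direction

if __name__ == "__main__":
    size = 1 << 30  # 1 GiB transfer
    s = projected_speedup(size, PCIE3_X16_GBS, NVLINK2_GBS, 5.0, 5.0)
    print(f"Projected speedup for a 1 GiB transfer: {s:.2f}x")
```

For large transfers the latency term vanishes and the projected speedup approaches the bandwidth ratio (here ~3.1x), which is the asymptotic behavior such a model predicts.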

Publication
In 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
Hazem A. Abdelhafez
Senior GPU Compiler Engineer

My research interests lie at the intersection of compilers, GPU and heterogeneous computing systems, and performance and power-consumption modeling and characterization.