Publication:
Monitoring collective communication among GPUs


Abstract

Communication among devices in multi-GPU systems plays an important role in performance and scalability. To optimize an application, programmers need to know the type and amount of communication taking place among GPUs. Although prior work gathers this information for MPI applications on distributed systems and for multi-threaded applications on shared-memory systems, there is no tool that identifies communication among GPUs. Our prior work, ComScribe, presents a point-to-point (P2P) communication detection tool for GPUs sharing a common host. In this work, we extend ComScribe to identify communication among GPUs for both collective and P2P communication primitives in NVIDIA's NCCL library. In addition to P2P communication, collective communication is widely used in HPC and AI workloads, so it is important to monitor the data movement that collectives induce. Our tool extracts the size and frequency of data transfers in an application and visualizes them as a communication matrix. To demonstrate the tool in action, we present communication matrices and selected statistics for two applications from the machine translation and image classification domains.
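
To illustrate the general interception idea behind such a tool, the sketch below wraps a single NCCL collective, ncclAllReduce, with an LD_PRELOAD shim that records how many bytes each GPU contributes and how often the collective is issued. This is only a minimal sketch under simplified assumptions, not the implementation described in the paper: the LD_PRELOAD approach, the helper names (nccl_type_size, dump_stats), and the per-device byte accounting are hypothetical choices made for illustration.

```cpp
// Minimal illustrative shim (not the paper's implementation): intercept
// ncclAllReduce, count bytes and calls per GPU, and print totals at exit.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <dlfcn.h>
#include <cuda_runtime.h>
#include <nccl.h>
#include <cstdio>
#include <map>
#include <mutex>

static std::map<int, unsigned long long> g_bytes;  // device id -> bytes contributed
static std::map<int, unsigned long long> g_calls;  // device id -> number of calls
static std::mutex g_lock;

// Rough per-element size of an NCCL datatype (assumption: unknown types count as 4 bytes).
static size_t nccl_type_size(ncclDataType_t t) {
  switch (t) {
    case ncclInt8:    case ncclUint8:                    return 1;
    case ncclFloat16:                                    return 2;
    case ncclInt32:   case ncclUint32: case ncclFloat32: return 4;
    case ncclInt64:   case ncclUint64: case ncclFloat64: return 8;
    default:                                             return 4;
  }
}

extern "C" ncclResult_t ncclAllReduce(const void* sendbuff, void* recvbuff,
                                      size_t count, ncclDataType_t datatype,
                                      ncclRedOp_t op, ncclComm_t comm,
                                      cudaStream_t stream) {
  // Resolve the real NCCL entry point the first time the shim is hit.
  using fn_t = ncclResult_t (*)(const void*, void*, size_t, ncclDataType_t,
                                ncclRedOp_t, ncclComm_t, cudaStream_t);
  static fn_t real = reinterpret_cast<fn_t>(dlsym(RTLD_NEXT, "ncclAllReduce"));
  if (!real) return ncclInternalError;

  int dev = -1;
  cudaGetDevice(&dev);  // GPU issuing this collective
  {
    std::lock_guard<std::mutex> g(g_lock);
    g_bytes[dev] += count * nccl_type_size(datatype);
    g_calls[dev] += 1;
  }
  return real(sendbuff, recvbuff, count, datatype, op, comm, stream);
}

// Print the accumulated per-GPU totals when the application exits.
__attribute__((destructor))
static void dump_stats() {
  std::lock_guard<std::mutex> g(g_lock);
  for (const auto& kv : g_bytes)
    std::fprintf(stderr, "GPU %d: %llu bytes across %llu ncclAllReduce calls\n",
                 kv.first, kv.second, g_calls[kv.first]);
}
```

Compiled as a shared object (for example, g++ -shared -fPIC shim.cpp -o libshim.so -ldl -lcudart) and activated with LD_PRELOAD=./libshim.so, such a shim observes every ncclAllReduce the application issues without source changes; a complete tool would also cover the remaining collective and P2P primitives and attribute bytes between GPU pairs to populate the communication matrix.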

Publisher

Springer International Publishing AG

Subject

Computer science

Source

Euro-Par 2021: Parallel Processing Workshops

DOI

10.1007/978-3-031-06156-1_4
