Collective operations are essential communication patterns in parallel programming, particularly in SPMD algorithms. Because their performance often dominates that of the surrounding computation, efficient implementations are highly sought after, and the Message Passing Interface (MPI) addresses this by standardizing a set of collective operations (e.g., broadcast, reduce, all-to-all).
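To make the idea concrete, here is a minimal sketch (plain Python, not the MPI API) simulating one classic collective algorithm: recursive-doubling allreduce. The function name and list-based "ranks" are illustrative assumptions; a real MPI program would call `MPI_Allreduce` and exchange messages between processes.

```python
def allreduce_recursive_doubling(values):
    """Simulate a sum-allreduce over len(values) ranks.

    At step k, each rank exchanges its partial sum with the rank
    whose id differs in bit k; after log2(P) steps every rank holds
    the global sum. This sketch assumes P is a power of two.
    """
    p = len(values)
    assert p & (p - 1) == 0, "sketch assumes a power-of-two rank count"
    partial = list(values)   # each rank's running partial sum
    step = 1
    while step < p:
        # Ranks r and r ^ step swap partials and combine them.
        partial = [partial[r] + partial[r ^ step] for r in range(p)]
        step <<= 1
    return partial           # every rank ends with the same total

result = allreduce_recursive_doubling([1, 2, 3, 4])
```

With four ranks the exchange completes in two steps, and every rank finishes holding the global sum 10; the log2(P) step count is what makes this pattern attractive compared with a naive gather-then-broadcast.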
UC Berkeley
Spring 2020
The course addresses programming parallel computers to solve complex scientific and engineering problems. It covers a range of parallelization strategies for numerical simulation, data analysis, and machine learning, and provides hands-on experience with popular parallel programming tools.