Introduction

Parallel computing is used to solve problems that are too large or too slow for a single processor.
There are a variety of techniques for parallel computing.
Different problems may require different techniques.
Amdahl’s law defines the limit of parallel speedup: if a fraction p of a program can be parallelized, the speedup on N processors is at most 1 / ((1 - p) + p / N).
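The bound from Amdahl’s law can be computed directly. A minimal sketch (the function name is my own, not from these notes):

```python
def amdahl_speedup(p, n):
    """Upper bound on speedup when a fraction p of the work is
    parallelizable and n processors are used (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)
```

Even with 90% of the work parallelized, the speedup can never exceed 10x no matter how many processors are added, because the serial 10% dominates.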
Message Passing

MPI uses the notion of a rank to distinguish processes.
Send and Recv are the fundamental primitives.
Sending a message from one process to another is known as point-to-point communication.
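These notes do not include MPI code, but the send/receive pattern can be sketched by analogy with Python's `multiprocessing.Pipe`; the function names below are illustrative, not MPI calls:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # Recv: block until a message arrives, then send a reply back.
    msg = conn.recv()
    conn.send(msg * 2)
    conn.close()

def point_to_point_demo():
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send(21)        # Send: deliver a value to the other process
    reply = parent_end.recv()  # Recv: block until the reply arrives
    p.join()
    return reply
```

As in MPI, the receive blocks until a matching message arrives, which is what distinguishes this from shared-memory access.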
Non-blocking Communication

MPI provides both blocking and non-blocking methods of communication.
By overlapping communication with computation, non-blocking calls can achieve better performance.
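The overlap idea can be sketched with a future: start the "transfer", do useful work while it is in flight, then wait on the result. This mirrors the MPI_Isend/MPI_Irecv + MPI_Wait pattern, though the code below uses Python threads as a stand-in:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def receive_data():
    # Stands in for a message transfer in flight (e.g. a non-blocking receive).
    time.sleep(0.05)
    return [1, 2, 3]

def overlapped():
    with ThreadPoolExecutor(max_workers=1) as ex:
        pending = ex.submit(receive_data)  # start the "transfer"; returns immediately
        local = sum(range(1000))           # useful computation overlaps the transfer
        data = pending.result()            # like a Wait: block until it completes
    return local, data
```

The computation between submit and result is "free" as long as it takes no longer than the transfer itself.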
Problem Decomposition

Domain decomposition partitions the data of a problem among processes.
Functional decomposition partitions the algorithm itself into distinct tasks.
Some problems are better suited to one type of decomposition than the other.
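A common form of domain decomposition is block partitioning: each rank owns a contiguous slice of a global array, with the remainder spread over the lowest ranks. A sketch (the helper name is my own):

```python
def block_range(n_items, n_ranks, rank):
    """[start, stop) indices of the block owned by `rank` when
    n_items are split as evenly as possible across n_ranks."""
    base, rem = divmod(n_items, n_ranks)
    start = rank * base + min(rank, rem)
    stop = start + base + (1 if rank < rem else 0)
    return start, stop
```

For 10 items on 3 ranks this yields blocks of sizes 4, 3, and 3, covering every index exactly once.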
Collective Operations

Collective communication allows data to be sent to or received from multiple processes simultaneously.
Collective operations fall into four broad categories: synchronization, communication, computation, and I/O.
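A computational collective such as a reduction is typically organized as a tree, so P values can be combined in O(log P) steps rather than P - 1. A sketch of the pairwise combining pattern, with the per-rank values simulated as a list:

```python
def tree_reduce(values, op):
    """Combine one value per rank pairwise, as a tree-structured
    collective reduction would, halving the count each step."""
    vals = list(values)
    while len(vals) > 1:
        vals = [op(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0]
```

This is why collectives should be preferred over hand-written loops of point-to-point sends: the library can exploit such tree schedules (and the network topology) automatically.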
Final Notes