
MA471 Fall 2002

Lecture 5

More Point To Point Communications in MPI

• Note: so far we have covered
  – MPI_Init, MPI_Finalize
  – MPI_Comm_size, MPI_Comm_rank
  – MPI_Send, MPI_Recv
  – MPI_Barrier

• Only MPI_Send and MPI_Recv truly communicate messages.

• These are “point to point” communications, i.e. process-to-process communication
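As a quick refresher, here is a minimal sketch of a blocking point-to-point pair (my own illustration, not from the slides; the tag value 99 is arbitrary):

#include <mpi.h>
#include <stdio.h>

/* Sketch: rank 0 sends one integer to rank 1, which receives and prints it.
   Assumes the job is started with at least two processes. */
int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        MPI_Recv(&value, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}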

MPI_Isend

• Unlike MPI_Send, MPI_Isend returns without waiting for the output buffer to be free for further use

• This mode of action is known as “non-blocking”

• http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Isend.html

MPI_Isend details

MPI_Isend: Begins a nonblocking send

Synopsis
int MPI_Isend( void *buf, int count, MPI_Datatype datatype, int dest,
               int tag, MPI_Comm comm, MPI_Request *request )

Input Parameters
buf       initial address of send buffer (choice)
count     number of elements in send buffer (integer)
datatype  datatype of each send buffer element (handle)
dest      rank of destination (integer)
tag       message tag (integer)
comm      communicator (handle)

Output Parameter
request   communication request (handle)

http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Isend.html
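Putting the synopsis to work, here is a minimal sketch of the usual call pattern (my own illustration, not from the slides): the send buffer must be left untouched until the request has completed.

#include <mpi.h>

/* Sketch: post a nonblocking send of 64 doubles to rank `dest`, then block
   until the send buffer is safe to reuse.  The caller supplies a valid
   destination rank and tag. */
void sketch_isend(double payload[64], int dest, int tag)
{
    MPI_Request request;
    MPI_Status  status;

    MPI_Isend(payload, 64, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &request);

    /* ... other work can go here, as long as payload is not modified ... */

    MPI_Wait(&request, &status);  /* after this, payload may be reused */
}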

MPI_Isend analogy

• Analogy time….

• Isend is like calling the mailperson to take a letter away and receiving a tracking number

• You don’t know if the letter is gone until you check your mailbox (i.e. check online with the tracking number).

• When you know the letter is gone, you can use the letterbox again… (strained analogy).

MPI_Irecv

• Post a non-blocking receive request

• This routine exits without necessarily completing the message receive

• We use MPI_Wait to wait until the requested message has actually arrived

MPI_Irecv details

MPI_Irecv: Begins a nonblocking receive

Synopsis
int MPI_Irecv( void *buf, int count, MPI_Datatype datatype, int source,
               int tag, MPI_Comm comm, MPI_Request *request )

Input Parameters
buf       initial address of receive buffer (choice)
count     number of elements in receive buffer (integer)
datatype  datatype of each receive buffer element (handle)
source    rank of source (integer)
tag       message tag (integer)
comm      communicator (handle)

Output Parameter
request   communication request (handle)

http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Irecv.html
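A matching sketch for the receive side (again my own illustration): the data is only guaranteed to be in the buffer once the request has completed. MPI_Get_count is not covered in these slides, but it is the standard way to read the received element count out of the status.

#include <mpi.h>
#include <stdio.h>

/* Sketch: post a nonblocking receive for up to 64 doubles from rank `source`,
   wait for completion, then ask the status how many elements arrived. */
void sketch_irecv(double buffer[64], int source, int tag)
{
    MPI_Request request;
    MPI_Status  status;
    int         nreceived;

    MPI_Irecv(buffer, 64, MPI_DOUBLE, source, tag, MPI_COMM_WORLD, &request);

    /* the message is not guaranteed to be in buffer until MPI_Wait returns */
    MPI_Wait(&request, &status);

    MPI_Get_count(&status, MPI_DOUBLE, &nreceived);
    printf("received %d doubles from rank %d\n", nreceived, status.MPI_SOURCE);
}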

MPI_Irecv analogy

• Analogy time….

• Irecv is like telling the mailbox to anticipate the delivery of a letter.

• You don’t know if the letter has arrived until you check your mailbox (i.e. check online with the tracking number).

• When you know the letter is here, you can open it and read it.

MPI_Wait

• Wait for a requested MPI operation to complete

MPI_Wait details

• MPI_Wait: Waits for an MPI send or receive to complete

Synopsis
int MPI_Wait( MPI_Request *request, MPI_Status *status )

Input Parameter
request   request (handle)

Output Parameter
status    status object (Status). May be MPI_STATUS_IGNORE.

http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Wait.html
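The example that follows completes the send and receive requests with two separate MPI_Wait calls. As an aside not covered in these slides, MPI also provides MPI_Waitall to complete an array of requests in one call; a sketch:

#include <mpi.h>

/* Sketch: complete a send request and a receive request together.
   Equivalent to calling MPI_Wait on each request separately. */
void sketch_waitall(MPI_Request *isend_request, MPI_Request *irecv_request)
{
    MPI_Request requests[2] = { *isend_request, *irecv_request };
    MPI_Status  statuses[2];

    MPI_Waitall(2, requests, statuses);   /* returns once both are complete */

    /* completed handles come back as MPI_REQUEST_NULL */
    *isend_request = requests[0];
    *irecv_request = requests[1];
}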

Example: Isend, Irecv, Wait Sequence

MPI_Request ISENDrequest;
MPI_Status  ISENDstatus;
MPI_Request IRECVrequest;
MPI_Status  IRECVstatus;

char *bufout = strdup("Hello");
int bufoutlength = strlen(bufout);
int bufinlength = bufoutlength; /* for convenience */
char *bufin = (char*) calloc(bufinlength, sizeof(char));

int TOprocess = (Nprocs-1) - procID;
int TOtag = 10000*procID + TOprocess;

int FROMprocess = (Nprocs-1) - procID;
int FROMtag = 10000*FROMprocess + procID;

fprintf(stdout, "Sending: %s To process %d \n", bufout, TOprocess);

info = MPI_Isend(bufout, bufoutlength, MPI_CHAR, TOprocess, TOtag,
                 MPI_COMM_WORLD, &ISENDrequest);

info = MPI_Irecv(bufin, bufinlength, MPI_CHAR, FROMprocess, FROMtag,
                 MPI_COMM_WORLD, &IRECVrequest);

fprintf(stdout, "Process %d just about to wait for requests to finish\n", procID);

MPI_Wait(&IRECVrequest, &IRECVstatus);
MPI_Wait(&ISENDrequest, &ISENDstatus);

fprintf(stdout, "Received: %s\n From process: %d\n", bufin, FROMprocess);

The Isend status wait is a courtesy to make sure that the message has gone out before we go to finalize.

Profiling Your Code Using Upshot

• With these parallel codes it can be difficult to foresee every way the code can behave

• In the following we will see upshot in action

• Upshot is distributed with the MPE tools in the MPICH release (for the most part)

Example 1: Profiling MPI_Send and MPI_Recv

Instructions For Using Upshot

Add -mpilog to the compile flags

Clean and Recompile ON BLACKBEAR (BB):
1) cp -r ~cs471aa/MA471Lec5 ~/
2) cd ~/MA471Lec5
3) make -f Makefile.mpeBB clean
4) make -f Makefile.mpeBB
5) qsub MPIcommuning
6) % use 'qstat' to make sure the run has finished
7) clog2alog MPIcommuning
8) % make sure that a file MPIcommuning.alog has been created
9) % set up an xserver on your current PC
10) upshot MPIcommuning.alog

/* initiate MPI */
int info = MPI_Init(&argc, &argv);

/* NOW we can do stuff */
int Nprocs, procID;

/* find the number of processes */
MPI_Comm_size(MPI_COMM_WORLD, &Nprocs);

/* find the unique identity of this process */
MPI_Comm_rank(MPI_COMM_WORLD, &procID);

/* insist that all processes have to go through this routine
   before the next commands */
info = MPI_Barrier(MPI_COMM_WORLD);

/* test a send and recv pair of operations */
{
  MPI_Status recvSTATUS;

  char *bufout = strdup("Hello");
  int bufoutlength = strlen(bufout);
  int bufinlength = bufoutlength; /* for convenience */
  char *bufin = (char*) calloc(bufinlength, sizeof(char));

  int TOprocess = (Nprocs-1) - procID;
  int TOtag = 10000*procID + TOprocess;

  int FROMprocess = (Nprocs-1) - procID;
  int FROMtag = 10000*FROMprocess + procID;

  fprintf(stdout, "Sending: %s To process %d \n", bufout, TOprocess);

  info = MPI_Send(bufout, bufoutlength, MPI_CHAR, TOprocess, TOtag,
                  MPI_COMM_WORLD);

  info = MPI_Recv(bufin, bufinlength, MPI_CHAR, FROMprocess, FROMtag,
                  MPI_COMM_WORLD, &recvSTATUS);

  fprintf(stdout, "Received: %s\n From process: %d\n", bufin, FROMprocess);
}

info = MPI_Finalize();
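An aside that is not on the original slides: MPI_Send is permitted to block until a matching receive has been posted, so for a large enough message this pairwise exchange could deadlock with both partners stuck in MPI_Send (the short "Hello" message is typically buffered, which is why the example works in practice). One fix that uses only the routines covered so far is to order the calls by rank; here is a sketch using the same partner and tag conventions as the example:

#include <mpi.h>
#include <string.h>

/* Sketch: deadlock-safe blocking exchange between procID and its partner
   (Nprocs-1)-procID.  The lower rank sends first and the higher rank
   receives first, so the two sides never both block in MPI_Send. */
void safe_exchange(char *bufout, char *bufin, int length, int procID, int Nprocs)
{
    int partner = (Nprocs-1) - procID;
    int TOtag   = 10000*procID  + partner;
    int FROMtag = 10000*partner + procID;
    MPI_Status status;

    if (procID == partner) {
        /* middle rank when Nprocs is odd: just copy locally */
        memcpy(bufin, bufout, length);
    } else if (procID < partner) {
        MPI_Send(bufout, length, MPI_CHAR, partner, TOtag,   MPI_COMM_WORLD);
        MPI_Recv(bufin,  length, MPI_CHAR, partner, FROMtag, MPI_COMM_WORLD, &status);
    } else {
        MPI_Recv(bufin,  length, MPI_CHAR, partner, FROMtag, MPI_COMM_WORLD, &status);
        MPI_Send(bufout, length, MPI_CHAR, partner, TOtag,   MPI_COMM_WORLD);
    }
}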

Results Viewed In Upshot

Click on “Setup”

The Main Upshot Viewer

This should appear after pressing “Setup”

Time History

The horizontal axis is physical time, running left to right.

Time History

Each MPI call is color coded on each process (one horizontal bar per process).

Zoom in on Profile

(1) Process 1 sends message to process 4
(2) Process 4 receives message from process 1

Zoom in on Profile

(1) Process 2 sends message to process 3
(2) Process 3 receives message from process 2

Observations

Example 2: Profiling MPI_Isend and MPI_Irecv

MPI_Request ISENDrequest;
MPI_Status  ISENDstatus;
MPI_Request IRECVrequest;
MPI_Status  IRECVstatus;

char *bufout = strdup("Hello");
int bufoutlength = strlen(bufout);
int bufinlength = bufoutlength; /* for convenience */
char *bufin = (char*) calloc(bufinlength, sizeof(char));

int TOprocess = (Nprocs-1) - procID;
int TOtag = 10000*procID + TOprocess;

int FROMprocess = (Nprocs-1) - procID;
int FROMtag = 10000*FROMprocess + procID;

fprintf(stdout, "Sending: %s To process %d \n", bufout, TOprocess);

info = MPI_Isend(bufout, bufoutlength, MPI_CHAR, TOprocess, TOtag,
                 MPI_COMM_WORLD, &ISENDrequest);

info = MPI_Irecv(bufin, bufinlength, MPI_CHAR, FROMprocess, FROMtag,
                 MPI_COMM_WORLD, &IRECVrequest);

fprintf(stdout, "Process %d just about to wait for requests to finish\n", procID);

MPI_Wait(&IRECVrequest, &IRECVstatus);
MPI_Wait(&ISENDrequest, &ISENDstatus);

fprintf(stdout, "Received: %s\n From process: %d\n", bufin, FROMprocess);

Profile for Isend, Irecv, Wait sequence

Notice: before I called Wait, the process could have done a bunch of other operations, i.e. avoided all that wasted compute time while the message is in transit!

Notice that not much time is spent in Irecv

With Work Between (Isend, Irecv) and Wait

The neat point here is that while the message was in transit the process could get on and do some computations…
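A sketch of that pattern (do_local_work is a hypothetical placeholder for any computation that does not touch the message buffers):

#include <mpi.h>

/* hypothetical placeholder: computation that depends on neither bufout nor bufin */
static void do_local_work(void) { /* ... */ }

/* Sketch: post the nonblocking calls, compute while the message is in
   transit, and only then wait for the requests to complete. */
void overlap_example(char *bufout, char *bufin, int length,
                     int TOprocess, int TOtag, int FROMprocess, int FROMtag)
{
    MPI_Request sendrequest, recvrequest;
    MPI_Status  sendstatus,  recvstatus;

    MPI_Isend(bufout, length, MPI_CHAR, TOprocess,   TOtag,   MPI_COMM_WORLD, &sendrequest);
    MPI_Irecv(bufin,  length, MPI_CHAR, FROMprocess, FROMtag, MPI_COMM_WORLD, &recvrequest);

    do_local_work();   /* runs while the message travels */

    MPI_Wait(&recvrequest, &recvstatus);   /* bufin is now valid */
    MPI_Wait(&sendrequest, &sendstatus);   /* bufout may now be reused */
}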

Close up: the Isends and Irecvs are marked in the profile view.

Lab Activity

• We will continue with the parallel implementation of your card games

• Use upshot to profile your code’s parallel activity and include this in your presentations and reports

• Anyone ready to report yet?

Next Lecture

• Global MPI communication routines

• Building a simple finite element solver for Poisson’s equation

• Making the Poisson solver parallel …
