Typed MPI - A Data Type Tool for MPI
Nitin Bahadur and
Florentina Irina Popovici
{ bnitin, pif }@cs.wisc.edu
Nitin Bahadur, Florentina Irina Popovici, Dec 1999
Why a Data Type Tool for MPI?
• No type checking of messages at the receiver end
• The user has to manually construct MPI datatypes corresponding to user-defined data types
struct Partstruct {
    int    class;
    double d[6];
    char   b[7];
};
struct Partstruct particle;

/* build datatype describing the struct */
MPI_Datatype Particletype;
MPI_Datatype type[3]  = { MPI_INT, MPI_DOUBLE, MPI_CHAR };
int          block[3] = { 1, 6, 7 };
MPI_Aint     disp[3];

MPI_Address( &particle, disp );
MPI_Address( particle.d, disp + 1 );
MPI_Address( particle.b, disp + 2 );
MPI_Type_struct( 3, block, disp, type, &Particletype );
MPI_Type_commit( &Particletype );
MPI_Send( MPI_BOTTOM, 1, Particletype, dest, tag, comm );
User code for constructing an MPI datatype for a C struct
Objectives of our interface:
• Automatic generation of MPI datatypes
• Type checking and error reporting to both sender and receiver
• Seamless integration within a C program
• Facility to send user-defined datatypes such as structures
• Facility to send multiple data in one send call (allowing arbitrary grouping of data to be sent)
Design Overview
Network Communication
1. User Application
2. Wrapper call code
3. Runtime Typed MPI
4. MPI Internals
Structure of an application that uses Typed MPI (layers 2 and 3 represent Typed MPI)
Code generated at compile time. It will perform:
• signature generation
• data and signature packaging
• signature checking (receiver end)
• error reporting
Support for runtime
Format of calls for blocking Send / Receive:

int MPI_Wrapper_Send( int dest, int tag, MPI_Comm comm,
                      void *buf1, int count1, ... );
int MPI_Wrapper_Recv( int source, int tag, MPI_Comm comm, MPI_Status *status,
                      void *buf1, int count1, ... );
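A sketch (not the actual Typed MPI source) of how such a variadic wrapper can walk its trailing (buffer, count) pairs in plain C. The NULL terminator and the helper name total_items are assumptions for illustration; the real tool knows the argument list from preprocessing.

```c
#include <stdarg.h>
#include <stddef.h>

/* Walk the variadic (buffer, count) pairs, counting how many pairs
   were passed and summing the element counts. A NULL buffer ends the
   list (illustrative sentinel only). */
static int total_items(int *npairs, ...)
{
    va_list ap;
    int pairs = 0, items = 0;

    va_start(ap, npairs);
    for (;;) {
        void *buf = va_arg(ap, void *);
        if (buf == NULL)           /* sentinel: end of pair list */
            break;
        items += va_arg(ap, int);  /* the count paired with buf */
        pairs++;
    }
    va_end(ap);

    *npairs = pairs;
    return items;
}
```

For example, a call mirroring MPI_Wrapper_Send(dest, tag, comm, &a, 1, &b, 2) would see two pairs and three items in total.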
How does Typed MPI work?

Phase 1: preprocess the user files
• determine the types of the wrapper call parameters
• generate the signature
• generate code for packaging data
• generate code for sending/receiving data
• generate code for error checking/reporting

Phase 2: runtime
• execute the program, make the actual MPI calls and do the error checking and reporting
Sender (in time order):
1. Construct signature
2. Pack data
3. Send data (MPI_Send)
4. Receive ack. from receiver

Receiver (in time order):
1. Construct signature
2. Receive data (MPI_Recv)
3. Unpack data
4. Compare received signature with the generated one
5. Send ack. / report error to the sender

Logical sequence of steps performed for completion of an MPI_Wrapper_Send - MPI_Wrapper_Recv operation
Rules for type checking. Two types T1 and T2 are identical if:
• T1 and T2 have the same primitive type (e.g. int, char)
• T1 and T2 are arrays and they have the same base type and the same size (int p[5] and char q[5] are different)
• T1 and T2 are structures and their members have the same types

A signature is:
• generated for each variable to be sent/received, based on its type
• sent by the sender along with the data
• compared at the receiver's end with the expected signature (an error code will be returned to the sender)
Send / Receive Protocol (type checking):
1. Sender: send data
2. Receiver: receive data
3. Receiver: check type
4. Receiver: return error / success code
5. Sender: receive success / error code
Signature: encompasses the entire type of a variable

• depth-first approach to signature construction
• the type of a variable is broken down recursively to generate the signature

struct example1 {
    int   a;
    float b[5];
};

Signature generated:
{ START, VAR_START, STRUCT_START,
  TYPE_INT, 1, TYPE_ARRAY, TYPE_FLOAT, 5,
  ST_UN_END, END }

[Figure: recognition of a complex (nested) C struct, e.g.
struct eg1 { int a; struct eg2 { int b; float c[5]; } s; };
each level of nesting adds a STRUCT_START / ST_UN_END pair around its members' tokens.]
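The depth-first construction can be sketched in plain C. The tiny recursive type description (struct type) and the function names are assumptions for illustration; only the token names and their order come from the slide.

```c
#include <stddef.h>

enum tok { START, VAR_START, STRUCT_START, TYPE_INT, TYPE_FLOAT,
           TYPE_ARRAY, ST_UN_END, END };

struct type {                 /* minimal recursive type description */
    enum tok kind;            /* TYPE_INT, TYPE_FLOAT, TYPE_ARRAY, STRUCT_START */
    int count;                /* scalar count / array size / member count */
    const struct type *sub;   /* array element type or struct member list */
};

/* Depth-first walk: each type emits its tokens, recursing into
   aggregates, as the bullets above describe. Returns the new write
   position in out. */
static int emit(const struct type *t, int *out, int pos)
{
    switch (t->kind) {
    case TYPE_INT:
    case TYPE_FLOAT:
        out[pos++] = t->kind;
        out[pos++] = t->count;          /* 1 for a plain variable */
        break;
    case TYPE_ARRAY:
        out[pos++] = TYPE_ARRAY;
        out[pos++] = t->sub->kind;      /* base type of the array */
        out[pos++] = t->count;          /* array size */
        break;
    case STRUCT_START:
        out[pos++] = STRUCT_START;
        for (int i = 0; i < t->count; i++)
            pos = emit(&t->sub[i], out, pos);
        out[pos++] = ST_UN_END;
        break;
    default:
        break;
    }
    return pos;
}

/* Build the full signature for one variable of type t; returns its length. */
static int signature(const struct type *t, int *out)
{
    int pos = 0;
    out[pos++] = START;
    out[pos++] = VAR_START;
    pos = emit(t, out, pos);
    out[pos++] = END;
    return pos;
}
```

Run on a description of struct example1, this yields exactly the ten-token sequence shown above.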
Preprocessing steps
/* Initial user code */
...
MPI_Wrapper_Send (dest, tag, comm, &a, 1, &b, 2);
...
MPI_Wrapper_Recv (src, tag, comm, &status, &a, 1, &b, 2);
...

/* Modified user source code */
...
MPI_Wrapper_Send79205397130821 (dest, tag, comm, &a, 1, &b, 2);
...
MPI_Wrapper_Recv79205397345131 (src, tag, comm, &status, &a, 1, &b, 2);
...

/* Generated file 1 */
int MPI_Wrapper_Send79205397130821 (int, int, MPI_Comm, void *, int, void *, int)
{
    /* signature and data packaging, error reporting */
}

/* Generated file 2 */
int MPI_Wrapper_Recv79205397345131 (int, int, MPI_Comm, MPI_Status *, void *, int, void *, int)
{
    /* signature and data packaging, error reporting */
}
Preprocessor and generator → files resulting after preprocessing and source code generation → compilation → executable

A new file is generated for every MPI_Wrapper call, and it contains the code for that call.
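A minimal sketch of the "signature and data packaging" step a generated wrapper performs, assuming the signature tokens are simply laid out in front of the raw data in one contiguous buffer (the real code could equally use MPI_Pack). The helper name package is hypothetical.

```c
#include <string.h>
#include <stddef.h>

/* Copy the signature tokens, then the raw user data, into one
   contiguous buffer so a single send can carry both. Returns the
   total number of bytes written into out. */
static size_t package(void *out,
                      const int *sig, size_t sig_len,
                      const void *data, size_t data_bytes)
{
    unsigned char *p = out;
    size_t sig_bytes = sig_len * sizeof *sig;

    memcpy(p, sig, sig_bytes);               /* signature first */
    memcpy(p + sig_bytes, data, data_bytes); /* then the raw data */
    return sig_bytes + data_bytes;
}
```

The receiver mirrors this: it reads the leading tokens, runs the type check, and only then unpacks the data portion.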
Performance Evaluation
Ping Pong for blocking communication
[Figure: time in microsec (log scale, 1 to 10000) vs. data size (50 to 3200), comparing Ordinary MPI and Wrapper Calls]

Ping Pong for non-blocking communication
[Figure: time in microsec (log scale, 1 to 10000) vs. data size (100 to 3200), comparing Ordinary MPI and Wrapper Calls]
Operation                                                   Time (in millisec)
Preprocessing (without call to generate the code)                  120
Generation of code for Send / Receive of primitive data             80
Generation of code for Send / Receive of a structure                90
Type checking overhead (graphs above); preprocessing overhead (table above).
• The graphs and the table show that our design is feasible for automatic type generation and type checking.
• The overhead of type checking is overshadowed by data transmission time as the data size increases.
Conclusions
• A simple interface for sending user defined datatypes and multiple data in one call
• Strict type checking and error reporting to both sender and receiver
• Low preprocessing time and runtime overhead
• Error reporting to the sender can be disabled (if required) to reduce the number of data transfers
Future Work
• Extensions for Group Communication and Vector operations
• Handling of expressions in MPI_Wrapper calls
• Feasibility study for use of hashing functions to reduce size of signature