Calling sequences of the RPN_COMM MPI subroutines
| Name | Description |
| --- | --- |
| rpn_comm.cfg | Use of domains and the rpn_comm.cfg file |
| RPN_COMM_adj_halo | Exchange of halos in adjoint mode |
| RPN_COMM_adj_halox | Exchange of halos in adjoint mode |
| RPN_COMM_allgather | Gathers data from all PEs and delivers it to all |
| RPN_COMM_allreduce | Combines values from all PEs and distributes the result to all (see the reduction sketch after this table) |
| RPN_COMM_alltoall | Sends data from all PEs to all PEs |
| RPN_COMM_barrier | Blocks until all PEs have reached this point before continuing |
| RPN_COMM_bcast | Broadcasts a message (all types) from one PE to all PEs |
| RPN_COMM_bcastc | Broadcasts a message (characters) from one PE to all PEs |
| RPN_COMM_bcst_world | Broadcasts a message (integer or float type) from PE 0 to all PEs |
| RPN_COMM_bloc | Partitions the domain into blocks of PEs |
| RPN_COMM_carac | Gives the characteristics of the local PE |
| RPN_COMM_coll | Gathers data from all PEs (local tiles) into a global array on PE 0 |
| RPN_COMM_comm | Returns the MPI identifier of the specified communicator |
| RPN_COMM_compress | Compresses an array by removing the halo |
| RPN_COMM_defo | Sets the default "DEFO" communicator |
| RPN_COMM_dist | Distributes global data from PE 0 to the other PEs, so that each PE holds its own local subdomain |
| RPN_COMM_finalize | Ends MPI mode |
| RPN_COMM_gather | Gathers together values from a group of processes |
| RPN_COMM_gatherv | Gathers together values of different sizes from a group of processes |
| RPN_COMM_globalsum | Global summation of a REAL*4 field, independent of the PE distribution |
| RPN_COMM_grank | Checks whether the local PE is a member of a given communicator |
| RPN_COMM_init | Initializes MPI mode (see the life-cycle sketch after this table) |
| RPN_COMM_limit | Generic algorithm for splitting a grid |
| RPN_COMM_move | Friendly version of send/recv for moving data to a neighbor |
| RPN_COMM_mype | Returns the rank and position of the local PE in its domain |
| RPN_COMM_petopo | Sets up the process topology |
| RPN_COMM_rank | Stub for MPI_Comm_rank |
| RPN_COMM_recv | Performs a basic receive |
| RPN_COMM_reduce | Reduces values on all processes to a single value |
| RPN_COMM_send | Performs a basic send |
| RPN_COMM_sendrecv | Combination of a send and a receive operation |
| RPN_COMM_size | Stub for MPI_Comm_size |
| RPN_COMM_split | Generic call for RPN_COMM_topo |
| RPN_COMM_swapns | North/south exchange of data |
| RPN_COMM_topo | Generates relative information about a local tile along a specified axis |
| RPN_COMM_transpose | Changes the distribution (mapping) of the global domain over the local tiles |
| RPN_COMM_transpose48 | Changes the distribution (mapping) of the global domain over the local tiles (REAL*4 to REAL*8) |
| RPN_COMM_unit | Changes the default output unit |
| RPN_COMM_version | Returns the package version number |
| RPN_COMM_xch_halo | Exchanges halos with the north, south, east, and west neighbors (see the halo-exchange sketch after this table) |
| RPN_COMM_xch_halox | Exchanges halos like RPN_COMM_xch_halo, but returns a global row containing all the local rows concatenated |
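
The index above documents each routine individually; for orientation, here is a minimal life-cycle sketch using RPN_COMM_init, RPN_COMM_bcast, RPN_COMM_barrier, and RPN_COMM_finalize. The argument order, the 'GRID' communicator name, the string type token, and the Userinit callback shape are assumptions drawn from typical RPN_COMM usage, not authoritative interfaces; consult each routine's page for the exact calling sequence.

```fortran
! Minimal life-cycle sketch (assumed calling sequences; see each routine's page)
subroutine my_init(npex, npey)
  ! hypothetical Userinit callback passed to RPN_COMM_init; leaving
  ! npex/npey untouched accepts the PE topology from rpn_comm.cfg
  integer :: npex, npey
end subroutine my_init

program rpn_comm_demo
  implicit none
  external my_init
  integer :: mype, numpe, npex, npey, ierr
  integer :: nval(3)

  npex = 0 ; npey = 0                   ! let my_init / rpn_comm.cfg pick the PE grid
  call RPN_COMM_init(my_init, mype, numpe, npex, npey)

  if (mype == 0) nval = (/ 10, 20, 30 /)
  ! broadcast 3 integers from PE 0 to every PE of the 'GRID' communicator
  call RPN_COMM_bcast(nval, 3, 'MPI_INTEGER', 0, 'GRID', ierr)

  call RPN_COMM_barrier('GRID', ierr)   ! make all PEs meet here
  call RPN_COMM_finalize(ierr)          ! leave MPI mode
end program rpn_comm_demo
```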
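A reduction sketch follows the same pattern: RPN_COMM_allreduce is described above as combining values from all PEs and distributing the result to all, which suggests an MPI_Allreduce-style calling sequence. The string tokens below ('MPI_REAL', 'MPI_SUM', 'GRID') are assumptions.

```fortran
! Reduction sketch (assumed MPI_Allreduce-style calling sequence)
subroutine sum_over_pes(local_sum, global_sum)
  implicit none
  real, intent(in)  :: local_sum
  real, intent(out) :: global_sum
  integer :: ierr
  ! combine one REAL value from every PE and return the sum to all of them
  call RPN_COMM_allreduce(local_sum, global_sum, 1, 'MPI_REAL', 'MPI_SUM', 'GRID', ierr)
end subroutine sum_over_pes
```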
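Finally, a sketch of a halo exchange on a local tile with its four neighbors. The argument order shown (field, array bounds, tile dimensions, halo widths, periodicity flags, global NI, pole-row count) is an assumption based on common RPN_COMM_xch_halo calls; see that routine's page for the authoritative interface.

```fortran
! Halo-exchange sketch (assumed argument order; see the RPN_COMM_xch_halo page)
subroutine exchange_halos(f, lni, lnj, nk, gni)
  implicit none
  integer, intent(in) :: lni, lnj, nk, gni            ! local tile dims, global NI
  integer, parameter  :: halox = 2, haloy = 2         ! halo widths
  real, intent(inout) :: f(1-halox:lni+halox, 1-haloy:lnj+haloy, nk)

  ! fill the halos of f from the north, south, east and west neighbors;
  ! x is periodic here, y is not; the final 0 means no special pole rows
  call RPN_COMM_xch_halo(f, 1-halox, lni+halox, 1-haloy, lnj+haloy, &
                         lni, lnj, nk, halox, haloy,                &
                         .true., .false., gni, 0)
end subroutine exchange_halos
```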