
Asynchronous Send and Receive in Message-Passing-Based Concurrent Programming (MPI)

2022-06-22 07:46:00 · Leon_George

4.2 MPI Asynchronous Communication (Non-Blocking Send/Receive)

  • Synchronous communication: a send operation is synchronous when it returns only after the send has fully completed (the receiver has received the message); likewise, a receive operation is synchronous when it returns only after the receive has fully completed (the message has actually arrived). MPI_Send and MPI_Recv, discussed earlier, are synchronous, i.e. blocking, send/receive functions.

  • Asynchronous communication: a send operation is asynchronous when it may return before the send has completed (it returns immediately, whether or not the receiver has received the message); likewise, a receive operation is asynchronous when it may return before the receive has completed (it returns immediately, whether or not a message has arrived). MPI_Isend and MPI_Irecv, described below, are asynchronous, i.e. non-blocking, send/receive functions.

    Function                                  Description
    MPI_Isend(void *buff,
              int count,
              MPI_Datatype type,
              int dest,
              int tag,
              MPI_Comm comm,
              MPI_Request *request)
    Sends a point-to-point asynchronous message to process dest in communicator comm.
    The call returns immediately; the output parameter request is a handle for tracking the completion of the send.
    Note: do not modify the contents of the message buffer buff before the send is confirmed complete.
    MPI_Irecv(void *buff,
              int count,
              MPI_Datatype type,
              int source,
              int tag,
              MPI_Comm comm,
              MPI_Request *request)
    Receives a point-to-point asynchronous message from process source in communicator comm into buff.
    The call returns immediately; the output parameter request is a handle for tracking the completion of the receive.
    Note: do not read or modify the contents of the message buffer buff before the receive is confirmed complete.
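
    As a minimal sketch of the calling convention (a hypothetical two-rank exchange; MPI_Wait, used here to complete the requests, is described just below):

      #include "mpi.h"
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int myid, val = 0;
          MPI_Request rq;
          MPI_Status st;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &myid);

          if ( myid == 0 )
          {
              val = 42;
              MPI_Isend(&val, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &rq);
              /* ... work that does not modify val can overlap here ... */
              MPI_Wait(&rq, &st);          // send complete: val may be reused
          }
          else if ( myid == 1 )
          {
              MPI_Irecv(&val, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &rq);
              /* ... work that does not read val can overlap here ... */
              MPI_Wait(&rq, &st);          // receive complete: val is valid
              printf("rank 1 received %d\n", val);
          }

          MPI_Finalize();
          return 0;
      }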
    • Once you start an asynchronous receive/send operation, you must not touch the data in its buffer until the operation completes. How do you know when it has finished? See the following two functions:

      Function                                 Description
      MPI_Test(MPI_Request *request,
               int *flag,
               MPI_Status *status)
      Non-blocking. Given the input parameter request, it reports the state of the associated asynchronous receive/send operation.
      The output parameter flag takes two kinds of values:
      0: the asynchronous receive/send operation has not yet completed;
      non-zero: the operation has completed, and status holds information about the message (status.MPI_TAG, status.MPI_SOURCE).
      MPI_Wait(MPI_Request *request,
               MPI_Status *status)
      Blocks until the request (associated with an asynchronous receive/send operation) completes; status is filled in as above.
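
      As a hedged illustration (a hypothetical program, assuming at least two ranks), MPI_Test can be polled so that useful work proceeds while the message is still in flight:

      #include "mpi.h"
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int myid, val = 0, flag = 0, polls = 0;
          MPI_Request rq;
          MPI_Status st;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &myid);

          if ( myid == 0 )
          {
              MPI_Irecv(&val, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &rq);
              while ( !flag )
              {
                  MPI_Test(&rq, &flag, &st);   // returns immediately
                  if ( !flag )
                      polls++;                 // stand-in for overlapped computation
              }
              printf("received %d from rank %d (tag %d) after %d polls\n",
                     val, st.MPI_SOURCE, st.MPI_TAG, polls);
          }
          else if ( myid == 1 )
          {
              val = 42;
              MPI_Send(&val, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
          }

          MPI_Finalize();
          return 0;
      }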
  • A synchronous (blocking) program for finding the extreme values (min and max) of an array

    • Code example:

      #include "mpi.h"
      #include <stdio.h>
      #include <stdlib.h>
      #define MAX 10

      int num_procs;

      double x[MAX] = {1, 3, 56, 24, 54, 54, 35, 245, 23, 52};   // Input array

      int main(int argc, char *argv[])
      {
         int i, start, stop;
         int myid;
         double my_min, others_min;       // Minimum
         double my_max, others_max;       // Maximum
         MPI_Status st;

         MPI_Init(&argc, &argv);                     // Initialize

         MPI_Comm_size(MPI_COMM_WORLD, &num_procs);  // Get # processes
         MPI_Comm_rank(MPI_COMM_WORLD, &myid);       // Get my rank (id)

          /* --------------------------------------
             Find the min. among my numbers
             -------------------------------------- */
          int n = MAX/num_procs;

          start = myid * n;                // Start of this process's subarray

          if ( myid != (num_procs-1) )     // The last process takes any leftover elements
          {
              stop = start + n;
          }
          else
          {
              stop = MAX;
          }

          my_min = x[start];               // Assume the first element is the minimum

          for (i = start+1; i < stop; i = i + 1)
          {
              if ( x[i] < my_min )
                  my_min = x[i];
          }

          if ( myid == 0 )
          {
              /* -------------------------------------
                 Get the min from others and compare
                 ------------------------------------- */
              for (i = 1; i < num_procs; i++)
              {
                  MPI_Recv(&others_min, 1, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &st);
                  if ( others_min < my_min )
                      my_min = others_min;
              }
              printf("The min number: %f\n", my_min);
          }
          else
          {
              MPI_Send(&my_min, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
          }

          /* --------------------------------------
             Now find the max. among my numbers
             -------------------------------------- */
          my_max = x[start];

          for (i = start+1; i < stop; i = i + 1)
          {
              if ( x[i] > my_max )
                  my_max = x[i];
          }

          if ( myid == 0 )
          {
              /* -------------------------------------
                 Get the max from others and compare
                 ------------------------------------- */
              for (i = 1; i < num_procs; i++)
              {
                  MPI_Recv(&others_max, 1, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &st);
                  if ( others_max > my_max )
                      my_max = others_max;
              }
              printf("The max number: %f\n", my_max);
          }
          else
          {
              MPI_Send(&my_max, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
          }

          MPI_Finalize();
          return 0;
      }
      
    • In this code, because the send/receive functions are blocking (synchronous), each process must wait for its receive/send to complete after its local computation before it can move on to the next step, which wastes valuable time. The better approach is to defer the synchronization to the end of the program, and that requires the asynchronous mechanism described below.

    • Note, however, that with asynchronous communication each outstanding receive/send must use its own buffer, so that no buffer is polluted (overwritten or read prematurely) before its operation completes; see the sketch after this paragraph.
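
      As a hypothetical sketch of this rule, the fragment below gives each outstanding send its own buffer element (MPI_Waitall is the standard convenience form of MPI_Wait for an array of requests); the full example that follows applies the same per-rank-buffer pattern:

      #include "mpi.h"
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          double out[2] = {1.0, 2.0};   // one buffer element per pending send
          double in[2];
          MPI_Request rq[2];
          MPI_Status st[2];
          int myid;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &myid);

          if ( myid == 1 )
          {
              MPI_Isend(&out[0], 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &rq[0]);
              // WRONG alternative: overwriting out[0] here and sending it
              // again could pollute the message still in flight.
              MPI_Isend(&out[1], 1, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD, &rq[1]);
              MPI_Waitall(2, rq, st);   // only now may out[] be modified
          }
          else if ( myid == 0 )
          {
              MPI_Recv(&in[0], 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &st[0]);
              MPI_Recv(&in[1], 1, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD, &st[1]);
              printf("got %f and %f\n", in[0], in[1]);
          }

          MPI_Finalize();
          return 0;
      }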

  • An asynchronous (non-blocking) program for finding the extreme values (min and max) of an array

    • Code example:

      #include "mpi.h"
      #include <stdio.h>
      #include <stdlib.h>
      #define MAX 10

      int num_procs;

      double x[MAX] = {1, 3, 56, 24, 54, 54, 35, 245, 23, 52};   // Input array

      int main(int argc, char *argv[])
      {
         int i, start, stop;
         int myid;
         double my_min;                   // Local minimum
         double others_min[100];          // Per-rank minima collected by rank 0
         double my_max;                   // Local maximum
         double others_max[100];          // Per-rank maxima collected by rank 0

         MPI_Status st;
         MPI_Request rq_min[100], rq_max[100];   // Request handles (assumes num_procs <= 100)

         MPI_Init(&argc, &argv);                     // Initialize
         MPI_Comm_size(MPI_COMM_WORLD, &num_procs);  // Get # processes
         MPI_Comm_rank(MPI_COMM_WORLD, &myid);       // Get my rank (id)

          /* --------------------------------------
             Find the min. among my numbers
             -------------------------------------- */
          int n = MAX/num_procs;

          start = myid * n;                // Start of this process's subarray

          if ( myid != (num_procs-1) )     // The last process takes any leftover elements
          {
              stop = start + n;
          }
          else
          {
              stop = MAX;
          }

          my_min = x[start];               // Assume the first element is the minimum

          for (i = start+1; i < stop; i = i + 1)
          {
              if ( x[i] < my_min )
                  my_min = x[i];
          }

          if ( myid == 0 )
          {
              /* -------------------------------------
                 Post receives for the other ranks' minima;
                 do not wait for them yet
                 ------------------------------------- */
              for (i = 1; i < num_procs; i++)
              {
                  MPI_Irecv(&others_min[i], 1, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &rq_min[i]);
              }
          }
          else
          {
              MPI_Isend(&my_min, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &rq_min[0]);
          }

          /* --------------------------------------
             Now find the max. among my numbers
             (overlapped with the communication above)
             -------------------------------------- */
          my_max = x[start];

          for (i = start+1; i < stop; i = i + 1)
          {
              if ( x[i] > my_max )
                  my_max = x[i];
          }

          if ( myid == 0 )
          {
              /* -------------------------------------
                 Post receives for the other ranks' maxima
                 ------------------------------------- */
              for (i = 1; i < num_procs; i++)
              {
                  MPI_Irecv(&others_max[i], 1, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &rq_max[i]);
              }
          }
          else
          {
              MPI_Isend(&my_max, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &rq_max[0]);
          }

          /* --------------------------------------
             Now synchronize to compute the results
             -------------------------------------- */
          if ( myid == 0 )
          {
              for (i = 1; i < num_procs; i++)
              {
                  MPI_Wait(&rq_min[i], &st);
                  if ( others_min[i] < my_min )
                      my_min = others_min[i];
              }

              for (i = 1; i < num_procs; i++)
              {
                  MPI_Wait(&rq_max[i], &st);
                  if ( others_max[i] > my_max )
                      my_max = others_max[i];
              }

              printf("min = %f\n", my_min);
              printf("max = %f\n", my_max);
          }
          else
          {
              // The other processes must wait until their messages
              // have been received before exiting!
              MPI_Wait(&rq_min[0], &st);
              MPI_Wait(&rq_max[0], &st);
          }

          MPI_Finalize();
          return 0;
      }
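
    • With a typical MPI installation (e.g. MPICH or Open MPI; the source file name minmax_async.c is hypothetical), both examples can be built and launched with the standard wrappers, e.g. mpicc minmax_async.c -o minmax_async followed by mpirun -np 4 ./minmax_async. Since MAX is 10, the block decomposition only makes sense for num_procs <= 10.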
      