MPI_Finalize() does not end any processes


Solution 1

This is just undefined behavior.

The number of processes running after this routine is called is undefined; it is best not to perform much more than a return rc after calling MPI_Finalize.

http://www.mpich.org/static/docs/v3.1/www3/MPI_Finalize.html
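In other words, treat MPI_Finalize() as the last meaningful thing the program does. A minimal sketch of that advice (the structure only, not code from the linked page):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Hello from rank %d\n", rank);  // all MPI work happens here

    MPI_Finalize();
    return 0;  // do essentially nothing after MPI_Finalize
}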

Solution 2

The MPI standard only requires that rank 0 return from MPI_FINALIZE. I won't copy the entire text here because it's rather lengthy, but you can find it in version 3.0 of the standard (the latest for a few more days) in Chapter 8, Section 8.7 (Startup), on pages 359-361. Here are the most relevant parts:

Although it is not required that all processes return from MPI_FINALIZE, it is required that at least process 0 in MPI_COMM_WORLD return, so that users can know that the MPI portion of the computation is over. In addition, in a POSIX environment, users may desire to supply an exit code for each process that returns from MPI_FINALIZE.

There's even an example that's trying to do exactly what you said:

Example 8.10 The following illustrates the use of requiring that at least one process return and that it be known that process 0 is one of the processes that return. One wants code like the following to work no matter how many processes return.

...  
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
...
MPI_Finalize();
if (myrank == 0) {
    resultfile = fopen("outfile","w");
    dump_results(resultfile);
    fclose(resultfile);
}
exit(0);

The MPI standard doesn't say anything else about the behavior of an application after calling MPI_FINALIZE. All this function is required to do is clean up internal MPI state, complete outstanding communication operations, and so on. While it's certainly possible (and allowed) for MPI to kill the other ranks of the application after a call to MPI_FINALIZE, in practice that is almost never how it is done. There's probably a counterexample, but I'm not aware of one.
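For completeness, here is a sketch of what a compilable version of that Example 8.10 pattern might look like; dump_results() and the computation in the middle are hypothetical placeholders, not part of the standard's text:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

// Hypothetical stand-in for whatever the application actually writes out.
static void dump_results(FILE *f) {
    fprintf(f, "results go here\n");
}

int main(int argc, char *argv[]) {
    int myrank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    // ... computation and communication ...

    MPI_Finalize();

    // Only rank 0 is guaranteed to return from MPI_Finalize,
    // so only rank 0 performs the post-finalize I/O.
    if (myrank == 0) {
        FILE *resultfile = fopen("outfile", "w");
        if (resultfile != NULL) {
            dump_results(resultfile);
            fclose(resultfile);
        }
    }
    exit(0);
}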

Solution 3

When I started with MPI, I had the same misunderstanding about MPI_Init and MPI_Finalize: I thought the code between these calls ran in parallel and the code outside ran serially. Eventually I saw the answer below and figured out how they actually work.

J Teller's answer: https://stackoverflow.com/a/2290951/893863

#include <mpi.h>
#include <stdio.h>

// PreParallelWork, ParallelWork, and PostParallelWork are the
// application's own routines, defined elsewhere.

int main(int argc, char *argv[]) {
    int numprocs, myid;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) { // Do the first serial part on a single MPI process
        printf("Performing serial computation on cpu %d\n", myid);
        PreParallelWork();
    }

    ParallelWork();  // Every MPI process (rank) runs the parallel work

    if (myid == 0) { // Do the final serial part on a single MPI process
        printf("Performing the final serial computation on cpu %d\n", myid);
        PostParallelWork();
    }

    MPI_Finalize();
    return 0;
}
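One way to convince yourself that each MPI rank is a separate operating-system process (not a thread), and that every rank keeps executing its own copy of main() after MPI_Finalize(), is to print the process ID. A minimal sketch, assuming a POSIX system for getpid():

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>  // getpid(), POSIX only

int main(int argc, char *argv[]) {
    int myid;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    // Each rank reports its own PID: they are distinct processes.
    printf("Rank %d is OS process %ld\n", myid, (long)getpid());

    MPI_Finalize();

    // Every process reaches this point; MPI_Finalize does not kill anyone.
    printf("Rank %d (pid %ld) is still running after MPI_Finalize\n",
           myid, (long)getpid());
    return 0;
}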

Comments

  • MikkelSecher almost 2 years

    I'm messing around with Open MPI, and I have a weird bug.

    It seems that even after MPI_Finalize(), each of the threads keeps running. I followed a guide for a simple Hello World program, and it looks like this:

    #include <mpi.h>
    #include <stdio.h>
    
    int main(int argc, char** argv) {
        // Initialize the MPI environment
        MPI_Init(NULL, NULL);

        // Get the number of processes
        int world_size;
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);

        // Get the rank of the process
        int world_rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        // Get the name of the processor
        char processor_name[MPI_MAX_PROCESSOR_NAME];
        int name_len;
        MPI_Get_processor_name(processor_name, &name_len);

        // Print off a hello world message
        printf("Hello world from processor %s, rank %d"
               " out of %d processors\n",
               processor_name, world_rank, world_size);

        // Finalize the MPI environment.
        MPI_Finalize();

        printf("This is after finalize");

        return 0;
    }
    

    Notice the last printf()... This should only be printed once, since the parallel part is finalized, right?!

    However, if I run it with, for example, 6 processes, the output is:

    mpirun -np 6 ./hello_world
    
    Hello world from processor ubuntu, rank 2 out of 6 processors
    Hello world from processor ubuntu, rank 1 out of 6 processors
    Hello world from processor ubuntu, rank 3 out of 6 processors
    Hello world from processor ubuntu, rank 0 out of 6 processors
    Hello world from processor ubuntu, rank 4 out of 6 processors
    Hello world from processor ubuntu, rank 5 out of 6 processors
    This is after finalize...
    This is after finalize...
    This is after finalize...
    This is after finalize...
    This is after finalize...
    This is after finalize...
    

    Am I misunderstanding how MPI works? Should each thread/process not be stopped by the finalize?