How to use GROMACS wrapper with mpirun

GROMACS version: GROMACS 2020.4 (CPU version, MPI-enabled)
GROMACS modification: No

I was trying to use GROMACS wrapper on a Bridges-2 supercomputer node. On the login node, where mpirun is not required, GROMACS wrapper worked just fine; the only difference was that the GROMACS commands were decorated with the suffix _mpi (for example, instead of gromacs.editconf I had to call gromacs.editconf_mpi).
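To illustrate, this is roughly what a call looked like on the login node (the file names below are just placeholders, and the keyword arguments simply mirror the usual editconf flags):

import gromacs  # GROMACS wrapper; it discovers the gmx_mpi driver at import time

# Worked on the login node without mpirun; the _mpi suffix comes from
# the driver binary being gmx_mpi. conf.gro / boxed.gro are placeholders.
gromacs.editconf_mpi(f="conf.gro", o="boxed.gro", c=True, d=1.0, bt="cubic")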

However, on an interactive node, where mpirun -np xx is required to launch GROMACS commands, I got the following error when importing gromacs in a Python console:

[r488.ib.bridges2.psc.edu:62197] OPAL ERROR: Not initialized in file pmix2x_client.c at line 112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:
  version 16.05 or later: you can use SLURM's PMIx support. This
  requires that you configure and build SLURM --with-pmix.
  Versions earlier than 16.05: you must use either SLURM's PMI-1 or
  PMI-2 support. SLURM builds PMI-1 by default, or you can manually
  install PMI-2. You must then build Open MPI using --with-pmi pointing
  to the SLURM PMI library location.
Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[r488.ib.bridges2.psc.edu:62197] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

This is the same error I get if I run gmx_mpi directly instead of mpirun -np xx gmx_mpi on the interactive node. I guess the problem is that GROMACS wrapper is not aware of the MPI launcher I am using, but I'm not sure how to deal with it.

In my case, I was trying to use the commands of the MPI-enabled (CPU version) GROMACS 2020.4 via GROMACS wrapper. To enable mpirun, I had to execute module load openmpi/3.1.6-gcc10.2.0. Is it possible to use GROMACS wrapper in general (not just MDrunner) with mpirun, or did I miss something in the documentation? Any shared experience would be highly appreciated!
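For reference, the only MPI-aware mechanism I have found so far is subclassing gromacs.run.MDrunner. The sketch below is just my reading of the MDrunner documentation, so the mdrun/mpiexec attributes and the ncores keyword may need adjusting, and topol.tpr is a placeholder input:

import gromacs.run

class MDrunnerMPI(gromacs.run.MDrunner):
    """Launch mdrun through mpirun (sketch based on the MDrunner docs)."""
    mdrun = "gmx_mpi"    # MPI-enabled GROMACS 2020.4 driver; may need the exact mdrun command name
    mpiexec = "mpirun"   # provided by: module load openmpi/3.1.6-gcc10.2.0

runner = MDrunnerMPI(s="topol.tpr", deffnm="md")  # kwargs are passed on to mdrun
runner.run(ncores=16)  # should translate into something like: mpirun -n 16 ...

But that only covers mdrun, which is exactly why I am asking whether the other wrapped commands (editconf, grompp, ...) can be launched through mpirun as well.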
