MPI_ABORT causes Open MPI to kill all MPI processes in gmx_mpi command

GROMACS version: 2020.4
GROMACS modification: No
Please guide me on why I am facing this error:

MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

Please help me resolve this issue. I am new to using GROMACS on GPU systems.

I am running MD simulations of protein-ligand complex systems after docking.

There is a clear error printed by grompp that you need to solve. You’re calling something “ligand” somewhere, which is not a valid [moleculetype] name.
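
In other words, every name listed in the [ molecules ] section of your .top file must exactly match a [ moleculetype ] declared (or #included) above it. A minimal sketch of what that looks like, where the file names, molecule names, and counts are only placeholders for whatever your own setup uses:

    ; topol.top (fragment) -- file and molecule names are placeholders
    #include "amber99sb.ff/forcefield.itp"
    #include "protein.itp"        ; defines a [ moleculetype ] named Protein_chain_A
    #include "ligand.itp"         ; defines a [ moleculetype ] named LIG
    #include "amber99sb.ff/tip3p.itp"

    [ system ]
    Protein-ligand complex in water

    [ molecules ]
    ; molecule name    count
    Protein_chain_A    1
    LIG                1          ; must match the [ moleculetype ] name in ligand.itp
    SOL                10000

If the ligand .itp declares its [ moleculetype ] as "LIG" but the [ molecules ] section says "ligand", grompp will report exactly the kind of error quoted above.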

Did you find a solution to the error you mentioned (regarding MPI_ABORT)? I am facing the same issue.
Looking forward to your reply.

Thanks!

Hello @jalemkul, I am also facing the same problem regarding MPI_ABORT.
Looking forward to your help and suggestions.


@Puneet_IITR your issue is unrelated to this post, so you should not ask here. MPI_ABORT is a generic failure message from MPI; the actual GROMACS error is clearly shown. Based on what is shown, you are probably out of disk space and mdrun cannot write the checkpoint file, so the run exits, triggering the MPI failure.
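
If that is the cause, something like the following will confirm it (the path is only a placeholder for wherever mdrun writes its output):

    df -h /path/to/run_directory     # free space on the filesystem holding the run
    du -sh /path/to/run_directory    # how much this run has written so far

Once space has been freed, the run can usually be continued from the last good checkpoint with gmx mdrun -cpi state.cpt plus your usual options.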