Running multinode with GPU

GROMACS version: 5.1.4
GROMACS modification: Yes/No
Dear all,
I have a cluster with six nodes; each node has one K40 GPU and two 12-core processors, so in total I have 144 cores. Can anyone tell me how to run on such a configuration?

When I run gmx_mpi mdrun -v -deffnm pabc, it gives the following error:

Fatal error:
Your choice of number of MPI ranks and amount of resources results in using 1 OpenMP threads per rank, which is most likely inefficient. The optimum is usually between 2 and 6 threads per rank. If you want to run with this setup, specify the -ntomp option. But we suggest to change the number of MPI ranks.

The corresponding PBS section is

#PBS -l select=6:ncpus=1:accelerator=True:accelerator_model="Tesla_K40s"

Here is the full log

Command line:
gmx_mpi mdrun -v -deffnm peptide2_GO

Number of logical cores detected (24) does not match the number reported by OpenMP (1).
Consider setting the launch configuration manually!

Running on 2 nodes with total 24 cores, 48 logical cores, 2 compatible GPUs
Cores per node: 12
Logical cores per node: 24
Compatible GPUs per node: 1
All nodes have identical type(s) of GPUs
Hardware detected on host nid00078 (the node of MPI rank 0):
CPU info:
Vendor: GenuineIntel
Brand: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
SIMD instructions most likely to fit this hardware: AVX_256
SIMD instructions selected at GROMACS compile time: AVX_256
GPU info:
Number of GPUs detected: 1
#0: NVIDIA Tesla K40s, compute cap.: 3.5, ECC: yes, stat: compatible

Reading file peptide2_GO.tpr, VERSION 5.1.5 (single precision)
Changing nstlist from 20 to 40, rlist from 1.224 to 1.283

The number of OpenMP threads was set by environment variable OMP_NUM_THREADS to 1
Using 2 MPI processes
Using 1 OpenMP thread per MPI process

On host nid00078 1 compatible GPU is present, with ID 0
On host nid00078 1 GPU auto-selected for this run.
Mapping of GPU ID to the 1 PP rank in this node: 0


Program gmx mdrun, VERSION 5.1.4
Source code file: /mnt/lustre/new_apps/cle7/gromacs/5.1.4/gpu/10.1/tar/gromacs-5.1.4/src/programs/mdrun/resource-division.cpp, line: 529

Fatal error:
Your choice of number of MPI ranks and amount of resources results in using 1 OpenMP threads per rank, which is most likely inefficient. The optimum is usually between 2 and 6 threads per rank. If you want to run with this setup, specify the -ntomp option. But we suggest to change the number of MPI ranks.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

Halting parallel program gmx mdrun on rank 0 out of 2
Rank 0 [Sat Oct 10 13:43:28 2020] [c0-0c1s3n2] application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0


Program gmx mdrun, VERSION 5.1.4
Source code file: /mnt/lustre/new_apps/cle7/gromacs/5.1.4/gpu/10.1/tar/gromacs-5.1.4/src/programs/mdrun/resource-division.cpp, line: 529

Fatal error:
Your choice of number of MPI ranks and amount of resources results in using 1 OpenMP threads per rank, which is most likely inefficient. The optimum is usually between 2 and 6 threads per rank. If you want to run with this setup, specify the -ntomp option. But we suggest to change the number of MPI ranks.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

Halting parallel program gmx mdrun on rank 1 out of 2
Rank 1 [Sat Oct 10 13:43:28 2020] [c0-0c1s3n3] application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
_pmiu_daemon(SIGCHLD): [NID 00078] [c0-0c1s3n2] [Sat Oct 10 13:43:28 2020] PE RANK 0 exit signal Aborted
[NID 00078] 2020-10-10 13:43:28 Apid 580545: initiated application termination

First and foremost, I suggest updating to a more recent release. 5.1 is more than five years old and is no longer supported.

To your question: I suggest trying 2-4 ranks per node, and given the type of hardware and its CPU–GPU balance, you should try offloading only the nonbonded interactions as well as nonbonded + bonded interactions, and compare.
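As a minimal sketch, assuming a Cray-style launcher (aprun, which your log suggests) and a recent GROMACS release that supports -bonded gpu; the exact rank/thread split is illustrative and worth benchmarking. Note also that your PBS line requests ncpus=1, which together with OMP_NUM_THREADS=1 being set in the environment is likely what triggered the fatal error, so the sketch requests all cores per node instead (adjust ncpus to your site's convention):

#!/bin/bash
#PBS -l select=6:ncpus=24:accelerator=True:accelerator_model="Tesla_K40s"

cd $PBS_O_WORKDIR

# 6 nodes x 4 MPI ranks per node = 24 ranks, 3 OpenMP threads per rank,
# filling the 12 physical cores of each node; the 4 PP ranks on a node
# share its single K40.
export OMP_NUM_THREADS=3
aprun -n 24 -N 4 -d 3 gmx_mpi mdrun -v -deffnm pabc -ntomp 3 -nb gpu -bonded gpu

Dropping -bonded gpu gives the nonbonded-only variant to compare against. Keep in mind that -bonded gpu is only available in GROMACS 2019 and later, which is another reason to upgrade.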

I recommend checking the documentation, in particular the "Getting good performance from mdrun" page of the user guide (https://manual.gromacs.org/current/user-guide/mdrun-performance.html), and specifically the examples there for running mdrun on more than one node should help.