GROMACS version: 2021
GROMACS modification: No
Hello everyone,
I have started using GROMACS with CUDA and have simulated several protein-ligand complexes. The mdrun command line in my job script is as follows:
mpirun $GROMACS_DIR/bin/gmx_mpi mdrun -nb gpu -ntomp 4 -s md_0_100.tpr -deffnm md_0_100
And below are some sections of the log file:
Running on 1 node with total 80 cores, 80 logical cores, 1 compatible GPU
Hardware detected on host barbun144.yonetim (the node of MPI rank 0):
CPU info:
Vendor: Intel
Brand: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Family: 6 Model: 85 Stepping: 4
Features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl avx512secondFMA clfsh cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
Number of AVX-512 FMA units: 2
Hardware topology: Only logical processor count
GPU info:
Number of GPUs detected: 1
#0: NVIDIA Tesla P100-PCIE-16GB, compute cap.: 6.0, ECC: yes, stat: compatible
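On top of that, I am wondering whether offloading more of the work to the GPU would help. If I understand the GROMACS 2021 options correctly, something along these lines should be valid for a single MPI rank on the single P100 (the -np 1 and -ntomp 10 values are just guesses for this node, and -update gpu only works when the system meets its restrictions):
mpirun -np 1 $GROMACS_DIR/bin/gmx_mpi mdrun -nb gpu -pme gpu -bonded gpu -update gpu -ntomp 10 -pin on -s md_0_100.tpr -deffnm md_0_100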
I went through the GROMACS manual and tried several parameters with different values, but this combination was the most efficient one I could find; at least, it was the first time I did not get a warning saying that the way I run gmx may not be efficient.
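To compare such settings I am planning to keep doing short benchmark runs along these lines (a rough sketch; the step count and output names are arbitrary, -resethway resets the cycle counters halfway through so the timings are not skewed by initialization, and -noconfout skips writing the final coordinates):
for nt in 4 8 10 20; do
mpirun -np 1 $GROMACS_DIR/bin/gmx_mpi mdrun -nb gpu -ntomp $nt -s md_0_100.tpr -deffnm bench_ntomp$nt -nsteps 20000 -resethway -noconfout
done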
Nevertheless, do you think this is the best combination of options I can use with this CUDA-enabled build of this version?
Thank you in advance,
Best regards,
Lalehan