GROMACS - GPU is not detected

GROMACS version: 2021
GROMACS modification: Yes/No
Why does GROMACS not detect my GPU?

Dear all,

I am a new user of molecular dynamics simulation and bioinformatics. Two days ago I installed GROMACS following the instructions on the website. Here are my gmx features:
GROMACS version: 2021
Verified release checksum is 3e06a5865d6ff726fc417dea8d55afd37ac3cbb94c02c54c76d7a881c49c5dd8
Precision: mixed
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support: CUDA
SIMD instructions: AVX2_256
FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /usr/bin/cc GNU 9.3.0
C compiler flags: -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler: /usr/bin/c++ GNU 9.3.0
C++ compiler flags: -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA compiler: /usr/local/cuda/bin/nvcc nvcc: NVIDIA ® Cuda compiler driver;Copyright © 2005-2020 NVIDIA Corporation;Built on Mon_Nov_30_19:08:53_PST_2020;Cuda compilation tools, release 11.2, V11.2.67;Build cuda_11.2.r11.2/compiler.29373293_0
CUDA compiler flags:-std=c++17;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-Wno-deprecated-gpu-targets;-gencode;arch=compute_35,code=compute_35;-gencode;arch=compute_50,code=compute_50;-gencode;arch=compute_52,code=compute_52;-gencode;arch=compute_60,code=compute_60;-gencode;arch=compute_61,code=compute_61;-gencode;arch=compute_70,code=compute_70;-gencode;arch=compute_75,code=compute_75;-gencode;arch=compute_80,code=compute_80;-use_fast_math;-D_FORCE_INLINES;-mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA driver: 11.20
CUDA runtime: N/A

But when I try to run a molecular dynamics simulation using my GPU for the non-bonded interactions, I get this error message: "Cannot run short-ranged nonbonded interactions on a GPU because no GPU is detected."

Can someone help me fix this?

  1. Does your computer detect the GPU?
  2. How did you install GROMACS? Installation guide — GROMACS 2021-beta1-UNCHECKED documentation
    In the part “cmake … -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON” you must add -DGMX_GPU=CUDA, so in your terminal you should write
    cmake … -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=CUDA
    (a full build sequence is sketched after this list)
  3. This is just my advice: don't install the newest version of GROMACS, as it can have undetected errors. Choose a version that has been out for at least six months. In addition, GROMACS 2021 is a beta version, so it is a little risky.
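
For reference, here is a minimal sketch of the complete configure-and-build sequence with GPU support enabled, assuming the 2021 source tarball and the default installation prefix (adjust paths, versions, and job count to your system):

tar xfz gromacs-2021.tar.gz
cd gromacs-2021
mkdir build
cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=CUDA
make
make check
sudo make install
source /usr/local/gromacs/bin/GMXRC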

That is a sign that your CUDA runtime is either not installed correctly or is not compatible with your driver. Is there any other detection error in the log?
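
For example, a quick cross-check of the two versions could look like this (a sketch, not from the original reply; it assumes the driver utilities and the toolkit are installed in their default locations):

nvidia-smi                            # prints the driver version and the highest CUDA version that driver supports
/usr/local/cuda/bin/nvcc --version    # prints the version of the installed CUDA toolkit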

That is useful, but the above binary is already built with CUDA support (see GPU support: CUDA above), so it will not address the issue.

2021 stable was released two weeks ago: GROMACS 2021 official release

Thanks to JakubH and to you for the replies.

I think the best solution for me is to install a real Ubuntu system on my workstation. I am using WSL 2, and I cannot install CUDA and the CUDA toolkit correctly because I would have to join the “Windows Insider Program”, but my system refuses to join it. I hope this helps :).

OK, WSL might explain the issues. I have little experience with WSL + CUDA, but in principle it should work. Here is the NVIDIA documentation: CUDA on WSL :: CUDA Toolkit Documentation

Unlikely, but you could check whether you can run a CUDA samples program (as suggested here). If that works, but GROMACS does not, that’s something we should try to fix.
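
For instance, with the CUDA 11.x toolkit installed under /usr/local/cuda, the bundled deviceQuery sample can be built and run roughly like this (a sketch; the samples path differs between CUDA versions, and newer toolkits ship the samples as a separate GitHub repository):

cp -r /usr/local/cuda/samples ~/cuda-samples     # copy to a writable location
cd ~/cuda-samples/1_Utilities/deviceQuery
make
./deviceQuery                                    # should list the GPU and end with "Result = PASS"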

Let us know if you succeed, it would be good to know if GROMACS works smoothly on WSL2 with CUDA.

Cheers,
Szilárd

Hi,

I have the same issue with CUDA runtime displaying N/A. Any hints on how to correct this?

Thanks,
An

I am also encountering the same problem with CUDA runtime displaying N/A in GROMACS 2021. The features are reported as follows:

GROMACS version:    2021
Verified release checksum is 3e06a5865d6ff726fc417dea8d55afd37ac3cbb94c02c54c76d7a881c49c5dd8
Precision:          mixed
Memory model:       64 bit
MPI library:        MPI
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:        CUDA
SIMD instructions:  AVX2_256
FFT library:        fftw-3.3.8-sse2-avx2-avx2_128
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      disabled
Tracing support:    disabled
C compiler:         /software/apps/spack/a1/linux-centos7-x86_64/gcc-7.4.0/openmpi-3.1.4-tnwn4bppp3ju5hgzzsxatqmca5e2y37z/bin/mpicc GNU 7.4.0
C compiler flags:   -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler:       /software/apps/spack/a1/linux-centos7-x86_64/gcc-7.4.0/openmpi-3.1.4-tnwn4bppp3ju5hgzzsxatqmca5e2y37z/bin/mpic++ GNU 7.4.0
C++ compiler flags: -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA compiler:      /software/apps/spack/a1/linux-centos7-x86_64/gcc-7.4.0/cuda-9.2.88-2xswg4vvkmx2ejx5himkniauy3tslp3k/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2018 NVIDIA Corporation;Built on Wed_Apr_11_23:16:29_CDT_2018;Cuda compilation tools, release 9.2, V9.2.88
CUDA compiler flags:-std=c++14;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_37,code=compute_37;-use_fast_math;;-mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA driver:        9.20
CUDA runtime:       N/A

If I run ldd on my gmx_api binary, I find the following:

libcudart.so.9.2 => /software/apps/spack/a1/linux-centos7-x86_64/gcc-7.4.0/cuda-9.2.88-2xswg4vvkmx2ejx5himkniauy3tslp3k/lib64/libcudart.so.9.2 (0x00002ad40ffa2000)
libcufft.so.9.2 => /software/apps/spack/a1/linux-centos7-x86_64/gcc-7.4.0/cuda-9.2.88-2xswg4vvkmx2ejx5himkniauy3tslp3k/lib64/libcufft.so.9.2 (0x00002ad41020c000)

Moreover, if I check the symbols exported by libcudart.so.9.2 using nm -gDC, both cudaDriverGetVersion and cudaRuntimeGetVersion are present.
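
For reference, the two checks above were along these lines (the binary name and library path are placeholders for the actual ones on my system):

ldd ./gmx_mpi | grep -i cuda      # which libcudart/libcufft the binary is linked against
nm -gDC /path/to/libcudart.so.9.2 | grep -E 'cudaDriverGetVersion|cudaRuntimeGetVersion'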

I compiled a short *.cu script as suggested in a previous post about GROMACS 2020.4:

#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char** argv) {
    int cuda_driver = 0, cuda_runtime = 0;

    // Version of the installed CUDA driver
    if (cudaDriverGetVersion(&cuda_driver) != cudaSuccess) {
        printf("N/A\n");
    } else {
        printf("%d.%d\n", cuda_driver / 1000, cuda_driver % 100);
    }

    // Version of the CUDA runtime library the program is linked against
    if (cudaRuntimeGetVersion(&cuda_runtime) != cudaSuccess) {
        printf("N/A\n");
    } else {
        printf("%d.%d\n", cuda_runtime / 1000, cuda_runtime % 100);
    }

    return 0;
}
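
It was compiled and run with nvcc along these lines (the file name is just a placeholder):

nvcc -o cuda_version_check cuda_version_check.cu
./cuda_version_check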

Running this script on the compilation node gives the following output:

10.20
9.20

Any advice or suggestions to troubleshoot would be appreciated.

Update:

I thought it would be worth mentioning a few additional observations for anyone else encountering similar issues:

  1. I am unable to compile an MPI version of GROMACS 2021 without using mpicc/mpic++. Specifically, the gmxapi code in GROMACS 2021 does not compile without mpicc/mpic++; I tried several versions of gcc/g++ between 7.4 and 9.2 and got a compilation error every time. Building with mpicc/mpic++ and different CUDA versions (e.g. 9.2, 10.1) completes without build errors (as above), but the CUDA driver and runtime are detected incorrectly.

  2. I am able to build a non-MPI, gmxapi-enabled version of GROMACS 2021 (-DGMX_MPI=off) with gcc/g++ instead of mpicc/mpic++. This build correctly identifies the CUDA driver and runtime versions (as shown below; I also used a newer CUDA version here, and a configure sketch follows the listings below).

GROMACS version:    2021
Verified release checksum is 3e06a5865d6ff726fc417dea8d55afd37ac3cbb94c02c54c76d7a881c49c5dd8
Precision:          mixed
Memory model:       64 bit
MPI library:        thread_mpi
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:        CUDA
SIMD instructions:  AVX2_256
FFT library:        fftw-3.3.8-sse2-avx2-avx2_128
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      disabled
Tracing support:    disabled
C compiler:         /software/apps/spack/a1/linux-centos7-x86_64/gcc-4.8.5/gcc-7.4.0-4kdemuwlzds2ofpkkz7yytgi7kyojuvz/bin/gcc GNU 7.4.0
C compiler flags:   -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler:       /software/apps/spack/a1/linux-centos7-x86_64/gcc-4.8.5/gcc-7.4.0-4kdemuwlzds2ofpkkz7yytgi7kyojuvz/bin/g++ GNU 7.4.0
C++ compiler flags: -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA compiler:      /software/apps/spack/a02/linux-centos7-haswell/gcc-7.4.0/cuda-10.2.89-ggwihui3ko3ewjmhsqb5dtghageu3cjo/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2019 NVIDIA Corporation;Built on Wed_Oct_23_19:24:38_PDT_2019;Cuda compilation tools, release 10.2, V10.2.89
CUDA compiler flags:-std=c++14;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_37,code=compute_37;-use_fast_math;;-mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA driver:        10.20
CUDA runtime:       10.20

  3. I tried to compile GROMACS 2020.6 with the same configuration as the failed build above, using mpicc/mpic++. The resulting CUDA driver and runtime output is the same (9.20, N/A). However, with GROMACS 2020.6 the gmxapi code compiles without error using gcc/g++, which gives me a successful MPI build with the CUDA driver and runtime correctly detected (as shown below).

GROMACS version:    2020.6
Verified release checksum is 2f568d8884e039acbc6b68722432516e0628be00c847969b7c905c8b53ef826f
Precision:          single
Memory model:       64 bit
MPI library:        MPI
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:        CUDA
SIMD instructions:  AVX2_256
FFT library:        fftw-3.3.8-sse2-avx2-avx2_128
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      hwloc-1.11.8
Tracing support:    disabled
C compiler:         /software/apps/compilers/gcc/6.4.0/bin/gcc GNU 6.4.0
C compiler flags:   -mavx2 -mfma -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler:       /software/apps/compilers/gcc/6.4.0/bin/g++ GNU 6.4.0
C++ compiler flags: -mavx2 -mfma -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA compiler:      /software/apps/cuda/9.2/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2018 NVIDIA Corporation;Built on Tue_Jun_12_23:07:04_CDT_2018;Cuda compilation tools, release 9.2, V9.2.148
CUDA compiler flags:-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_37,code=compute_37;-use_fast_math;;-mavx2 -mfma -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA driver:        10.20
CUDA runtime:       9.20
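
For completeness, the configure line for the working non-MPI, gmxapi-enabled 2021 build mentioned in point 2 was roughly the following (a sketch; compiler and CUDA toolkit paths are site-specific placeholders):

cmake .. -DGMX_MPI=OFF -DGMXAPI=ON -DGMX_GPU=CUDA \
      -DCMAKE_C_COMPILER=/path/to/gcc-7.4.0/bin/gcc \
      -DCMAKE_CXX_COMPILER=/path/to/gcc-7.4.0/bin/g++ \
      -DCUDA_TOOLKIT_ROOT_DIR=/path/to/cuda-10.2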

I was able to resolve my issue. I tracked it down to faulty MPI linking in the module-based software stack that was used for the compilation.

I first inspected how mpicc and mpic++ wrap gcc/g++ using the -showme flag. I then compiled with gcc/g++ directly and explicitly passed the include and lib directories of the correct MPI installation, together with the -lmpi and -lmpi_cxx flags, to CMake via CMAKE_C_FLAGS and CMAKE_CXX_FLAGS.
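
A sketch of what this looked like (the MPI installation prefix is a placeholder for the path revealed by -showme):

mpicc -showme      # shows how the wrapper invokes gcc and which include/lib paths it adds
mpic++ -showme

cmake .. -DGMX_MPI=ON -DGMX_GPU=CUDA \
      -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ \
      -DCMAKE_C_FLAGS="-I/path/to/openmpi/include -L/path/to/openmpi/lib -lmpi" \
      -DCMAKE_CXX_FLAGS="-I/path/to/openmpi/include -L/path/to/openmpi/lib -lmpi -lmpi_cxx"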

The result is an MPI-enabled compilation with CUDA driver and runtime correctly detected.

GROMACS version:    2021
Verified release checksum is 3e06a5865d6ff726fc417dea8d55afd37ac3cbb94c02c54c76d7a881c49c5dd8
Precision:          mixed
Memory model:       64 bit
MPI library:        MPI
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:        CUDA
SIMD instructions:  AVX2_256
FFT library:        fftw-3.3.8-sse2-avx2-avx2_128
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      disabled
Tracing support:    disabled
C compiler:         /software/apps/spack/a1/linux-centos7-x86_64/gcc-4.8.5/gcc-7.4.0-4kdemuwlzds2ofpkkz7yytgi7kyojuvz/bin/gcc GNU 7.4.0
C compiler flags:   -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler:       /software/apps/spack/a1/linux-centos7-x86_64/gcc-4.8.5/gcc-7.4.0-4kdemuwlzds2ofpkkz7yytgi7kyojuvz/bin/g++ GNU 7.4.0
C++ compiler flags: -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA compiler:      /software/apps/spack/a02/linux-centos7-haswell/gcc-7.4.0/cuda-10.2.89-ggwihui3ko3ewjmhsqb5dtghageu3cjo/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2019 NVIDIA Corporation;Built on Wed_Oct_23_19:24:38_PDT_2019;Cuda compilation tools, release 10.2, V10.2.89
CUDA compiler flags:-std=c++14;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_37,code=compute_37;-use_fast_math;;-mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA driver:        10.20
CUDA runtime:       10.20