I have also been trying to install GROMACS using WSL2. I followed the instructions you present in your comment, but GROMACS cannot find my GPU. I have tested the connection to the GPU with the CUDA tests and everything seems fine.
I would also like to know whether there is simply no GPU support here, or whether I have to do something extra because I am working with a virtual machine.
Here is my log:
user@user:/mnt/c/Users/user/Downloads/gromacs-2020.4/build$ CC=gcc CXX=g++ cmake … -DGMX_OPENMP=ON -DGMX_GPU=ON -DGPU_DEPLOYMENT_KIT_ROOT_DIR=/usr/local/cuda-11.0/bin/ -DGMX_BUILD_OWN_FFTW=ON -DGMX_PREFER_STATIC_LIBS=ON -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=:/usr/local/
– The C compiler identification is GNU 7.5.0
– The CXX compiler identification is GNU 7.5.0
– Check for working C compiler: /usr/bin/gcc
– Check for working C compiler: /usr/bin/gcc – works
– Detecting C compiler ABI info
– Detecting C compiler ABI info - done
– Detecting C compile features
– Detecting C compile features - done
– Check for working CXX compiler: /usr/bin/g++
– Check for working CXX compiler: /usr/bin/g++ – works
– Detecting CXX compiler ABI info
– Detecting CXX compiler ABI info - done
– Detecting CXX compile features
– Detecting CXX compile features - done
– Looking for NVIDIA GPUs present in the system
– Could not detect NVIDIA GPUs
– Looking for pthread.h
– Looking for pthread.h - found
– Looking for pthread_create
– Looking for pthread_create - not found
– Looking for pthread_create in pthreads
– Looking for pthread_create in pthreads - not found
– Looking for pthread_create in pthread
– Looking for pthread_create in pthread - found
– Found Threads: TRUE
– Found CUDA: /usr/local/cuda (found suitable version “11.0”, minimum required is “9.0”)
– Found OpenMP_C: -fopenmp (found version “4.5”)
– Found OpenMP_CXX: -fopenmp (found version “4.5”)
– Found OpenMP: TRUE (found version “4.5”)
– Performing Test CFLAGS_EXCESS_PREC
– Performing Test CFLAGS_EXCESS_PREC - Success
– Performing Test CFLAGS_COPT
– Performing Test CFLAGS_COPT - Success
– Performing Test CFLAGS_NOINLINE
– Performing Test CFLAGS_NOINLINE - Success
– Performing Test CXXFLAGS_EXCESS_PREC
– Performing Test CXXFLAGS_EXCESS_PREC - Success
– Performing Test CXXFLAGS_COPT
– Performing Test CXXFLAGS_COPT - Success
– Performing Test CXXFLAGS_NOINLINE
– Performing Test CXXFLAGS_NOINLINE - Success
– Looking for include file unistd.h
– Looking for include file unistd.h - found
– Looking for include file pwd.h
– Looking for include file pwd.h - found
– Looking for include file dirent.h
– Looking for include file dirent.h - found
– Looking for include file time.h
– Looking for include file time.h - found
– Looking for include file sys/time.h
– Looking for include file sys/time.h - found
– Looking for include file io.h
– Looking for include file io.h - not found
– Looking for include file sched.h
– Looking for include file sched.h - found
– Looking for include file xmmintrin.h
– Looking for include file xmmintrin.h - found
– Looking for gettimeofday
– Looking for gettimeofday - found
– Looking for sysconf
– Looking for sysconf - found
– Looking for nice
– Looking for nice - found
– Looking for fsync
– Looking for fsync - found
– Looking for _fileno
– Looking for _fileno - not found
– Looking for fileno
– Looking for fileno - found
– Looking for _commit
– Looking for _commit - not found
– Looking for sigaction
– Looking for sigaction - found
– Performing Test HAVE_BUILTIN_CLZ
– Performing Test HAVE_BUILTIN_CLZ - Success
– Performing Test HAVE_BUILTIN_CLZLL
– Performing Test HAVE_BUILTIN_CLZLL - Success
– Looking for clock_gettime in rt
– Looking for clock_gettime in rt - found
– Looking for feenableexcept in m
– Looking for feenableexcept in m - found
– Looking for fedisableexcept in m
– Looking for fedisableexcept in m - found
– Checking for sched.h GNU affinity API
– Performing Test sched_affinity_compile
– Performing Test sched_affinity_compile - Success
– Looking for include file mm_malloc.h
– Looking for include file mm_malloc.h - found
– Looking for include file malloc.h
– Looking for include file malloc.h - found
– Checking for _mm_malloc()
– Checking for _mm_malloc() - supported
– Looking for posix_memalign
– Looking for posix_memalign - found
– Looking for memalign
– Looking for memalign - not found
– Check if the system is big endian
– Searching 16 bit integer
– Looking for sys/types.h
– Looking for sys/types.h - found
– Looking for stdint.h
– Looking for stdint.h - found
– Looking for stddef.h
– Looking for stddef.h - found
– Check size of unsigned short
– Check size of unsigned short - done
– Using unsigned short
– Check if the system is big endian - little endian
Searching for static libraries requested, so the GROMACS libraries will also be static (BUILD_SHARED_LIBS=OFF)
– Looking for HWLOC
– Looking for hwloc – hwloc.h not found
– Looking for hwloc – lib hwloc not found
– Could NOT find HWLOC (missing: HWLOC_LIBRARIES HWLOC_INCLUDE_DIRS) (Required is at least version “1.5”)
– Looking for C++ include pthread.h
– Looking for C++ include pthread.h - found
– Atomic operations found
– Performing Test PTHREAD_SETAFFINITY
– Performing Test PTHREAD_SETAFFINITY - Success
– Adding work-around for issue compiling CUDA code with glibc 2.23 string.h
– Check for working NVCC/C++ compiler combination with nvcc ‘/usr/local/cuda/bin/nvcc’
– Check for working NVCC/C++ compiler combination - works
– Checking for GCC x86 inline asm
– Checking for GCC x86 inline asm - supported
– Detected build CPU vendor - AMD
– Detected build CPU brand - AMD Ryzen 5 2400G with Radeon Vega Graphics
– Detected build CPU family - 23
– Detected build CPU model - 17
– Detected build CPU stepping - 0
– Detected build CPU features - aes amd apic avx avx2 clfsh cmov cx8 cx16 f16c fma htt lahf misalignsse mmx msr pclmuldq pdpe1gb popcnt pse rdrnd rdtscp sha sse2 sse3 sse4a sse4.1 sse4.2 ssse3
– Enabling RDTSCP support - detected on the build host
– Checking for 64-bit off_t
– Checking for 64-bit off_t - present
– Checking for fseeko/ftello
– Checking for fseeko/ftello - present
– Checking for SIGUSR1
– Checking for SIGUSR1 - found
– Checking for pipe support
– Checking for system XDR support
– Checking for system XDR support - present
– Detecting best SIMD instructions for this CPU
– Detected best SIMD instructions for this CPU - AVX2_128
– Performing Test C_mavx2_mfma_FLAG_ACCEPTED
– Performing Test C_mavx2_mfma_FLAG_ACCEPTED - Success
– Performing Test C_mavx2_mfma_COMPILE_WORKS
– Performing Test C_mavx2_mfma_COMPILE_WORKS - Success
– Performing Test CXX_mavx2_mfma_FLAG_ACCEPTED
– Performing Test CXX_mavx2_mfma_FLAG_ACCEPTED - Success
– Performing Test CXX_mavx2_mfma_COMPILE_WORKS
– Performing Test CXX_mavx2_mfma_COMPILE_WORKS - Success
– Enabling 128-bit AVX2 SIMD instructions using CXX flags: -mavx2 -mfma
– Detecting flags to enable runtime detection of AVX-512 units on newer CPUs
– Performing Test C_xCORE_AVX512_qopt_zmm_usage_high_FLAG_ACCEPTED
– Performing Test C_xCORE_AVX512_qopt_zmm_usage_high_FLAG_ACCEPTED - Failed
– Performing Test C_xCORE_AVX512_FLAG_ACCEPTED
– Performing Test C_xCORE_AVX512_FLAG_ACCEPTED - Failed
– Performing Test C_mavx512f_mfma_FLAG_ACCEPTED
– Performing Test C_mavx512f_mfma_FLAG_ACCEPTED - Success
– Performing Test C_mavx512f_mfma_COMPILE_WORKS
– Performing Test C_mavx512f_mfma_COMPILE_WORKS - Success
– Performing Test CXX_xCORE_AVX512_qopt_zmm_usage_high_FLAG_ACCEPTED
– Performing Test CXX_xCORE_AVX512_qopt_zmm_usage_high_FLAG_ACCEPTED - Failed
– Performing Test CXX_xCORE_AVX512_FLAG_ACCEPTED
– Performing Test CXX_xCORE_AVX512_FLAG_ACCEPTED - Failed
– Performing Test CXX_mavx512f_mfma_FLAG_ACCEPTED
– Performing Test CXX_mavx512f_mfma_FLAG_ACCEPTED - Success
– Performing Test CXX_mavx512f_mfma_COMPILE_WORKS
– Performing Test CXX_mavx512f_mfma_COMPILE_WORKS - Success
– Detecting flags to enable runtime detection of AVX-512 units on newer CPUs - -mavx512f -mfma
– Performing Test _Wno_unused_command_line_argument_FLAG_ACCEPTED
– Performing Test _Wno_unused_command_line_argument_FLAG_ACCEPTED - Success
– Performing Test _callconv___vectorcall
– Performing Test _callconv___vectorcall - Failed
– Performing Test _callconv___regcall
– Performing Test _callconv___regcall - Failed
– Performing Test callconv
– Performing Test callconv - Success
– The GROMACS-managed build of FFTW 3 will configure with the following optimizations: --enable-sse2;–enable-avx;–enable-avx2
– Using external FFT library - FFTW3 build managed by GROMACS
– A library with BLAS API not found. Please specify library location.
– Using GROMACS built-in BLAS.
– LAPACK requires BLAS
– A library with LAPACK API not found. Please specify library location.
– Using GROMACS built-in LAPACK.
– Checking for dlopen
– Performing Test HAVE_DLOPEN
– Performing Test HAVE_DLOPEN - Success
– Checking for dlopen - found
– GROMACS only supports plugins in a build that uses shared libraries, which can be disabled for various reasons. BUILD_SHARED_LIBS=on and a toolchain that supports dynamic linking is required. (Hint: GMX_PREFER_STATIC_LIBS and GMX_BUILD_MDRUN_ONLY can influence the default BUILD_SHARED_LIBS, so if you need plugins, reconsider those choices.)
– Not using dynamic plugins (e.g VMD-supported file formats)
– Using default binary suffix: “”
– Using default library suffix: “”
– Could not convert sample image, ImageMagick convert can not be used. A possible way to fix it can be found here: https://alexvanderbist.com/2018/fixing-imagick-error-unauthorized
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named ‘pygments’
– Could NOT find Sphinx (missing: SPHINX_EXECUTABLE pygments) (Required is at least version “1.6.1”)
– Performing Test HAVE_NO_DEPRECATED_COPY
– Performing Test HAVE_NO_DEPRECATED_COPY - Success
– Performing Test HAS_NO_STRINGOP_TRUNCATION
– Performing Test HAS_NO_STRINGOP_TRUNCATION - Success
– Performing Test HAS_NO_UNUSED_MEMBER_FUNCTION
– Performing Test HAS_NO_UNUSED_MEMBER_FUNCTION - Success
– Performing Test HAS_NO_REDUNDANT_MOVE
– Performing Test HAS_NO_REDUNDANT_MOVE - Success
– Performing Test HAS_NO_UNUSED
– Performing Test HAS_NO_UNUSED - Success
– Performing Test HAS_NO_UNUSED_PARAMETER
– Performing Test HAS_NO_UNUSED_PARAMETER - Success
– Performing Test HAS_NO_MISSING_DECLARATIONS
– Performing Test HAS_NO_MISSING_DECLARATIONS - Success
– Performing Test HAS_NO_NULL_CONVERSIONS
– Performing Test HAS_NO_NULL_CONVERSIONS - Success
– Performing Test HAS_DECL_IN_SOURCE
– Performing Test HAS_DECL_IN_SOURCE - Failed
– Performing Test HAS_NO_CLASS_MEMACCESS
– Performing Test HAS_NO_CLASS_MEMACCESS - Success
– Check if the system is big endian
– Searching 16 bit integer
– Using unsigned short
– Check if the system is big endian - little endian
– Looking for inttypes.h
– Looking for inttypes.h - found
– Configuring done
CMake doesn't need to find your GPU, but of course mdrun does need to detect it and have access to it.
The build issue is that compute_30 is no longer supported by CUDA 11. You need to either use CUDA 10, remove this compiler option, or use the latest GROMACS 2020 or 2021 release, where this option is removed when CUDA 11 is used.
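For example, a configuration along these lines should avoid the compute_30 problem (only a sketch, not tested on your setup; the paths and the SM value below are assumptions and must be adapted to your machine):

# A sketch, assuming GROMACS 2020.x with the CUDA 11 toolkit under /usr/local/cuda-11.0.
# GMX_CUDA_TARGET_SM=75 is only an example (a compute-capability-7.5 card) and must be
# set to your own GPU's compute capability. With the latest 2020/2021 releases it can be
# omitted, since the compute_30 target is dropped automatically when CUDA 11 is used.
CC=gcc CXX=g++ cmake .. \
    -DGMX_GPU=ON \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-11.0 \
    -DGMX_CUDA_TARGET_SM=75 \
    -DGMX_BUILD_OWN_FFTW=ON \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs
# (for GROMACS 2021, use -DGMX_GPU=CUDA instead of -DGMX_GPU=ON)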
Note: In some cases, CMake cannot detect the installed NVIDIA GPU. In that case, please follow these steps (this may be something that needs to be fixed by the GROMACS developers):
7-2-1. Find the 'nvidia-smi' program on your computer.
Usually it will be in C:\Program Files\NVIDIA Corporation\NVSMI
(or, in my case with driver version 465.21, C:\Windows\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_3621da861144492b\nvidia-smi.exe)
7-2-2. Using the Control Panel, add the above location to the PATH variable.
7-2-3. Open the file gromacs-(version)\cmake\gmxDetectGpu.cmake
7-2-4. In line 92, change exec_program( ${_nvidia_smi_path} ARGS -L to exec_program( nvidia-smi ARGS -L (see the sketch after these steps).
7-2-5. Run the configuration again.
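If you prefer to script the edit from step 7-2-4, a sed one-liner like the one below should do it (a sketch only; it assumes the exec_program call appears as quoted above and keeps a backup of the original file):

# Optional: apply the step 7-2-4 edit with sed instead of a text editor.
# Run from the top of the GROMACS source tree; the original is kept as gmxDetectGpu.cmake.bak.
sed -i.bak 's/exec_program( *${_nvidia_smi_path}/exec_program(nvidia-smi/' cmake/gmxDetectGpu.cmake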
No, there is nothing wrong with the CMake output; the GPU detection is only there to notify people in case they have hardware in their machine that they could use but are not doing a GPU build. That a GPU is not found is just a status message and does not influence the outcome of the configure/compilation process.
Hi all,
I've been running GROMACS in WSL2 for just over a year now. Everything was fine until I decided to reinstall with GPU support added. Even though some NVIDIA Docker tools work (for example, service docker start / docker run hello-world), I can't get GROMACS to see the GPU, nor does nvidia-smi report one. I've trawled the Internet for suggestions and, as is typical, I've tried several 'solutions' with no success. Here is a bit about my setup; if someone could shed some light in the form of a bulletproof set of steps to get this working, while keeping all installations basic and at the system level (i.e., no conda or Docker environments), that would be great.
GROMACS: 2021.2
Windows 10 (Development Build, as required for WSL to see the NVIDIA Windows drivers): 21382.1
Kernel: Linux DESKTOP-L2JG9M2 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
GPU: NVIDIA Quadro RTX 4000, driver version 461.55
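For anyone debugging the same thing, a few quick checks inside the WSL2 shell can narrow down where the GPU passthrough breaks (a sketch; the paths below are the usual locations with current WSL2 GPU drivers and are assumptions that may not match every setup):

# Quick WSL2 GPU-passthrough checks (assumes a recent WSL2 kernel and a
# Windows NVIDIA driver with WSL support; exact paths can vary).
uname -r                 # should end in -microsoft-standard-WSL2
ls -l /dev/dxg           # the GPU paravirtualization device WSL2 exposes to Linux
ls /usr/lib/wsl/lib      # should contain libcuda.so* provided by the Windows driver
nvidia-smi               # should list the Quadro RTX 4000 if passthrough is working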