Successful OpenCL installation returns disabled GPU support

GROMACS version: 2022.3
GROMACS modification: No
Hello everyone! I’m having trouble configuring GPU support in the Windows Subsystem for Linux (Ubuntu 20.04), specifically OpenCL support for an AMD GPU. The strange thing is that there are seemingly no issues at this point: the build succeeds, to the point that the OpenCL files are installed by 'make install':

– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/gpu_utils/device_utils.clh
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/gpu_utils/vectype_ops.clh
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/pbcutil/ishift.h
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/nbnxm/opencl/nbnxm_ocl_consts.h
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/nbnxm/opencl/nbnxm_ocl_kernel.clh
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/nbnxm/opencl/nbnxm_ocl_kernel_pruneonly.clh
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/nbnxm/opencl/nbnxm_ocl_kernel_utils.clh
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/nbnxm/opencl/nbnxm_ocl_kernels.cl
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/nbnxm/opencl/nbnxm_ocl_kernels.clh
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/nbnxm/opencl/nbnxm_ocl_kernels_fastgen.clh
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/nbnxm/opencl/nbnxm_ocl_kernels_fastgen_add_twincut.clh
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/ewald/pme_gather.clh
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/ewald/pme_gpu_calculate_splines.clh
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/ewald/pme_gpu_types.h
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/ewald/pme_program.cl
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/ewald/pme_solve.clh
– Up-to-date: /usr/local/share/gromacs/opencl/gromacs/ewald/pme_spread.clh

However, when I call gmx --version afterwards, it returns this:
GROMACS version: 2022.3
Precision: mixed
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support: disabled
SIMD instructions: AVX2_128
CPU FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128
GPU FFT library: none
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /usr/bin/cc GNU 9.3.0
C compiler flags: -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler: /usr/bin/c++ GNU 9.3.0
C++ compiler flags: -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp

Has anyone dealt with this issue?

Hello!

The output you’re observing is quite unusual. Could you please share the cmake invocation you used?

And, just as a sanity check, what’s the output of ./bin/gmx -version when called from the build directory?

cmake … -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=OpenCL -DGMX_MPI=on

As for the sanity check: I get "./bin/gmx: No such file or directory" when calling it from the build directory, and

 > :-) GROMACS - gmx, 2022.3 (-:
> 
> Executable:   /usr/local/bin/gmx
> Data prefix:  /usr/local
> Working dir:  /mnt/d/.../gromacs-2022.3/build
> Command line:
>   gmx -version
> 
> GROMACS version:    2022.3
> Precision:          mixed
> Memory model:       64 bit
> MPI library:        thread_mpi
> OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 128)
> GPU support:        disabled
> SIMD instructions:  AVX2_128
> CPU FFT library:    fftw-3.3.8-sse2-avx-avx2-avx2_128
> GPU FFT library:    none
> RDTSCP usage:       enabled
> TNG support:        enabled
> Hwloc support:      disabled
> Tracing support:    disabled
> C compiler:         /usr/bin/cc GNU 9.3.0
> C compiler flags:   -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
> C++ compiler:       /usr/bin/c++ GNU 9.3.0
> C++ compiler flags: -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp

when called from /usr/local/

You’re building GROMACS with library MPI, so the binary is called gmx_mpi, not gmx. You probably have some other (non-MPI, non-GPU) GROMACS installed.
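A quick way to confirm which binaries are being picked up and what each was built with (assuming both installs are on your PATH) is something like:

which -a gmx gmx_mpi
gmx -version | grep -i "gpu support"
gmx_mpi -version | grep -i "gpu support"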

Huh, turns out you’re right. gmx and gmx_mpi are both sourced and found by WSL, and the latter has OpenCL support. I tested the MPI version in the tutorial notebook and the temperature equilibration now seems to take 16 minutes instead of 16-24 hours, thank you so much! I guess I’ll just use gmx_mpi from now on, unless there is a way to replace the old gmx with it.
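Maybe something like an alias in ~/.bashrc would do as a shell-level workaround (just an assumption on my part, keeping both builds installed):

alias gmx=gmx_mpi   # every plain gmx call then runs the MPI/OpenCL build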

Though apparently mdrun doesn’t detect any devices, which might be a hardware issue.

Executable:   /usr/local/bin/gmx_mpi
Data prefix:  /usr/local
Working dir:  /mnt/d/.../gromacs-2022.3/build
Command line:
  gmx_mpi -version

GROMACS version:    2022.3
Precision:          mixed
Memory model:       64 bit
MPI library:        MPI
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support:        OpenCL
SIMD instructions:  AVX2_128
CPU FFT library:    fftw-3.3.8-sse2-avx-avx2-avx2_128
GPU FFT library:    clFFT
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      disabled
Tracing support:    disabled
C compiler:         /usr/bin/cc GNU 9.3.0
C compiler flags:   -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler:       /usr/bin/c++ GNU 9.3.0
C++ compiler flags: -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
OpenCL include dir: /usr/include
OpenCL library:     /usr/lib/x86_64-linux-gnu/libOpenCL.so
OpenCL version:     2.2
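
For reference, one way I can probably check whether WSL exposes any OpenCL platforms/devices at all, independent of GROMACS (assuming the clinfo package from the Ubuntu repositories):

sudo apt install clinfo
clinfo | grep -i -E "platform name|device name|number of devices"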

You can build GROMACS with threadMPI instead of library MPI by setting -DGMX_MPI=OFF. It will compile (and later install) the gmx binary. MPI and GPU support are mostly independent.

If you are running on a single machine and not over multiple nodes, the threadMPI build is likely to be more efficient.
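For example, a minimal reconfigure along those lines (illustrative only; paths, the job count, and whether you need sudo depend on your setup) could look like:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=OpenCL -DGMX_MPI=OFF
make -j 8
sudo make install
source /usr/local/bin/GMXRC   # adjust if your install prefix differs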

There are other CMake settings to directly control the name of the compiled binary, but it’s not a good idea to change them.

Detected devices are not reported in the -version output. You should check the md.log file.
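For instance, something like this should pull out the relevant section (the exact wording in the log can vary between versions):

grep -i -A 10 "hardware detected" md.log
grep -i "gpu" md.log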

Pretty much what I thought. I do have a discrete AMD graphics card, but it might not be supported by WSL.
As for the thread-MPI compilation, I do have gmx installed, and it seems to perform much worse than the MPI package? But I will try reinstalling it.

Running on 1 node with total 1 cores, 8 processing units (GPU detection failed)
Hardware detected on host (the node of MPI rank 0):
  CPU info:
    Vendor: AMD
    Brand:  AMD Ryzen 7 2700U with Radeon Vega Mobile Gfx  
    Family: 23   Model: 17   Stepping: 0
    Features: aes amd apic avx avx2 clfsh cmov cx8 cx16 f16c fma htt lahf misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdrnd rdtscp sha sse2 sse3 sse4a sse4.1 sse4.2 ssse3
  Hardware topology: Basic
    Packages, cores, and logical processors:
    [indices refer to OS logical processors]
      Package  0: [   0   1   2   3   4   5   6   7]
    CPU limit set by OS: -1   Recommended max number of threads: 8

Upon some Googling, it looks like WSL does not support OpenCL: https://github.com/microsoft/WSL/issues/6951

There are announcements from Intel about adding support for their GPUs ("OneAPI/L0, OpenVINO and OpenCL coming to the Windows Subsystem for Linux for Intel GPUs" on the Windows Command Line blog), but I could not find anything about AMD support.