GROMACS installation: enabling GPU

GROMACS version: 2021.3
GROMACS modification: Yes/No

Hi,
I have recently installed GROMACS on an Ubuntu platform.
I used the following cmake command:
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_GPU=CUDA -DMPI_C_COMPILER=mpicc

I am getting the following error at the end:

/usr/bin/ld: ../../../../lib/libmdrun_test_infrastructure.a(moduletest.cpp.o): undefined reference to symbol 'MPI_Barrier'
/usr/bin/ld: /usr/lib/x86_64-linux-gnu/libmpich.so: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[3]: *** [src/programs/mdrun/tests/CMakeFiles/mdrun-tpi-test.dir/build.make:113: bin/mdrun-tpi-test] Error 1
make[2]: *** [CMakeFiles/Makefile2:6646: src/programs/mdrun/tests/CMakeFiles/mdrun-tpi-test.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:2713: CMakeFiles/check.dir/rule] Error 2
make: *** [Makefile:249: check] Error 2

I did “make install” after this.

Running an MD job gives me the following error:
gmx mdrun -deffnm md_0_1 -nb gpu

GROMACS version: 2021.3
Precision: mixed
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support: disabled
SIMD instructions: AVX_512
FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /usr/bin/cc GNU 9.3.0
C compiler flags: -mavx512f -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler: /usr/bin/c++ GNU 9.3.0
C++ compiler flags: -mavx512f -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp

Running on 1 node with total 20 cores, 40 logical cores
Hardware detected:
CPU info:
Vendor: Intel
Brand: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
Family: 6 Model: 85 Stepping: 7
Number of AVX-512 FMA units: 2


Please let me know how to resolve the issue and how to best use the GPU capabilities.
Thank you

There is no error in your output, but I assume it stated that your binary does not have GPU support. That is because you built without GPU support, as you can see from the "GPU support: disabled" line in the version header.
Make sure to pass -DGMX_GPU= and specify your GPU platform (CUDA or OpenCL).
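
For example, a clean non-MPI GPU build from an empty build directory could look like this (a sketch; adjust the source path and the parallel job count to your machine):

cd gromacs-2021.3
rm -rf build && mkdir build && cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=CUDA
make -j 8
make check
sudo make install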

Secondly, your install command should produce an MPI build, but that seems to fail; you then run the gmx mdrun binary, which is not an MPI binary (the "MPI library: thread_mpi" line indicates it is not using lib-MPI). If you want to run multi-node, make sure you have a functioning MPI build.
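
If you do need lib-MPI, a sketch of such a build and launch (assuming a working MPI toolchain with mpicc on your PATH; note that MPI builds install a gmx_mpi binary by default):

cmake .. -DGMX_MPI=on -DGMX_GPU=CUDA -DMPI_C_COMPILER=mpicc
make -j 8 && sudo make install
mpirun -np 4 gmx_mpi mdrun -deffnm md_0_1 -nb gpu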

Hi, thank you for the reply.
I used this command for cmake:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_GPU=CUDA -DMPI_C_COMPILER=mpicc

I specified CUDA for -DGMX_GPU

Please advise on how to best use the GPU capabilities.

Thank you

If you did that, your binary should report "GPU support: CUDA" in the gmx --version output. Have you verified that it does?
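
A quick way to check (a sketch, assuming a POSIX shell with grep):

gmx --version | grep "GPU support"

A CUDA-enabled build should print "GPU support: CUDA".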

Hi, thanks again.
I checked my system for CUDA.
==> When I give the command "nvidia-smi", the following is the output:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.29.05    Driver Version: 495.29.05    CUDA Version: 11.5     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro RTX 4000     On   | 00000000:17:00.0  On |                  N/A |
| 30%   41C    P8    17W / 125W |    274MiB /  7974MiB |      5%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

I re-did the cmake command:
==> "cmake .. -DGMX_GPU=CUDA"
output:
-- The GROMACS-managed build of FFTW 3 will configure with the following optimizations: --enable-sse2;--enable-avx;--enable-avx2;--enable-avx512
-- Configuring done
-- Generating done
-- Build files have been written to: /home/mypc/Documents/software/gromacs-2021.3/build

==> After this I ran "make"
output:
[ 1%] Built target fftwBuild
[ 1%] Built target scanner
[ 2%] Generating release version information
[ 2%] Built target release-version-info
[ 4%] Built target tng_io_obj
[ 4%] Built target tng_io_zlib
[ 4%] Built target lmfit_objlib
[ 5%] Built target thread_mpi
[ 25%] Built target linearalgebra
[ 26%] Built target modularsimulator
[ 93%] Built target libgromacs
[ 94%] Built target gmxapi
[ 95%] Built target nblib
[ 95%] Built target methane-water-integration
[ 95%] Built target argon-forces-integration
[ 95%] Built target gtest
[ 95%] Built target gmock
[ 98%] Built target testutils
[100%] Built target gmx_objlib
[100%] Built target mdrun_objlib
[100%] Built target view_objlib
[100%] Built target gmx

==> Followed by "make check"
output:
[ 0%] Built target mdrun_objlib
[ 0%] Built target scanner
[ 1%] Generating release version information
[ 1%] Built target release-version-info
[ 2%] Built target fftwBuild
[ 3%] Built target tng_io_obj
[ 3%] Built target tng_io_zlib
[ 3%] Built target lmfit_objlib
[ 4%] Built target thread_mpi
[ 18%] Built target linearalgebra
[ 19%] Built target modularsimulator
[ 67%] Built target libgromacs
[ 68%] Built target gmx_objlib
[ 68%] Built target view_objlib
[ 68%] Built target gmx
[ 68%] Built target gmxtests
[ 68%] Built target gtest
[ 68%] Built target gmock
[ 70%] Built target testutils
[ 70%] Built target mdrun_test_infrastructure
[ 71%] Built target gmxapi
[ 72%] Built target workflow-details-test
[ 73%] Built target nblib
[ 73%] Built target nblib_test_infrastructure
[ 74%] Built target nblib-setup-test
[ 74%] Built target methane-water-integration
[ 74%] Built target argon-forces-integration
[ 74%] Built target nblib-integrator-test
[ 75%] Built target nblib-integration-test
[ 75%] Built target nblib-tests
[ 75%] Built target nblib-listed-forces-test
[ 75%] Built target nblib-util-test
[ 75%] Built target testutils-mpi-test
[ 75%] Built target testutils-test
[ 75%] Built target utility-mpi-test
[ 77%] Built target utility-test
[ 77%] Linking CXX executable ../../../../bin/mdlib-test
[ 78%] Built target mdlib-test
[ 79%] Built target awh-test
[ 79%] Built target density_fitting_applied_forces-test
[ 79%] Built target applied_forces-test
[ 79%] Built target listed_forces-test
[ 79%] Built target onlinehelp-test-shared
[ 80%] Built target commandline-test
[ 81%] Built target domdec-mpi-test
[ 81%] Built target domdec-test
[ 82%] Built target ewald-test
[ 82%] Built target fft-test
[ 82%] Linking CXX executable ../../../../bin/gpu_utils-test
[ 82%] Built target gpu_utils-test
[ 83%] Built target hardware-test
[ 84%] Built target math-test
[ 84%] Built target mdrunutility-test-shared
[ 84%] Built target mdrunutility-mpi-test
[ 84%] Built target mdrunutility-test
[ 85%] Built target mdspan-test
[ 85%] Built target mdtypes-test
[ 85%] Built target onlinehelp-test
[ 86%] Built target options-test
[ 86%] Built target pbcutil-test
[ 86%] Built target random-test
[ 86%] Built target restraintpotential-test
[ 86%] Built target table-test
[ 86%] Built target taskassignment-test
[ 86%] Built target topology-test
[ 86%] Built target pull-test
[ 87%] Built target simd-test
[ 87%] Built target compat-test
[ 87%] Built target gmxana-test
[ 88%] Built target pdb2gmx3-test
[ 88%] Built target pdb2gmx2-test
[ 88%] Built target pdb2gmx1-test
[ 89%] Built target gmxpreprocess-test
[ 89%] Built target correlations-test
[ 89%] Built target analysisdata-test-shared
[ 89%] Built target analysisdata-test
[ 90%] Built target coordinateio-test
[ 91%] Built target trajectoryanalysis-test
[ 91%] Built target energyanalysis-test
[ 92%] Built target tool-test
[ 92%] Built target fileio-test
[ 93%] Built target selection-test
[ 93%] Linking CXX executable ../../../../bin/mdrun-tpi-test
/usr/bin/ld: ../../../../lib/libmdrun_test_infrastructure.a(moduletest.cpp.o): undefined reference to symbol 'MPI_Barrier'
/usr/bin/ld: /usr/lib/x86_64-linux-gnu/libmpich.so: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[3]: *** [src/programs/mdrun/tests/CMakeFiles/mdrun-tpi-test.dir/build.make:113: bin/mdrun-tpi-test] Error 1
make[2]: *** [CMakeFiles/Makefile2:6646: src/programs/mdrun/tests/CMakeFiles/mdrun-tpi-test.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:2713: CMakeFiles/check.dir/rule] Error 2
make: *** [Makefile:249: check] Error 2

It gave errors.

Now: "gmx --version"
output:
GROMACS version: 2021.3
Precision: mixed
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support: disabled
SIMD instructions: AVX_512
FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /usr/bin/cc GNU 9.3.0
C compiler flags: -mavx512f -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler: /usr/bin/c++ GNU 9.3.0
C++ compiler flags: -mavx512f -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp

There also seem to be a few errors here in the C compiler flags, and it is still saying "GPU support: disabled".

Please advise on how best to resolve this issue.

Thank you

Hi,

I have done the complete installation again.
Now gmx --version shows that GPU support is enabled.
See below:
gmx pdb2gmx --version

GROMACS version: 2021.3
Verified release checksum is
Precision: mixed
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support: CUDA
SIMD instructions: AVX_512
FFT library: fftw-3.3.8-sse2-avx
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /usr/bin/cc GNU 8.4.0
C compiler flags: -mavx512f -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler: /usr/bin/c++ GNU 8.4.0
C++ compiler flags: -mavx512f -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA compiler: /usr/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2019 NVIDIA Corporation;Built on Sun_Jul_28_19:07:16_PDT_2019;Cuda compilation tools, release 10.1, V10.1.243
CUDA compiler flags:-std=c++14;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_35,code=compute_35;-gencode;arch=compute_32,code=compute_32;-use_fast_math;-D_FORCE_INLINES;-mavx512f -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA driver: 11.40
CUDA runtime: 10.10

I do see "-Wno-missing-field-initializers" in the C++ compiler section.


Good, so mdrun should now be able to utilize an NVIDIA GPU.
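
For example, you could now offload the short-range nonbonded work, and optionally PME, to the GPU (a sketch; whether PME offload applies depends on your simulation settings):

gmx mdrun -deffnm md_0_1 -nb gpu -pme gpu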

That is very much expected; -Wno-missing-field-initializers is just a compiler flag that suppresses a warning, not an error.

Thank you for your feedback.

How did you fix it?

Hi, I have the same problem.
Could you please help me?

After completing the installation with -DGMX_GPU=CUDA (or your preferred backend), execute the following command:

source /usr/local/gromacs/bin/GMXRC

After this, whenever you need GPU support, run the same command before starting mdrun, then proceed with your run (you can verify with gmx --version).
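
To avoid repeating this in every new shell, you could append it to your shell startup file (a sketch assuming bash and the default install prefix):

echo 'source /usr/local/gromacs/bin/GMXRC' >> ~/.bashrc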