GROMACS version:
:-) GROMACS - gmx, 2024.2 (-:
Executable: /usr/local/gromacs-2024.2/bin/gmx_mpi
Data prefix: /usr/local/gromacs-2024.2
Working dir: /home/adglab/Desktop/GSK3beta
Command line:
gmx --version
GROMACS version: 2024.2
Precision: mixed
Memory model: 64 bit
MPI library: MPI
MPI library version: Open MPI v4.1.6, package: Debian OpenMPI, ident: 4.1.6, repo rev: v4.1.6, Sep 30, 2023
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support: CUDA
NBNxM GPU setup: super-cluster 2x2x2 / cluster 8
SIMD instructions: AVX_512
CPU FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
GPU FFT library: cuFFT
Multi-GPU FFT: none
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /usr/bin/gcc-13 GNU 13.3.0
C compiler flags: -fexcess-precision=fast -funroll-all-loops -march=skylake-avx512 -Wno-missing-field-initializers -O3 -DNDEBUG
C++ compiler: /usr/bin/g++-13 GNU 13.3.0
C++ compiler flags: -fexcess-precision=fast -funroll-all-loops -march=skylake-avx512 -Wno-missing-field-initializers -Wno-cast-function-type-strict SHELL:-fopenmp -O3 -DNDEBUG
BLAS library: Internal
LAPACK library: Internal
CUDA compiler: /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2024 NVIDIA Corporation;Built on Tue_Oct_29_23:50:19_PDT_2024;Cuda compilation tools, release 12.6, V12.6.85;Build cuda_12.6.r12.6/compiler.35059454_0
CUDA compiler flags: -std=c++17;--generate-code=arch=compute_50,code=sm_50;--generate-code=arch=compute_52,code=sm_52;--generate-code=arch=compute_60,code=sm_60;--generate-code=arch=compute_61,code=sm_61;--generate-code=arch=compute_70,code=sm_70;--generate-code=arch=compute_75,code=sm_75;--generate-code=arch=compute_80,code=sm_80;--generate-code=arch=compute_86,code=sm_86;--generate-code=arch=compute_89,code=sm_89;--generate-code=arch=compute_90,code=sm_90;-Wno-deprecated-gpu-targets;--generate-code=arch=compute_53,code=sm_53;--generate-code=arch=compute_80,code=sm_80;-use_fast_math;-Xptxas;-warn-double-usage;-Xptxas;-Werror;-D_FORCE_INLINES;-Xcompiler;-fopenmp;-fexcess-precision=fast -funroll-all-loops -march=skylake-avx512 -Wno-missing-field-initializers -Wno-cast-function-type-strict SHELL:-fopenmp -O3 -DNDEBUG
CUDA driver: 12.60
CUDA runtime: 12.60
GROMACS modification: No
Hi All,
I have recently upgraded to GROMACS 2024.2. Previously I used GROMACS 2023 and ran a few simulations, which gave good results; the speed was around 59 ns/day.
But recently I uninstalled the previous build and installed the new one. Since then, the speed has not been consistent: it started at 36 ns/day, then suddenly increased to 136 ns/day, and now it has jumped again to 222 ns/day. The results I am getting are also very strange; all the RMSD values are very high, almost 8-10 nm.
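For reference, I launch the runs roughly like this (a sketch; the exact file names and thread counts may differ):

mpirun -np 1 gmx_mpi mdrun -deffnm md -nb gpu -pme gpu -ntomp 8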
It cannot just be a bad ligand that does not bind: I have run a simulation of the same protein-ligand system in GROMACS 2023, and it gave very good results, with the RMSD staying around 0.3-0.4 nm throughout the simulation. After running it in GROMACS 2024.2, the RMSD suddenly gets as high as 8 nm, which does not seem logical to me.
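This is roughly how I compute the RMSD in both versions (a sketch; the index-group selections are made interactively and the file names are approximate):

gmx rms -s md.tpr -f md.xtc -o rmsd.xvg -tu ns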
The speed suddenly becoming exceptionally high is also suspicious.
Is there anything wrong with the way I built GROMACS?
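In case it helps, this is roughly how I configured and built 2024.2 (reconstructed from memory, so the exact options may not be verbatim):

cmake .. -DCMAKE_C_COMPILER=gcc-13 -DCMAKE_CXX_COMPILER=g++-13 \
  -DGMX_MPI=ON -DGMX_GPU=CUDA -DGMX_BUILD_OWN_FFTW=ON \
  -DREGRESSIONTEST_DOWNLOAD=ON -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs-2024.2
make -j 8
make check   # runs the regression tests
sudo make install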