Gromacs 2024.2 giving weird results!

GROMACS version:
:-) GROMACS - gmx, 2024.2 (-:

Executable: /usr/local/gromacs-2024.2/bin/gmx_mpi
Data prefix: /usr/local/gromacs-2024.2
Working dir: /home/adglab/Desktop/GSK3beta
Command line:
gmx --version

GROMACS version: 2024.2
Precision: mixed
Memory model: 64 bit
MPI library: MPI
MPI library version: Open MPI v4.1.6, package: Debian OpenMPI, ident: 4.1.6, repo rev: v4.1.6, Sep 30, 2023
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support: CUDA
NBNxM GPU setup: super-cluster 2x2x2 / cluster 8
SIMD instructions: AVX_512
CPU FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
GPU FFT library: cuFFT
Multi-GPU FFT: none
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /usr/bin/gcc-13 GNU 13.3.0
C compiler flags: -fexcess-precision=fast -funroll-all-loops -march=skylake-avx512 -Wno-missing-field-initializers -O3 -DNDEBUG
C++ compiler: /usr/bin/g++-13 GNU 13.3.0
C++ compiler flags: -fexcess-precision=fast -funroll-all-loops -march=skylake-avx512 -Wno-missing-field-initializers -Wno-cast-function-type-strict SHELL:-fopenmp -O3 -DNDEBUG
BLAS library: Internal
LAPACK library: Internal
CUDA compiler: /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2024 NVIDIA Corporation;Built on Tue_Oct_29_23:50:19_PDT_2024;Cuda compilation tools, release 12.6, V12.6.85;Build cuda_12.6.r12.6/compiler.35059454_0
CUDA compiler flags: -std=c++17;--generate-code=arch=compute_50,code=sm_50;--generate-code=arch=compute_52,code=sm_52;--generate-code=arch=compute_60,code=sm_60;--generate-code=arch=compute_61,code=sm_61;--generate-code=arch=compute_70,code=sm_70;--generate-code=arch=compute_75,code=sm_75;--generate-code=arch=compute_80,code=sm_80;--generate-code=arch=compute_86,code=sm_86;--generate-code=arch=compute_89,code=sm_89;--generate-code=arch=compute_90,code=sm_90;-Wno-deprecated-gpu-targets;--generate-code=arch=compute_53,code=sm_53;--generate-code=arch=compute_80,code=sm_80;-use_fast_math;-Xptxas;-warn-double-usage;-Xptxas;-Werror;-D_FORCE_INLINES;-Xcompiler;-fopenmp;-fexcess-precision=fast -funroll-all-loops -march=skylake-avx512 -Wno-missing-field-initializers -Wno-cast-function-type-strict SHELL:-fopenmp -O3 -DNDEBUG
CUDA driver: 12.60
CUDA runtime: 12.60

GROMACS modification: No

Hi All,

I have recently upgraded to Gromacs 2024.2. Previously I used Gromacs 2023 and ran a few simulations, which gave good results. The speed was around 59 ns/day.

But recently I uninstalled the previous build and installed the new one. Since then, the speed has not been consistent: it started at 36 ns/day, then suddenly increased to 136 ns/day, and now it has jumped again to 222 ns/day. The results I'm getting are also very strange; all the RMSD values are very high, around 8-10 nm.

It is not simply a case of a bad ligand that doesn't bind: I ran simulations of the same protein-ligand system in Gromacs 2023 and got very good results, with the RMSD around 0.3-0.4 nm throughout the simulation. After running it in Gromacs 2024.2, the RMSD suddenly goes as high as 8 nm, which doesn't seem logical to me.
The speed suddenly becoming exceptionally high is also suspicious.
Is there anything that could have gone wrong while building Gromacs?

Did all the tests pass?
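
For reference, the tests can be run from the build directory before installing. A typical sequence for an MPI + CUDA build like yours might look like the following; the cmake options here are only an example, not necessarily what you used:

cmake .. -DGMX_MPI=ON -DGMX_GPU=CUDA -DREGRESSIONTEST_DOWNLOAD=ON
make -j 8
make check
sudo make install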

How many simulations did you run in 2023 with low RMSD? How many have you run in 2024 with high RMSD? There is no use comparing just one or two simulations from each version. Have you run at least four in each version? Is there a statistically significant difference in the RMSDs between the versions, when comparing the same system?
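
If it helps to quantify the comparison, gmx analyze reports the average, standard deviation and an error estimate for each RMSD curve, which is easier than judging by eye. For example (the .xvg file names here are placeholders):

gmx analyze -f rmsd_2023_run1.xvg
gmx analyze -f rmsd_2024_run1.xvg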

Yes, all the tests passed during the installation of the new version (2024.2). However, it is an MPI-enabled version, even though I am running simulations on a single-node workstation.

I conducted 10 simulations using Gromacs 2024.2, all of which resulted in very high RMSD values. In contrast, simulations run on Gromacs 2023 (4 simulations) produced consistent RMSD values with no sudden spikes or abnormally high RMSD. I switched to the new version due to a bug in the gmx anaeig function in Gromacs 2023, which led me to uninstall it.

Should I rebuild the latest version and compare the results to check for consistency?

Using an MPI-enabled version shouldn't affect the results.
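
For completeness, running the MPI build on a single workstation is fine; something along these lines launches one rank with OpenMP threads (the thread count is just an example):

mpirun -np 1 gmx_mpi mdrun -deffnm md_0_100 -ntomp 16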

It seems like you’ve done quite thorough testing. Are you using different TPRs for all 14 simulations? Or are you using the same TPR? Does the same TPR in 2023 give a different result in 2024?

I can’t think of anything that should cause consistently different results in 2024. So, please open an issue (Issues · GROMACS / GROMACS · GitLab), preferably with input to reproduce your observations, if you are sure it’s not just random differences.
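
If you want to rule out differences in the inputs themselves, gmx check can compare two run input files directly (the file names here are placeholders):

gmx check -s1 md_2023.tpr -s2 md_2024.tpr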

I generated the TPR files separately for each set of simulations. For the 10 simulations run with Gromacs 2024.2, the TPR files were identical, and similarly, the 4 simulations run with Gromacs 2023 used identical TPR files.

I haven’t yet tested whether the TPR files generated with Gromacs 2023 produce the same results in Gromacs 2024.2, but I will check and update you.

Hi,

I ran the MD with the old TPR file (generated with Gromacs 2023) and got quite similar results to the run with the new TPR file (generated with Gromacs 2024.2), although the run with the new TPR file had even larger fluctuations at several time points.
So clearly, the topology file is not the issue.

I am attaching a screenshot of the RMSD plot after plotting it in XMgrace.

The plot shows very high fluctuations at around 55 ns, and again at around 65 ns.
Earlier, when I ran the MD with Gromacs 2023, the RMSD stayed below 1 nm throughout the run.

I selected "Backbone" for the least-squares fit and "LIG1" for the RMSD calculation. But if I'm calculating the RMSD of LIG1, wouldn't it be reasonable to select "LIG1" for both the least-squares fit and the RMSD calculation?
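
For reference, the gmx rms invocation I used was along these lines (LIG1 is a group defined in an index file; the index and output file names here are placeholders):

echo "Backbone LIG1" | gmx rms -s md_0_100.tpr -f md_center.xtc -n index.ndx -o rmsd_lig.xvg -tu ns

and the alternative I am asking about would be:

echo "LIG1 LIG1" | gmx rms -s md_0_100.tpr -f md_center.xtc -n index.ndx -o rmsd_lig_fit_lig.xvg -tu ns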

What you see in that RMSD plot are not large fluctuations. They are artifacts of PBC crossings. There are multiple forum threads about that; see e.g. RMSD shoots up.

Thanks for the help.

Actually, I have re-centered the trajectory using the following commands, in this order:
gmx trjconv -s md_0_100.tpr -f md_0_100.xtc -o md_whole.xtc -pbc whole

Then,
gmx trjconv -s md_0_100.tpr -f md_whole.xtc -pbc mol -o md_mol.xtc

And finally,
gmx trjconv -s md_0_100.tpr -f md_mol.xtc -ur compact -o md_center.xtc

Then I finally calculated the RMSD using this last trajectory, which gave the plot I attached earlier in this thread.

When I watched the trajectory, it mostly looked fine, with the complex staying intact most of the time and no residues flying around.
Is there anything else I can try to minimize those PBC artifacts?
However, I calculated the RMSD again using the same XTC file, this time selecting "Backbone" for both the least-squares fit and the RMSD calculation, and that gave a nice plot, as follows:

You need to avoid jumps across the periodic boundary as well. Try:

gmx trjconv -s md_0_100.tpr -f md_0_100.xtc -o md_whole.xtc -pbc whole
gmx trjconv -s md_0_100.tpr -f md_whole.xtc -pbc nojump -o md_nojump.xtc

You may want to run gmx trjconv -s md_0_100.tpr -f md_nojump.xtc -center -o md_center.xtc as well.

You can use other options if you want to visualise the trajectory - analysis and viewing may need different trajectories.
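
For visualisation, one commonly used combination is something like the following (the output name is just an example):

echo "Protein System" | gmx trjconv -s md_0_100.tpr -f md_0_100.xtc -pbc mol -center -ur compact -o md_view.xtc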

I followed your suggestion, and the plot came out very nicely. Thank you.