GROMACS version: 2023
GROMACS modification: No
I have run a 100 ns protein-ligand simulation. However, the gmx hbond tool gave a very weird result.
Command: gmx hbond -f md_0_100_center.xtc -s md_0_100.tpr -tu ns -g hbond.log -num hbnum.xvg
Output:
:-) GROMACS - gmx hbond, 2023 (-:
Executable: /home/adglab/Desktop/GROMACS/gromacs-2023/build/bin/gmx
Data prefix: /home/adglab/Desktop/GROMACS/gromacs-2023 (source tree)
Working dir: /home/adglab/Gromacs_trial_run/Analysis/H-bond_100ns
Command line:
gmx hbond -f md_0_100_center.xtc -s md_0_100.tpr -tu ns -g hbond.log -num hbnum.xvg
Reading file md_0_100.tpr, VERSION 2023 (single precision)
Specify 2 groups to analyze:
Group 0 ( System) has 54575 elements
Group 1 ( Protein) has 5570 elements
Group 2 ( Protein-H) has 2765 elements
Group 3 ( C-alpha) has 345 elements
Group 4 ( Backbone) has 1035 elements
Group 5 ( MainChain) has 1379 elements
Group 6 ( MainChain+Cb) has 1708 elements
Group 7 ( MainChain+H) has 1699 elements
Group 8 ( SideChain) has 3871 elements
Group 9 ( SideChain-H) has 1386 elements
Group 10 ( Prot-Masses) has 5570 elements
Group 11 ( non-Protein) has 49005 elements
Group 12 ( Other) has 34 elements
Group 13 ( LIG1) has 34 elements
Group 14 ( CL) has 8 elements
Group 15 ( Water) has 48963 elements
Group 16 ( SOL) has 48963 elements
Group 17 ( non-Water) has 5612 elements
Group 18 ( Ion) has 8 elements
Group 19 ( Water_and_ions) has 48971 elements
Select a group: 1
Selected 1: 'Protein'
Select a group: 13
Selected 13: 'LIG1'
Checking for overlap in atoms between Protein and LIG1
Calculating hydrogen bonds between Protein (5570 atoms) and LIG1 (34 atoms)
Found 504 donors and 979 acceptors
Reading frame 0 time 0.000
Will do grid-search on 21x21x15 grid, rcut=0.34999999
Frame loop parallelized with OpenMP using 48 threads.
Last frame 10000 time 100.000
Back Off! I just backed up hbnum.xvg to ./#hbnum.xvg.1#
Average number of hbonds per timeframe -832023.980 out of 246708 possible
GROMACS reminds you: “UNIX is basically a simple operating system. It just takes a genius to understand its simplicity.” (Dennis Ritchie)
By contrast, when I analysed the same H-bonds with VMD, it gave a sensible plot.
Any explanation of what might have gone wrong in my case?
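One thing I can try on my end (this is only a guess that the parallel frame loop is involved, since the log reports 48 OpenMP threads, and it assumes gmx hbond respects OMP_NUM_THREADS) is rerunning the analysis single-threaded; hbnum_serial.xvg is just an illustrative output name:

# Rerun with one OpenMP thread to test whether the parallel frame loop
# is behind the nonsensical negative average
OMP_NUM_THREADS=1 gmx hbond -f md_0_100_center.xtc -s md_0_100.tpr -tu ns -num hbnum_serial.xvg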
My simulation stopped at 97.78 ns due to a power cut; I resumed it later by adding -cpi to the same "gmx mdrun -deffnm md_0_100" command. Is this why the problem is appearing?
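In case the restart left duplicate or out-of-order frames in the trajectory, I can verify it first with gmx check (a minimal sanity check; as far as I know it reports the frame count and warns about inconsistent time stamps):

# Inspect the trajectory for missing/duplicate frames after the -cpi restart
gmx check -f md_0_100_center.xtc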
I have used the same .tpr and .xtc files to generate the RMSD and RMSF plots, which ran successfully and show the expected patterns.
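For reference, those plots were made with the standard tools; the invocations below are illustrative rather than copied from my shell history:

# RMSD of the protein over the trajectory
gmx rms -s md_0_100.tpr -f md_0_100_center.xtc -tu ns -o rmsd.xvg
# Per-residue RMSF
gmx rmsf -s md_0_100.tpr -f md_0_100_center.xtc -res -o rmsf.xvg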
I previously ran a 2 ns trial of the same protein-ligand system and had no such problem; the only difference now is the longer simulation time.
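To mirror that comparison, I could also rerun gmx hbond on just the first 2 ns of the 100 ns trajectory, using the standard -b/-e time options (interpreted in ns here because of -tu ns; the output name is illustrative):

# Analyse only the first 2 ns to see if a short window behaves like the 2 ns trial
gmx hbond -f md_0_100_center.xtc -s md_0_100.tpr -tu ns -b 0 -e 2 -num hbnum_first2ns.xvg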
Here is my GROMACS version information:
:-) GROMACS - gmx, 2023 (-:
Executable: /home/adglab/Desktop/GROMACS/gromacs-2023/build/bin/gmx
Data prefix: /home/adglab/Desktop/GROMACS/gromacs-2023 (source tree)
Working dir: /home/adglab/Gromacs_trial_run/Analysis/H-bond_100ns
Command line:
gmx --version
GROMACS version: 2023
Precision: mixed
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support: CUDA
NB cluster size: 8
SIMD instructions: AVX_512
CPU FFT library: fftw-3.3.10-sse2-avx
GPU FFT library: cuFFT
Multi-GPU FFT: none
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /usr/bin/cc GNU 13.3.0
C compiler flags: -fexcess-precision=fast -funroll-all-loops -mavx512f -mfma -mavx512vl -mavx512dq -mavx512bw -Wno-missing-field-initializers -O3 -DNDEBUG
C++ compiler: /usr/bin/c++ GNU 13.3.0
C++ compiler flags: -fexcess-precision=fast -funroll-all-loops -mavx512f -mfma -mavx512vl -mavx512dq -mavx512bw -Wno-missing-field-initializers -Wno-cast-function-type-strict SHELL:-fopenmp -O3 -DNDEBUG
BLAS library:
LAPACK library:
CUDA compiler: /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2024 NVIDIA Corporation;Built on Tue_Oct_29_23:50:19_PDT_2024;Cuda compilation tools, release 12.6, V12.6.85;Build cuda_12.6.r12.6/compiler.35059454_0
CUDA compiler flags:-std=c++17;--generate-code=arch=compute_50,code=sm_50;--generate-code=arch=compute_52,code=sm_52;--generate-code=arch=compute_60,code=sm_60;--generate-code=arch=compute_61,code=sm_61;--generate-code=arch=compute_70,code=sm_70;--generate-code=arch=compute_75,code=sm_75;--generate-code=arch=compute_80,code=sm_80;--generate-code=arch=compute_86,code=sm_86;--generate-code=arch=compute_89,code=sm_89;--generate-code=arch=compute_90,code=sm_90;-Wno-deprecated-gpu-targets;--generate-code=arch=compute_53,code=sm_53;--generate-code=arch=compute_80,code=sm_80;-use_fast_math;-Xptxas;-warn-double-usage;-Xptxas;-Werror;-D_FORCE_INLINES;-fexcess-precision=fast -funroll-all-loops -mavx512f -mfma -mavx512vl -mavx512dq -mavx512bw -Wno-missing-field-initializers -Wno-cast-function-type-strict SHELL:-fopenmp -O3 -DNDEBUG
CUDA driver: 0.0
CUDA runtime: 12.60
GROMACS modifications: No