LINCS warnings at moderate time steps

GROMACS version: 2020.2
GROMACS modification: Yes/No
Dear all. I am simulating water diffusion in a polyelectrolyte. Everything runs well, but the main problem is that I cannot use ordinary time steps (1 or 2 fs); I have to use a time step of about 0.0002 ps or smaller. Whenever I set a larger time step, I receive LINCS warnings and the run crashes. What should I do to be able to use larger time steps?
Here is my log file.

Thank you very much

                  :-) GROMACS - gmx mdrun, 2020.2 (-:

                        GROMACS is written by:
 Emile Apol      Rossen Apostolov      Paul Bauer     Herman J.C. Berendsen
Par Bjelkmar      Christian Blau   Viacheslav Bolnykh     Kevin Boyd    

Aldert van Buuren Rudi van Drunen Anton Feenstra Alan Gray
Gerrit Groenhof Anca Hamuraru Vincent Hindriksen M. Eric Irrgang
Aleksei Iupinov Christoph Junghans Joe Jordan Dimitrios Karkoulis
Peter Kasson Jiri Kraus Carsten Kutzner Per Larsson
Justin A. Lemkul Viveca Lindahl Magnus Lundborg Erik Marklund
Pascal Merz Pieter Meulenhoff Teemu Murtola Szilard Pall
Sander Pronk Roland Schulz Michael Shirts Alexey Shvetsov
Alfons Sijbers Peter Tieleman Jon Vincent Teemu Virolainen
Christian Wennberg Maarten Wolf Artem Zhmurov
and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright © 1991-2000, University of Groningen, The Netherlands.
Copyright © 2001-2019, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS: gmx mdrun, version 2020.2
Executable: /usr/local/gromacs/bin/gmx
Data prefix: /usr/local/gromacs
Working dir: /media/rezayani/Rezayani1/MD-Rezayani/MD/PPO-projects/Main project/AEM-BR/BTMA/test
Process ID: 2130646
Command line:
gmx mdrun -deffnm nvt5 -v -nt 4

GROMACS version: 2020.2
Verified release checksum is 3f718d436b1ac2d44ce97164df8a13322fc143498ba44eccfd567e20d8aaea1d
Precision: single
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support: CUDA
SIMD instructions: AVX2_256
FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /usr/bin/cc GNU 9.3.0
C compiler flags: -mavx2 -mfma -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler: /usr/bin/c++ GNU 9.3.0
C++ compiler flags: -mavx2 -mfma -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA compiler: /usr/local/cuda-11.0/bin/nvcc nvcc: NVIDIA ® Cuda compiler driver;Copyright © 2005-2020 NVIDIA Corporation;Built on Wed_May__6_19:09:25_PDT_2020;Cuda compilation tools, release 11.0, V11.0.167;Build cuda_11.0_bu.TC445_37.28358933_0
CUDA compiler flags:-std=c++14;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_35,code=compute_35;-gencode;arch=compute_50,code=compute_50;-gencode;arch=compute_52,code=compute_52;-gencode;arch=compute_60,code=compute_60;-gencode;arch=compute_61,code=compute_61;-gencode;arch=compute_70,code=compute_70;-gencode;arch=compute_75,code=compute_75;-use_fast_math;-D_FORCE_INLINES;-mavx2 -mfma -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA driver: 11.0
CUDA runtime: N/A

Running on 1 node with total 10 cores, 20 logical cores (GPU detection deactivated)
Hardware detected:
CPU info:
Vendor: Intel
Brand: Genuine Intel® CPU @ 2.10GHz
Family: 6 Model: 63 Stepping: 2
Features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
Hardware topology: Basic
Sockets, cores, and logical processors:
Socket 0: [ 0 10] [ 1 11] [ 2 12] [ 3 13] [ 4 14] [ 5 15] [ 6 16] [ 7 17] [ 8 18] [ 9 19]

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E.
Lindahl
GROMACS: High performance molecular simulations through multi-level
parallelism from laptops to supercomputers
SoftwareX 1 (2015) pp. 19-25
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl
Tackling Exascale Software Challenges in Molecular Dynamics Simulations with
GROMACS
In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale 8759 (2015) pp. 3-27
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R.
Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl
GROMACS 4.5: a high-throughput and highly parallel open source molecular
simulation toolkit
Bioinformatics 29 (2013) pp. 845-54
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- — Thank You — -------- --------

++++ PLEASE CITE THE DOI FOR THIS VERSION OF GROMACS ++++


-------- -------- — Thank You — -------- --------

Input Parameters:
integrator = md
tinit = 0
dt = 0.001
nsteps = 400000
init-step = 0
simulation-part = 1
comm-mode = Linear
nstcomm = 100
bd-fric = 0
ld-seed = -1670027595
emtol = 10
emstep = 0.01
niter = 20
fcstep = 0
nstcgsteep = 1000
nbfgscorr = 10
rtpi = 0.05
nstxout = 0
nstvout = 0
nstfout = 0
nstlog = 1000
nstcalcenergy = 100
nstenergy = 1000
nstxout-compressed = 1000
compressed-x-precision = 1000
cutoff-scheme = Verlet
nstlist = 10
pbc = xyz
periodic-molecules = false
verlet-buffer-tolerance = 0.005
rlist = 1
coulombtype = PME
coulomb-modifier = Potential-shift
rcoulomb-switch = 0
rcoulomb = 1
epsilon-r = 1
epsilon-rf = inf
vdw-type = Cut-off
vdw-modifier = Potential-shift
rvdw-switch = 0
rvdw = 0.9
DispCorr = EnerPres
table-extension = 1
fourierspacing = 0.16
fourier-nx = 32
fourier-ny = 32
fourier-nz = 32
pme-order = 4
ewald-rtol = 1e-05
ewald-rtol-lj = 0.001
lj-pme-comb-rule = Geometric
ewald-geometry = 0
epsilon-surface = 0
tcoupl = Berendsen
nsttcouple = 10
nh-chain-length = 0
print-nose-hoover-chain-variables = false
pcoupl = No
pcoupltype = Isotropic
nstpcouple = -1
tau-p = 1
compressibility (3x3):
compressibility[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compressibility[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compressibility[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p (3x3):
ref-p[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
refcoord-scaling = No
posres-com (3):
posres-com[0]= 0.00000e+00
posres-com[1]= 0.00000e+00
posres-com[2]= 0.00000e+00
posres-comB (3):
posres-comB[0]= 0.00000e+00
posres-comB[1]= 0.00000e+00
posres-comB[2]= 0.00000e+00
QMMM = false
QMconstraints = 0
QMMMscheme = 0
MMChargeScaleFactor = 1
qm-opts:
ngQM = 0
constraint-algorithm = Lincs
continuation = false
Shake-SOR = false
shake-tol = 0.0001
lincs-order = 4
lincs-iter = 1
lincs-warnangle = 30
nwall = 0
wall-type = 9-3
wall-r-linpot = -1
wall-atomtype[0] = -1
wall-atomtype[1] = -1
wall-density[0] = 0
wall-density[1] = 0
wall-ewald-zfac = 3
pull = false
awh = false
rotation = false
interactiveMD = false
disre = No
disre-weighting = Conservative
disre-mixed = false
dr-fc = 1000
dr-tau = 0
nstdisreout = 100
orire-fc = 0
orire-tau = 0
nstorireout = 100
free-energy = no
cos-acceleration = 0
deform (3x3):
deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
simulated-tempering = false
swapcoords = no
userint1 = 0
userint2 = 0
userint3 = 0
userint4 = 0
userreal1 = 0
userreal2 = 0
userreal3 = 0
userreal4 = 0
applied-forces:
electric-field:
x:
E0 = 0
omega = 0
t0 = 0
sigma = 0
y:
E0 = 0
omega = 0
t0 = 0
sigma = 0
z:
E0 = 0
omega = 0
t0 = 0
sigma = 0
density-guided-simulation:
active = false
group = protein
similarity-measure = inner-product
atom-spreading-weight = unity
force-constant = 1e+09
gaussian-transform-spreading-width = 0.2
gaussian-transform-spreading-range-in-multiples-of-width = 4
reference-density-filename = reference.mrc
nst = 1
normalize-densities = true
adaptive-force-scaling = false
adaptive-force-scaling-time-constant = 4
grpopts:
nrdf: 22509
ref-t: 298
tau-t: 1
annealing: No
annealing-npoints: 0
acc: 0 0 0
nfreeze: N N N
energygrp-flags[ 0]: 0

Changing nstlist from 10 to 100, rlist from 1 to 1.054

Using 1 MPI thread

Non-default thread affinity set, disabling internal thread affinity

Using 4 OpenMP threads

System total charge: -0.000
Will do PME sum in reciprocal space for electrostatic interactions.

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- — Thank You — -------- --------

Using a Gaussian width (1/beta) of 0.320163 nm for Ewald
Potential shift: LJ r^-12: -3.541e+00 r^-6: -1.882e+00, Ewald -1.000e-05
Initialized non-bonded Ewald tables, spacing: 9.33e-04 size: 1073

Using SIMD 4x8 nonbonded short-range kernels

Using a dual 4x8 pair-list setup updated with dynamic pruning:
outer list: updated every 100 steps, buffer 0.054 nm, rlist 1.054 nm
inner list: updated every 45 steps, buffer 0.001 nm, rlist 1.001 nm
At tolerance 0.005 kJ/mol/ps per atom, equivalent classical 1x1 list would be:
outer list: updated every 100 steps, buffer 0.190 nm, rlist 1.190 nm
inner list: updated every 45 steps, buffer 0.084 nm, rlist 1.084 nm

Using geometric Lennard-Jones combination rule

Long Range LJ corr.: 5.8169e-04

Removing pbc first time

Initializing LINear Constraint Solver

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess
P-LINCS: A Parallel Linear Constraint Solver for molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 116-122
-------- -------- — Thank You — -------- --------

The number of constraints is 4272
512 constraints are involved in constraint triangles,
will apply an additional matrix expansion of order 4 for couplings
between constraints inside triangles

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Miyamoto and P. A. Kollman
SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithms for Rigid
Water Models
J. Comp. Chem. 13 (1992) pp. 952-962
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, J. P. M. Postma, A. DiNola and J. R. Haak
Molecular dynamics with coupling to an external bath
J. Chem. Phys. 81 (1984) pp. 3684-3690
-------- -------- — Thank You — -------- --------

There are: 9824 Atoms
There are: 896 VSites

Constraining the starting coordinates (step 0)

Constraining the coordinates at t0-dt (step 0)
Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
0: rest
RMS relative constraint deviation after constraining: 9.61e-03
Initial temperature: 315.288 K

Started mdrun on rank 0 Sun Sep 27 22:46:13 2020

       Step           Time
          0        0.00000

Energies (kJ/mol)
Bond Angle Ryckaert-Bell. LJ (SR) Disper. corr.
5.51637e+03 1.95555e+04 1.77367e+04 -3.47180e+03 -3.77691e+03
Coulomb (SR) Coul. recip. Potential Kinetic En. Total Energy
1.71730e+04 2.85789e+03 5.55907e+04 2.97792e+04 8.53699e+04
Conserved En. Temperature Pres. DC (bar) Pressure (bar) Constr. rmsd
8.53699e+04 3.18238e+02 -6.18624e+02 1.37740e+02 9.99801e-03

Received the INT signal, stopping within 100 steps

       Step           Time
        500        0.50000

Writing checkpoint, step 500 at Sun Sep 27 22:46:16 2020

Energies (kJ/mol)
Bond Angle Ryckaert-Bell. LJ (SR) Disper. corr.
5.57665e+03 1.89871e+04 1.77413e+04 -3.37674e+03 -3.77691e+03
Coulomb (SR) Coul. recip. Potential Kinetic En. Total Energy
1.68960e+04 2.92690e+03 5.49743e+04 2.79806e+04 8.29549e+04
Conserved En. Temperature Pres. DC (bar) Pressure (bar) Constr. rmsd
8.30857e+04 2.99018e+02 -6.18624e+02 1.81868e+02 2.04707e-02

<======  ###############  ==>
<====  A V E R A G E S  ====>
<==  ###############  ======>

Statistics over 501 steps using 6 frames

Energies (kJ/mol)
Bond Angle Ryckaert-Bell. LJ (SR) Disper. corr.
5.68568e+03 1.92772e+04 1.77915e+04 -3.37431e+03 -3.77691e+03
Coulomb (SR) Coul. recip. Potential Kinetic En. Total Energy
1.69994e+04 2.85292e+03 5.54555e+04 2.84169e+04 8.38724e+04
Conserved En. Temperature Pres. DC (bar) Pressure (bar) Constr. rmsd
8.39718e+04 3.03680e+02 -6.18624e+02 -2.26640e+02 0.00000e+00

Total Virial (kJ/mol)
1.07632e+04 1.70699e+03 1.30991e+03
1.70601e+03 1.01437e+04 -1.33195e+03
1.30430e+03 -1.32843e+03 9.58882e+03

Pressure (bar)
-4.15209e+02 -5.69296e+02 -4.43640e+02
-5.68977e+02 -2.16625e+02 4.32590e+02
-4.41806e+02 4.31439e+02 -4.80842e+01

M E G A - F L O P S   A C C O U N T I N G

NB=Group-cutoff nonbonded kernels NxN=N-by-N cluster Verlet kernels
RF=Reaction-Field VdW=Van der Waals QSTab=quadratic-spline table
W3=SPC/TIP3p W4=TIP4p (single or pairs)
V&F=Potential and force V=Potential only F=Force only

Computing: M-Number M-Flops % Flops

Pair Search distance check 24.975178 224.777 0.1
NxN Ewald Elec. + LJ [F] 2361.728160 155874.059 77.7
NxN Ewald Elec. + LJ [V&F] 28.613008 3061.592 1.5
NxN LJ [F] 0.481536 15.891 0.0
NxN LJ [V&F] 0.004864 0.209 0.0
NxN Ewald Elec. [F] 494.087616 30139.345 15.0
NxN Ewald Elec. [V&F] 5.983184 502.587 0.3
Calc Weights 16.112160 580.038 0.3
Spread Q Bspline 343.726080 687.452 0.3
Gather F Bspline 343.726080 2062.356 1.0
3D-FFT 492.502038 3940.016 2.0
Solve PME 0.513024 32.834 0.0
Shift-X 0.064320 0.386 0.0
Bonds 2.204400 130.060 0.1
Angles 6.476928 1088.124 0.5
RB-Dihedrals 7.278528 1797.796 0.9
Virial 0.064590 1.163 0.0
Stop-CM 0.075040 0.750 0.0
Calc-Ekin 1.093440 29.523 0.0
Lincs 2.148816 128.929 0.1
Lincs-Mat 49.865408 199.462 0.1
Constraint-V 5.638464 45.108 0.0
Constraint-Vir 0.041760 1.002 0.0
Settle 0.450688 145.572 0.1
Virtual Site 3 0.454272 16.808 0.0

Total 200705.838 100.0

 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

On 1 MPI rank, each using 4 OpenMP threads

Computing: Num Num Call Wall time Giga-Cycles
Ranks Threads Count (s) total sum %

Vsite constr. 1 4 501 0.020 0.167 0.5
Neighbor search 1 4 6 0.050 0.421 1.2
Force 1 4 501 2.716 22.814 64.5
PME mesh 1 4 501 0.399 3.351 9.5
NB X/F buffer ops. 1 4 996 0.035 0.298 0.8
Vsite spread 1 4 507 0.026 0.216 0.6
Write traj. 1 4 2 0.404 3.390 9.6
Update 1 4 501 0.014 0.117 0.3
Constraints 1 4 503 0.329 2.766 7.8
Rest 0.215 1.805 5.1

Total 4.208 35.345 100.0

Breakdown of PME mesh computation

PME spread 1 4 501 0.179 1.504 4.3
PME gather 1 4 501 0.118 0.988 2.8
PME 3D-FFT 1 4 1002 0.087 0.727 2.1
PME solve Elec 1 4 501 0.014 0.121 0.3

           Core t (s)   Wall t (s)        (%)
   Time:       16.006        4.208      380.4
             (ns/day)    (hour/ns)

Performance: 10.287 2.333
Finished mdrun on rank 0 Sun Sep 27 22:46:17 2020

If your simulation requires impractically small time steps to stay stable, then either your topology/force-field parameters are unsound or your run settings are not physically sensible. Your log already shows a warning sign: the RMS relative constraint deviation after constraining the starting coordinates is 9.61e-03, which points to a strained starting geometry. Energy-minimize thoroughly and equilibrate gently before increasing dt, and make sure bonds involving hydrogen are constrained if you want to run at 2 fs.
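As a rough starting point (illustrative values only, not a drop-in fix for your system), an `.mdp` fragment for a constrained run at 2 fs typically looks like this; the `constraints = h-bonds` line is the key one, since without hydrogen-bond constraints a 2 fs step is usually too large:

```ini
; Illustrative .mdp fragment for running at dt = 0.002 ps (2 fs).
; Values are typical defaults; adapt them to your force field and system.
integrator           = md
dt                   = 0.002     ; 2 fs is common when H-bonds are constrained
constraints          = h-bonds   ; constrain all bonds involving hydrogen
constraint-algorithm = lincs
lincs-order          = 4         ; your log shows 4; 6 can help stiff systems
lincs-iter           = 2         ; your log shows 1; 2 improves accuracy cheaply
```

If LINCS warnings persist with these settings, the problem is almost always in the starting structure or topology rather than in the LINCS parameters themselves.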
