Initial maximum inter charge-group distances:
two-body bonded interactions: 0.446 nm, LJ-14, atoms 36965 36972
multi-body bonded interactions: 0.446 nm, Proper Dih., atoms 36965 36972
Minimum cell size due to bonded interactions: 0.490 nm
Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.218 nm
Estimated maximum distance required for P-LINCS: 0.218 nm
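(Assuming the default lincs-order of 4, P-LINCS couples atoms across lincs-order + 1 = 5 constraints, which is presumably where the 5-constraint count in this estimate comes from.)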
Using 1 separate PME ranks
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 11 cells with a minimum initial size of 0.613 nm
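(A quick check using the values above: the 0.490 nm bonded minimum scaled by the -dds factor gives 0.490 nm * 1.25 = 0.6125 nm, printed as 0.613 nm.)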
The maximum allowed number of cells is: X 24 Y 29 Z 33
Domain decomposition grid 1 x 1 x 11, separate PME ranks 1
PME domain decomposition: 1 x 1 x 1
Interleaving PP and PME ranks
This rank does only particle-particle work.
Domain decomposition rank 0, coordinates 0 0 0
The initial number of communication pulses is: Z 1
The initial domain decomposition cell size is: Z 1.88 nm
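(Assuming the 11 cells tile the box evenly along Z, this implies a box length of roughly 11 * 1.88 nm ≈ 20.7 nm in Z; each 1.88 nm cell is well above the 0.613 nm minimum initial size required above.)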
The maximum allowed distance for charge groups involved in interactions is:
  non-bonded interactions                          1.337 nm
(the following are initial values, they could change due to box deformation)
  two-body bonded interactions (-rdd)              1.337 nm
  multi-body bonded interactions (-rdd)            1.337 nm
  atoms separated by up to 5 constraints (-rcon)   1.881 nm
When dynamic load balancing gets turned on, these settings will change to:
The maximum number of communication pulses is: Z 1
The minimum size for domain decomposition cells is 1.337 nm
The requested allowed shrink of DD cells (option -dds) is: 0.80
The allowed shrink of domain decomposition cells is: Z 0.71
The maximum allowed distance for charge groups involved in interactions is:
  non-bonded interactions                          1.337 nm
  two-body bonded interactions (-rdd)              1.337 nm
  multi-body bonded interactions (-rdd)            1.337 nm
  atoms separated by up to 5 constraints (-rcon)   1.337 nm
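(The 0.71 allowed shrink is consistent with the ratio of the minimum DLB cell size to the initial cell size, 1.337 nm / 1.881 nm ≈ 0.71; note that the -rcon limit correspondingly drops from 1.881 nm, the initial Z cell size, to 1.337 nm once dynamic load balancing is active.)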
Using two step summing over 3 groups of on average 3.7 ranks
Using 12 MPI processes
Using 5 OpenMP threads per MPI process
On host gpu029.pvt.bridges.psc.edu 2 GPUs auto-selected for this run.
Mapping of GPU IDs to the 4 GPU tasks in the 4 ranks on this node:
PP:0,PP:0,PP:1,PP:1
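(In total the run therefore uses 12 * 5 = 60 threads: 11 particle-particle (PP) ranks plus 1 separate PME rank, each with 5 OpenMP threads. On this node the 4 PP ranks share the 2 auto-selected GPUs, two ranks per GPU, as shown by the PP:0,PP:0,PP:1,PP:1 mapping.)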
NOTE: GROMACS was configured without NVML support, hence it cannot exploit
application clocks of the detected Tesla P100-PCIE-16GB GPU to improve performance.
Recompile with the NVML library (compatible with the driver used) or set application clocks manually.
Overriding thread affinity set outside gmx mdrun
Pinning threads with an auto-selected logical core stride of 1
The -resetstep functionality is deprecated, and may be removed in a future version.
System total charge: 0.000
Will do PME sum in reciprocal space for electrostatic interactions.
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------
Using a Gaussian width (1/beta) of 0.384195 nm for Ewald
Potential shift: LJ r^-12: -1.122e-01 r^-6: -3.349e-01, Ewald -8.333e-06
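(These shifts are consistent with a 1.2 nm cut-off, which is an inference rather than something stated here: (1/1.2)^12 ≈ 1.122e-01 and (1/1.2)^6 ≈ 3.349e-01 for LJ, and for Ewald, with beta = 1/0.384195 nm ≈ 2.603 nm^-1, erfc(2.603 * 1.2)/1.2 ≈ 1.0e-05/1.2 ≈ 8.3e-06, matching the reported -8.333e-06 in magnitude.)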
Initialized non-bonded Ewald correction tables, spacing: 1.02e-03 size: 1176
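(The correction table then spans about 1176 * 1.02e-03 nm ≈ 1.20 nm, i.e. roughly the same apparent 1.2 nm cut-off.)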
Long Range LJ corr.: <C6> 3.3098e-04
Generated table with 1168 data points for Ewald.
Tabscale = 500 points/nm
Generated table with 1168 data points for LJ6.
Tabscale = 500 points/nm
Generated table with 1168 data points for LJ12.
Tabscale = 500 points/nm
Generated table with 1168 data points for 1-4 COUL.
Tabscale = 500 points/nm
Generated table with 1168 data points for 1-4 LJ6.
Tabscale = 500 points/nm
Generated table with 1168 data points for 1-4 LJ12.
Tabscale = 500 points/nm
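(Each of these tables covers 1168 / 500 points/nm ≈ 2.34 nm, consistent with the 1.337 nm list cut-off plus what is presumably the default 1 nm table-extension.)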
Using GPU 8x8 nonbonded short-range kernels
Using a dual 8x4 pair-list setup updated with dynamic, rolling pruning:
outer list: updated every 100 steps, buffer 0.137 nm, rlist 1.337 nm
inner list: updated every 12 steps, buffer 0.002 nm, rlist 1.202 nm
At tolerance 0.005 kJ/mol/ps per atom, equivalent classical 1x1 list would be:
outer list: updated every 100 steps, buffer 0.290 nm, rlist 1.490 nm
inner list: updated every 12 steps, buffer 0.051 nm, rlist 1.251 nm
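(In each case rlist is the underlying cut-off plus the listed buffer; with the apparent 1.2 nm cut-off: 1.2 + 0.137 = 1.337 nm and 1.2 + 0.002 = 1.202 nm for the dual setup, and 1.2 + 0.290 = 1.490 nm and 1.2 + 0.051 = 1.251 nm for the equivalent classical 1x1 list.)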
Using Lorentz-Berthelot Lennard-Jones combination rule
Initializing Parallel LINear Constraint Solver
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess
P-LINCS: A Parallel Linear Constraint Solver for molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 116-122
-------- -------- --- Thank You --- -------- --------
The number of constraints is 30135
There are inter charge-group constraints, so selected coordinates will be communicated at each LINCS iteration.