                  :-) GROMACS - gmx mdrun, 2024 (-:

Copyright 1991-2024 The GROMACS Authors.
GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

Current GROMACS contributors:
    Mark Abraham           Andrey Alekseenko      Vladimir Basov
    Cathrine Bergh         Eliane Briand          Ania Brown
    Mahesh Doijade         Giacomo Fiorin         Stefan Fleischmann
    Sergey Gorelov         Gilles Gouaillardet    Alan Gray
    M. Eric Irrgang        Farzaneh Jalalypour    Joe Jordan
    Carsten Kutzner        Justin A. Lemkul       Magnus Lundborg
    Pascal Merz            Vedran Miletic         Dmitry Morozov
    Julien Nabet           Szilard Pall           Andrea Pasquadibisceglie
    Michele Pellegrino     Hubert Santuz          Roland Schulz
    Tatiana Shugaeva       Alexey Shvetsov        Philip Turner
    Alessandra Villa       Sebastian Wingbermuehle

Previous GROMACS contributors:
    Emile Apol             Rossen Apostolov       James Barnett
    Paul Bauer             Herman J.C. Berendsen  Par Bjelkmar
    Christian Blau         Viacheslav Bolnykh     Kevin Boyd
    Aldert van Buuren      Carlo Camilloni        Rudi van Drunen
    Anton Feenstra         Oliver Fleetwood       Vytas Gapsys
    Gaurav Garg            Gerrit Groenhof        Bert de Groot
    Anca Hamuraru          Vincent Hindriksen     Victor Holanda
    Aleksei Iupinov        Christoph Junghans     Prashanth Kanduri
    Dimitrios Karkoulis    Peter Kasson           Sebastian Kehl
    Sebastian Keller       Jiri Kraus             Per Larsson
    Viveca Lindahl         Erik Marklund          Pieter Meulenhoff
    Teemu Murtola          Sander Pronk           Michael Shirts
    Alfons Sijbers         Balint Soproni         David van der Spoel
    Peter Tieleman         Carsten Uphoff         Jon Vincent
    Teemu Virolainen       Christian Wennberg     Maarten Wolf
    Artem Zhmurov

Coordinated by the GROMACS project leaders:
    Berk Hess and Erik Lindahl

GROMACS:      gmx mdrun, version 2024
Executable:   .\bin\gmx.exe
Data prefix:  .
Working dir:  c:\Users\USER\Desktop\MD\MzSimulation\direct
Process ID:   1620
Command line:
  gmx mdrun -v -deffnm em -nb cpu

GROMACS version:    2024
Precision:          mixed
Memory model:       64 bit
MPI library:        thread_mpi
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support:        disabled
SIMD instructions:  AVX2_256
CPU FFT library:    fftw3
GPU FFT library:    none
Multi-GPU FFT:      none
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      disabled
Tracing support:    disabled
C compiler:         C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.39.33519/bin/Hostx64/x64/cl.exe MSVC 19.39.33520.0
C compiler flags:   /arch:AVX2 /wd4800 /wd4355 /wd4996 /wd4305 /wd4244 /wd4101 /wd4267 /wd4090 /wd4068 /O2 /Ob2 /DNDEBUG
C++ compiler:       C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.39.33519/bin/Hostx64/x64/cl.exe MSVC 19.39.33520.0
C++ compiler flags: /arch:AVX2 /wd4800 /wd4355 /wd4996 /wd4305 /wd4244 /wd4267 /wd4068 /permissive- /analyze /analyze:stacksize 70000 /wd6001 /wd6011 /wd6053 /wd6054 /wd6385 /wd6386 /wd6387 /wd28199 /wd6239 /wd6240 /wd6294 /wd6326 /wd28020 /wd6330 /wd6993 /wd6031 /wd6244 /wd6246 SHELL:-openmp /O2 /Ob2 /DNDEBUG
BLAS library:       Internal
LAPACK library:     Internal

Running on 1 node with total 0 processing units
Hardware detected on host DESKTOP-JQ3LQV1:
  CPU info:
    Vendor: Intel
    Brand:  Intel(R) Core(TM) i3-6100 CPU @ 3.70GHz
    Family: 6   Model: 94   Stepping: 3
    Features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
  Hardware topology: Only logical processor count
  CPU limit set by OS: -1
  Recommended max number of threads: 4

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E. Lindahl
GROMACS: High performance molecular simulations through multi-level
parallelism from laptops to supercomputers
SoftwareX 1 (2015) pp. 19-25
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl
Tackling Exascale Software Challenges in Molecular Dynamics Simulations
with GROMACS
In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale
8759 (2015) pp. 3-27
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov,
M. R. Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess,
and E. Lindahl
GROMACS 4.5: a high-throughput and highly parallel open source molecular
simulation toolkit
Bioinformatics 29 (2013) pp. 845-54
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and
H. J. C. Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------

++++ PLEASE CITE THE DOI FOR THIS VERSION OF GROMACS ++++
https://doi.org/10.5281/zenodo.10589643
-------- -------- --- Thank You --- -------- --------

Input Parameters:
   integrator                     = steep
   tinit                          = 0
   dt                             = 0.001
   nsteps                         = 50000
   init-step                      = 0
   simulation-part                = 1
   mts                            = false
   mass-repartition-factor        = 1
   comm-mode                      = Linear
   nstcomm                        = 100
   bd-fric                        = 0
   ld-seed                        = -4491409
   emtol                          = 1000
   emstep                         = 0.01
   niter                          = 20
   fcstep                         = 0
   nstcgsteep                     = 1000
   nbfgscorr                      = 10
   rtpi                           = 0.05
   nstxout                        = 0
   nstvout                        = 0
   nstfout                        = 0
   nstlog                         = 1000
   nstcalcenergy                  = 100
   nstenergy                      = 1000
   nstxout-compressed             = 0
   compressed-x-precision         = 1000
   cutoff-scheme                  = Verlet
   nstlist                        = 1
   pbc                            = xyz
   periodic-molecules             = false
   verlet-buffer-tolerance        = 0.005
   verlet-buffer-pressure-tolerance = 0.5
   rlist                          = 1.2
   coulombtype                    = PME
   coulomb-modifier               = Potential-shift
   rcoulomb-switch                = 0
   rcoulomb                       = 1.2
   epsilon-r                      = 1
   epsilon-rf                     = inf
   vdw-type                       = Cut-off
   vdw-modifier                   = Force-switch
   rvdw-switch                    = 1
   rvdw                           = 1.2
   DispCorr                       = No
   table-extension                = 1
   fourierspacing                 = 0.12
   fourier-nx                     = 72
   fourier-ny                     = 72
   fourier-nz                     = 72
   pme-order                      = 4
   ewald-rtol                     = 1e-05
   ewald-rtol-lj                  = 0.001
   lj-pme-comb-rule               = Geometric
   ewald-geometry                 = 3d
   epsilon-surface                = 0
   ensemble-temperature-setting   = not available
   tcoupl                         = No
   nsttcouple                     = -1
   nh-chain-length                = 0
   print-nose-hoover-chain-variables = false
   pcoupl                         = No
   refcoord-scaling               = No
   posres-com (3):
      posres-com[0]= 0.00000e+00
      posres-com[1]= 0.00000e+00
      posres-com[2]= 0.00000e+00
   posres-comB (3):
      posres-comB[0]= 0.00000e+00
      posres-comB[1]= 0.00000e+00
      posres-comB[2]= 0.00000e+00
   QMMM                           = false
   qm-opts:
   ngQM                           = 0
   constraint-algorithm           = Lincs
   continuation                   = false
   Shake-SOR                      = false
   shake-tol                      = 0.0001
   lincs-order                    = 4
   lincs-iter                     = 1
   lincs-warnangle                = 30
   nwall                          = 0
   wall-type                      = 9-3
   wall-r-linpot                  = -1
   wall-atomtype[0]               = -1
   wall-atomtype[1]               = -1
   wall-density[0]                = 0
   wall-density[1]                = 0
   wall-ewald-zfac                = 3
   pull                           = false
   awh                            = false
   rotation                       = false
   interactiveMD                  = false
   disre                          = No
   disre-weighting                = Conservative
   disre-mixed                    = false
   dr-fc                          = 1000
   dr-tau                         = 0
   nstdisreout                    = 100
   orire-fc                       = 0
   orire-tau                      = 0
   nstorireout                    = 100
   free-energy                    = no
   cos-acceleration               = 0
   deform (3x3):
      deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
      deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
      deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
   simulated-tempering            = false
   swapcoords                     = no
   userint1                       = 0
   userint2                       = 0
   userint3                       = 0
   userint4                       = 0
   userreal1                      = 0
   userreal2                      = 0
   userreal3                      = 0
   userreal4                      = 0
   applied-forces:
     electric-field:
       x:
         E0                       = 0
         omega                    = 0
         t0                       = 0
         sigma                    = 0
       y:
         E0                       = 0
         omega                    = 0
         t0                       = 0
         sigma                    = 0
       z:
         E0                       = 0
         omega                    = 0
         t0                       = 0
         sigma                    = 0
     density-guided-simulation:
       active                     = false
       group                      = protein
       similarity-measure         = inner-product
       atom-spreading-weight      = unity
       force-constant             = 1e+09
       gaussian-transform-spreading-width = 0.2
       gaussian-transform-spreading-range-in-multiples-of-width = 4
       reference-density-filename = reference.mrc
       nst                        = 1
       normalize-densities        = true
       adaptive-force-scaling     = false
       adaptive-force-scaling-time-constant = 4
       shift-vector               =
       transformation-matrix      =
     qmmm-cp2k:
       active                     = false
       qmgroup                    = System
       qmmethod                   = PBE
       qmfilenames                =
       qmcharge                   = 0
       qmmultiplicity             = 1
     colvars:
       active                     = false
       configfile                 =
       seed                       = -1
   grpopts:
     nrdf:  97863
     ref-t:     0
     tau-t:     0
   annealing:          No
   annealing-npoints:   0
   acc:      0  0  0
   nfreeze:  N  N  N
   energygrp-flags[  0]: 0

Initializing Domain Decomposition on 1 ranks

NOTE: disabling dynamic load balancing as it is only supported with dynamics,
not with integrator 'steep'.

Dynamic load balancing: off
Using update groups, nr 17632, average size 2.7 atoms, max. radius 0.084 nm
Minimum cell size due to atom displacement: 0.000 nm
Initial maximum distances in bonded interactions:
    two-body bonded interactions: 0.419 nm, LJ-14, atoms 1896 1903
  multi-body bonded interactions: 0.477 nm, CMAP Dih., atoms 40 56
Minimum cell size due to bonded interactions: 0.524 nm
Using 0 separate PME ranks because: there are too few total ranks for
efficient splitting
Optimizing the DD grid for 1 cells with a minimum initial size of 0.524 nm
The maximum allowed number of cells is: X 14 Y 14 Z 14
Domain decomposition grid 1 x 1 x 1, separate PME ranks 0
PME domain decomposition: 1 x 1 x 1
The initial number of communication pulses is:
The initial domain decomposition cell size is:
The maximum allowed distance for atom groups involved in interactions is:
                 non-bonded interactions          1.368 nm
            two-body bonded interactions  (-rdd)  1.368 nm
          multi-body bonded interactions  (-rdd)  1.368 nm

Using 1 MPI thread
Using 4 OpenMP threads

System total charge: 0.000
Will do PME sum in reciprocal space for electrostatic interactions.

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------

Using a Gaussian width (1/beta) of 0.384195 nm for Ewald
Potential shift: LJ r^-12: -2.648e-01 r^-6: -5.349e-01, Ewald -8.333e-06
Initialized non-bonded Coulomb Ewald tables, spacing: 1.02e-03 size: 1176

Generated table with 1100 data points for 1-4 COUL. Tabscale = 500 points/nm
Generated table with 1100 data points for 1-4 LJ6.  Tabscale = 500 points/nm
Generated table with 1100 data points for 1-4 LJ12. Tabscale = 500 points/nm

Using SIMD4xM 4x8 nonbonded short-range kernels
Using a 4x8 pair-list setup:
  updated every 1 steps, buffer 0.000 nm, rlist 1.200 nm

Removing pbc first time
Linking all bonded interactions to atoms
Pinning threads with an auto-selected logical cpu stride of 1

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Miyamoto and P. A. Kollman
SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithms for Rigid
Water Models
J. Comp. Chem. 13 (1992) pp. 952-962
-------- -------- --- Thank You --- -------- --------

Note that activating steepest-descent energy minimization via the
integrator .mdp option and the command gmx mdrun may be available in a
different form in a future version of GROMACS, e.g. gmx minimize and an
.mdp option.

Initiating Steepest Descents

Atom distribution over 1 domains: av 47612 stddev 0 min 47612 max 47612

Started Steepest Descents on rank 0 Sat Mar  2 05:00:41 2024

Steepest Descents:
   Tolerance (Fmax)   =  1.00000e+03
   Number of steps    =        50000
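For reference, the parameter block echoed in this log corresponds to a steepest-descent minimization .mdp file roughly like the sketch below. Only the settings mdrun reports as non-default are shown; the file name em.mdp and the comments are assumptions, not taken from the log itself (everything else defaults as printed in the Input Parameters section above).

```
; em.mdp -- sketch reconstructed from the Input Parameters dump (file name assumed)
integrator       = steep          ; steepest-descent energy minimization
emtol            = 1000.0         ; stop when Fmax < 1000 (kJ/mol/nm)
emstep           = 0.01           ; initial step size (nm)
nsteps           = 50000          ; maximum number of minimization steps

cutoff-scheme    = Verlet
nstlist          = 1
rlist            = 1.2
coulombtype      = PME            ; particle-mesh Ewald electrostatics
coulomb-modifier = Potential-shift
rcoulomb         = 1.2
vdw-type         = Cut-off
vdw-modifier     = Force-switch
rvdw-switch      = 1.0
rvdw             = 1.2
DispCorr         = no
pbc              = xyz
```

A run input built from such a file (for example with gmx grompp -f em.mdp -c system.gro -p topol.top -o em.tpr, where the coordinate and topology file names are assumptions) would then be minimized with the command line recorded above, gmx mdrun -v -deffnm em -nb cpu.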