Log file opened on Thu Apr 8 11:51:33 2021
Host: jtellez-XPS-13-9380  pid: 27163  rank ID: 0  number of ranks: 1
                      :-) GROMACS - gmx mdrun, 2018.1 (-:

                            GROMACS is written by:
     Emile Apol      Rossen Apostolov      Paul Bauer      Herman J.C. Berendsen
     Par Bjelkmar    Aldert van Buuren     Rudi van Drunen      Anton Feenstra
     Gerrit Groenhof    Aleksei Iupinov    Christoph Junghans     Anca Hamuraru
     Vincent Hindriksen   Dimitrios Karkoulis    Peter Kasson       Jiri Kraus
     Carsten Kutzner      Per Larsson      Justin A. Lemkul     Viveca Lindahl
     Magnus Lundborg   Pieter Meulenhoff     Erik Marklund       Teemu Murtola
     Szilard Pall        Sander Pronk       Roland Schulz      Alexey Shvetsov
     Michael Shirts     Alfons Sijbers     Peter Tieleman     Teemu Virolainen
     Christian Wennberg   Maarten Wolf
                           and the project leaders:
        Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2017, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:      gmx mdrun, version 2018.1
Executable:   /usr/bin/gmx
Data prefix:  /usr
Working dir:  /media/jtellez/easystore1/hpc/u-eutectic-pim/shell_setup/top_rework
Command line:
  gmx mdrun -v -deffnm ml -table table.xvg

GROMACS version:    2018.1
Precision:          single
Memory model:       64 bit
MPI library:        thread_mpi
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:        disabled
SIMD instructions:  SSE2
FFT library:        fftw-3.3.7-sse2-avx
RDTSCP usage:       disabled
TNG support:        enabled
Hwloc support:      hwloc-1.11.6
Tracing support:    disabled
Built on:           2018-03-31 17:12:46
Built by:           buildd@debian [CMAKE]
Build OS/arch:      Linux x86_64
Build CPU vendor:   Intel
Build CPU brand:    Westmere E56xx/L56xx/X56xx (Nehalem-C)
Build CPU family:   6   Model: 44   Stepping: 1
Build CPU features: aes apic clfsh cmov cx8 cx16 intel lahf mmx msr pcid pclmuldq pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 x2apic
C compiler:         /usr/bin/cc GNU 7.3.0
C compiler flags:   -msse2 -g -O2 -fdebug-prefix-map=/build/gromacs-Fqu8ou/gromacs-2018.1=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast
C++ compiler:       /usr/bin/c++ GNU 7.3.0
C++ compiler flags: -msse2 -g -O2 -fdebug-prefix-map=/build/gromacs-Fqu8ou/gromacs-2018.1=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -std=c++11 -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast

Running on 1 node with total 4 cores, 8 logical cores
Hardware detected:
  CPU info:
    Vendor: Intel
    Brand:  Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
    Family: 6   Model: 142   Stepping: 12
    Features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
  Hardware topology: Full, with devices
    Sockets, cores, and logical processors:
      Socket 0: [ 0 4] [ 1 5] [ 2 6] [ 3 7]
    Numa nodes:
      Node 0 (16420843520 bytes mem): 0 1 2 3 4 5 6 7
      Latency:
               0
         0  1.00
    Caches:
      L1: 32768 bytes, linesize 64 bytes, assoc. 8, shared 2 ways
      L2: 262144 bytes, linesize 64 bytes, assoc. 4, shared 2 ways
      L3: 8388608 bytes, linesize 64 bytes, assoc. 16, shared 8 ways
    PCI devices:
      0000:00:02.0  Id: 8086:3ea0  Class: 0x0300  Numa: 0
      0000:02:00.0  Id: 168c:003e  Class: 0x0280  Numa: 0
      0000:6e:00.0  Id: 1344:5410  Class: 0x0108  Numa: 0

Highest SIMD level requested by all nodes in run: AVX2_256
SIMD instructions selected at compile time:       SSE2
This program was compiled for different hardware than you are running on,
which could influence performance.

The current CPU can measure timings more accurately than the code in
gmx mdrun was configured to use. This might affect your simulation
speed as accurate timings are needed for load-balancing.
Please consider rebuilding gmx mdrun with the GMX_USE_RDTSCP=ON CMake option.

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E. Lindahl
GROMACS: High performance molecular simulations through multi-level parallelism
from laptops to supercomputers
SoftwareX 1 (2015) pp. 19-25
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl
Tackling Exascale Software Challenges in Molecular Dynamics Simulations with GROMACS
In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale
8759 (2015) pp. 3-27
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R. Shirts,
J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl
GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit
Bioinformatics 29 (2013) pp. 845-54
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C. Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------

Multiple energy groups is not implemented for GPUs, falling back to the CPU.
For better performance, run on the GPU without energy groups and then do
gmx mdrun -rerun option on the trajectory with an energy group .tpr file.
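Two of the notes above are actionable. A minimal sketch of each follow-up, assuming a source checkout with an existing CMake build directory and a separately prepared .tpr that defines the energy groups; all file and directory names below are placeholders, not taken from this run:

  # Rebuild mdrun with RDTSCP timings enabled (run inside an existing CMake build directory; placeholder paths).
  cmake .. -DGMX_USE_RDTSCP=ON
  make && make install

  # Recompute energies for an existing trajectory against a .tpr with energy groups, as the note suggests.
  # ml.trr is the trajectory produced by -deffnm ml above; ml_groups.tpr is a hypothetical file name.
  gmx mdrun -s ml_groups.tpr -rerun ml.trr -deffnm ml_rerun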
Input Parameters:
   integrator                     = md
   tinit                          = 0
   dt                             = 0.0001
   nsteps                         = 50000
   init-step                      = 0
   simulation-part                = 1
   comm-mode                      = Linear
   nstcomm                        = 100
   bd-fric                        = 0
   ld-seed                        = 1196445949
   emtol                          = 1
   emstep                         = 0.01
   niter                          = 40
   fcstep                         = 2
   nstcgsteep                     = 1000
   nbfgscorr                      = 10
   rtpi                           = 0.05
   nstxout                        = 1
   nstvout                        = 1
   nstfout                        = 0
   nstlog                         = 1
   nstcalcenergy                  = 1
   nstenergy                      = 1
   nstxout-compressed             = 0
   compressed-x-precision         = 1000
   cutoff-scheme                  = Group
   nstlist                        = 1
   ns-type                        = Grid
   pbc                            = xyz
   periodic-molecules             = false
   verlet-buffer-tolerance        = 0.005
   rlist                          = 0.8
   coulombtype                    = PME-User
   coulomb-modifier               = None
   rcoulomb-switch                = 0
   rcoulomb                       = 0.8
   epsilon-r                      = 1
   epsilon-rf                     = inf
   vdw-type                       = User
   vdw-modifier                   = None
   rvdw-switch                    = 0
   rvdw                           = 0.8
   DispCorr                       = No
   table-extension                = 1
   fourierspacing                 = 0.12
   fourier-nx                     = 20
   fourier-ny                     = 20
   fourier-nz                     = 20
   pme-order                      = 4
   ewald-rtol                     = 1e-05
   ewald-rtol-lj                  = 0.001
   lj-pme-comb-rule               = Geometric
   ewald-geometry                 = 0
   epsilon-surface                = 0
   implicit-solvent               = No
   gb-algorithm                   = Still
   nstgbradii                     = 1
   rgbradii                       = 1
   gb-epsilon-solvent             = 80
   gb-saltconc                    = 0
   gb-obc-alpha                   = 1
   gb-obc-beta                    = 0.8
   gb-obc-gamma                   = 4.85
   gb-dielectric-offset           = 0.009
   sa-algorithm                   = Ace-approximation
   sa-surface-tension             = 2.05016
   tcoupl                         = V-rescale
   nsttcouple                     = 1
   nh-chain-length                = 0
   print-nose-hoover-chain-variables = false
   pcoupl                         = No
   pcoupltype                     = Isotropic
   nstpcouple                     = -1
   tau-p                          = 1
   compressibility (3x3):
      compressibility[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      compressibility[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      compressibility[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
   ref-p (3x3):
      ref-p[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      ref-p[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      ref-p[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
   refcoord-scaling               = No
   posres-com (3):
      posres-com[0]= 0.00000e+00
      posres-com[1]= 0.00000e+00
      posres-com[2]= 0.00000e+00
   posres-comB (3):
      posres-comB[0]= 0.00000e+00
      posres-comB[1]= 0.00000e+00
      posres-comB[2]= 0.00000e+00
   QMMM                           = false
   QMconstraints                  = 0
   QMMMscheme                     = 0
   MMChargeScaleFactor            = 1
   qm-opts:
      ngQM                        = 0
   constraint-algorithm           = Lincs
   continuation                   = true
   Shake-SOR                      = false
   shake-tol                      = 0.0001
   lincs-order                    = 4
   lincs-iter                     = 1
   lincs-warnangle                = 30
   nwall                          = 0
   wall-type                      = 9-3
   wall-r-linpot                  = -1
   wall-atomtype[0]               = -1
   wall-atomtype[1]               = -1
   wall-density[0]                = 0
   wall-density[1]                = 0
   wall-ewald-zfac                = 3
   pull                           = false
   awh                            = false
   rotation                       = false
   interactiveMD                  = false
   disre                          = No
   disre-weighting                = Conservative
   disre-mixed                    = false
   dr-fc                          = 1000
   dr-tau                         = 0
   nstdisreout                    = 100
   orire-fc                       = 0
   orire-tau                      = 0
   nstorireout                    = 100
   free-energy                    = no
   cos-acceleration               = 0
   deform (3x3):
      deform[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      deform[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      deform[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
   simulated-tempering            = false
   swapcoords                     = no
   userint1                       = 0
   userint2                       = 0
   userint3                       = 0
   userint4                       = 0
   userreal1                      = 0
   userreal2                      = 0
   userreal3                      = 0
   userreal4                      = 0
   applied-forces:
     electric-field:
       x:
         E0                       = 0
         omega                    = 0
         t0                       = 0
         sigma                    = 0
       y:
         E0                       = 0
         omega                    = 0
         t0                       = 0
         sigma                    = 0
       z:
         E0                       = 0
         omega                    = 0
         t0                       = 0
         sigma                    = 0
grpopts:
   nrdf:           9
   ref-t:        1000
   tau-t:         0.1
annealing:          No
annealing-npoints:           0
   acc:            0           0           0
   nfreeze:           N           N           N
   energygrp-flags[  0]: 2 0
   energygrp-flags[  1]: 0 2

Using 1 MPI thread
Pinning threads with an auto-selected logical core stride of 2

NOTE: This file uses the deprecated 'group' cutoff_scheme. This will be
removed in a future release when 'verlet' supports all interaction forms.

System total charge: 0.000
Will do PME sum in reciprocal space for electrostatic interactions.
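For orientation, the parameter dump above corresponds to an .mdp input roughly like the excerpt below. This is a sketch, not the original file: only the settings that drive this tabulated-potential run are shown, and the energy-group names are inferred from the pair tables read further down (table_UUCLX_UUCLX.xvg, table_CUCLX_CUCLX.xvg) rather than stated explicitly in the dump.

  ; minimal reconstruction of the key settings (a sketch, not the original .mdp)
  integrator      = md
  dt              = 0.0001                      ; ps
  nsteps          = 50000
  cutoff-scheme   = group                       ; deprecated, see NOTE above
  rlist           = 0.8
  coulombtype     = PME-User
  rcoulomb        = 0.8
  vdw-type        = user
  rvdw            = 0.8
  table-extension = 1
  energygrps      = UUCLX CUCLX                 ; inferred from the table file names
  energygrp-table = UUCLX UUCLX CUCLX CUCLX     ; inferred; requests the two pair tables
  tcoupl          = v-rescale
  tc-grps         = System                      ; assumption: one coupling group (single ref-t/tau-t pair in the dump)
  tau-t           = 0.1
  ref-t           = 1000
  pcoupl          = no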
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------

Using a Gaussian width (1/beta) of 0.25613 nm for Ewald
Potential shift: LJ r^-12: 0.000e+00 r^-6: 0.000e+00, Ewald -0.000e+00
Initialized non-bonded Ewald correction tables, spacing: 8.35e-04 size: 2158

Table routines are used for coulomb: true
Table routines are used for vdw: true
Cut-off's:   NS: 0.8   Coulomb: 0.8   LJ: 0.8
Read user tables from table.xvg with 6001 data points.
Tabscale = 1999 points/nm
Generated table with 3598 data points for Ewald-User.
Tabscale = 1999 points/nm
Read user tables from table_UUCLX_UUCLX.xvg with 6001 data points.
Tabscale = 1999 points/nm
Generated table with 3598 data points for Ewald-User.
Tabscale = 1999 points/nm
Read user tables from table_CUCLX_CUCLX.xvg with 6001 data points.
Tabscale = 1999 points/nm
Generated table with 3598 data points for Ewald-User.
Tabscale = 1999 points/nm

Intra-simulation communication will occur every 1 steps.
Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
  0:  rest

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
G. Bussi, D. Donadio and M. Parrinello
Canonical sampling through velocity rescaling
J. Chem. Phys. 126 (2007) pp. 014101
-------- -------- --- Thank You --- -------- --------
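The user tables above were read with 6001 points at 1999 points/nm, i.e. roughly a 0.0005 nm grid extending to about 3 nm, well past rvdw + table-extension = 1.8 nm. A quick way to check that spacing and the expected seven-column layout (r, f, -f', g, -g', h, -h') is a short script along these lines; it is a sketch that assumes the tables are plain whitespace-separated numeric columns with only the usual # and @ xvg header lines:

  import numpy as np

  # Load one of the user tables, skipping xvg comment/format lines ('#' and '@').
  data = np.loadtxt("table.xvg", comments=("#", "@"))

  npoints, ncols = data.shape
  r = data[:, 0]
  spacing = r[1] - r[0]

  print(f"points: {npoints}, columns: {ncols}")                        # this run read 6001 points; 7 columns expected
  print(f"spacing: {spacing:.6f} nm (~{1.0 / spacing:.0f} points/nm)") # log reports 1999 points/nm
  print(f"range: {r[0]:.3f} to {r[-1]:.3f} nm")                        # should reach past rvdw + table-extension = 1.8 nm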