                      :-) GROMACS - gmx mdrun, 2020.1 (-:

                            GROMACS is written by:
Emile Apol, Rossen Apostolov, Paul Bauer, Herman J.C. Berendsen, Par Bjelkmar,
Christian Blau, Viacheslav Bolnykh, Kevin Boyd, Aldert van Buuren, Rudi van Drunen,
Anton Feenstra, Alan Gray, Gerrit Groenhof, Anca Hamuraru, Vincent Hindriksen,
M. Eric Irrgang, Aleksei Iupinov, Christoph Junghans, Joe Jordan, Dimitrios Karkoulis,
Peter Kasson, Jiri Kraus, Carsten Kutzner, Per Larsson, Justin A. Lemkul,
Viveca Lindahl, Magnus Lundborg, Erik Marklund, Pascal Merz, Pieter Meulenhoff,
Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz, Michael Shirts,
Alexey Shvetsov, Alfons Sijbers, Peter Tieleman, Jon Vincent, Teemu Virolainen,
Christian Wennberg, Maarten Wolf, Artem Zhmurov
                         and the project leaders:
        Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2019, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:      gmx mdrun, version 2020.1
Executable:   /home/adi865j/Softwares/gromacs-2020.1/gromacs-2020.1_built/bin/gmx_mpi
Data prefix:  /home/adi865j/Softwares/gromacs-2020.1/gromacs-2020.1_built
Working dir:  /home/adi865j/G-4x/27L2R_Cu_2x_K/production/500ns_LIG_sph_restr/500ns_LIG_sph_restr_rep1
Process ID:   108478
Command line:
  gmx_mpi mdrun -v -deffnm 2_NODES -s md_MULTINODE_test.tpr

GROMACS version:    2020.1
Verified release checksum is 5cde61b9d46b24153ba84f499c996612640b965eff9a218f8f5e561f94ff4e43
Precision:          single
Memory model:       64 bit
MPI library:        MPI
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:        CUDA
SIMD instructions:  AVX_512
FFT library:        fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      disabled
Tracing support:    disabled
C compiler:         /opt/rh/devtoolset-7/root/usr/bin/cc GNU 7.3.1
C compiler flags:   -mavx512f -mfma -pthread -fexcess-precision=fast -funroll-all-loops
C++ compiler:       /opt/rh/devtoolset-7/root/usr/bin/c++ GNU 7.3.1
C++ compiler flags: -mavx512f -mfma -pthread -fexcess-precision=fast -funroll-all-loops -fopenmp
CUDA compiler:      /cluster/nvidia/cuda/11.0.1/bin/nvcc
                    nvcc: NVIDIA (R) Cuda compiler driver; Copyright (c) 2005-2020 NVIDIA Corporation;
                    Built on Wed_May__6_19:09:25_PDT_2020; Cuda compilation tools, release 11.0, V11.0.167;
                    Build cuda_11.0_bu.TC445_37.28358933_0
CUDA compiler flags: -std=c++14; -gencode arch=compute_30,code=sm_30; -gencode arch=compute_35,code=sm_35;
                    -gencode arch=compute_37,code=sm_37; -gencode arch=compute_50,code=sm_50;
                    -gencode arch=compute_52,code=sm_52; -gencode arch=compute_60,code=sm_60;
                    -gencode arch=compute_61,code=sm_61; -gencode arch=compute_70,code=sm_70;
                    -gencode arch=compute_35,code=compute_35; -gencode arch=compute_50,code=compute_50;
                    -gencode arch=compute_52,code=compute_52; -gencode arch=compute_60,code=compute_60;
                    -gencode arch=compute_61,code=compute_61; -gencode arch=compute_70,code=compute_70;
                    -gencode arch=compute_75,code=compute_75; -use_fast_math;
                    -mavx512f -mfma -pthread -fexcess-precision=fast -funroll-all-loops -fopenmp
CUDA driver:        0.0
CUDA runtime:       N/A

Running on 2 nodes with total 96 cores, 96 logical cores (GPU detection deactivated)
  Cores per node:           48
  Logical cores per node:   48
  Compatible GPUs per node: 0
Hardware detected on host node001.service (the node of MPI rank 0):
  CPU info:
    Vendor:   Intel
    Brand:    Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz
    Family:   6   Model: 85   Stepping: 4
    Features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl clfsh cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
    Number of AVX-512 FMA units: 1 (AVX2 is faster w/o 2 AVX-512 FMA units)
  Hardware topology: Only logical processor count

Highest SIMD level requested by all nodes in run: AVX2_256
SIMD instructions selected at compile time:       AVX_512
This program was compiled for different hardware than you are running on,
which could influence performance. This host supports AVX-512, but since it
only has 1 AVX-512 FMA unit, it would be faster to use AVX2 instead.
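(The SIMD mismatch above is a performance hint, not an error. If the binary were rebuilt to match this CPU, a minimal sketch of the reconfigure could look like the lines below; GMX_SIMD, GMX_MPI, and GMX_GPU are the standard GROMACS 2020 CMake options, but the build directory and make parallelism are illustrative assumptions, not the settings actually used for this executable.)

    # Hypothetical rebuild selecting the AVX2_256 SIMD layer the log recommends
    cd gromacs-2020.1/build            # illustrative build tree
    cmake .. -DGMX_SIMD=AVX2_256 -DGMX_MPI=ON -DGMX_GPU=ON
    make -j 8 && make install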
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E. Lindahl
GROMACS: High performance molecular simulations through multi-level
parallelism from laptops to supercomputers
SoftwareX 1 (2015) pp. 19-25
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl
Tackling Exascale Software Challenges in Molecular Dynamics Simulations with GROMACS
In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale 8759 (2015) pp. 3-27
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R. Shirts,
J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl
GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit
Bioinformatics 29 (2013) pp. 845-54
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C. Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------

++++ PLEASE CITE THE DOI FOR THIS VERSION OF GROMACS ++++
https://doi.org/10.5281/zenodo.3685919
-------- -------- --- Thank You --- -------- --------

The number of OpenMP threads was set by environment variable OMP_NUM_THREADS to 1
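(With OMP_NUM_THREADS set to 1, every MPI rank runs single-threaded. If a different threads-per-rank layout were wanted, either of the following would request it; the value 4 is purely illustrative, and -ntomp is the standard mdrun option equivalent to the environment variable.)

    # Illustrative only: request 4 OpenMP threads per MPI rank
    export OMP_NUM_THREADS=4
    # or, equivalently, pass it on the mdrun command line:
    gmx_mpi mdrun -ntomp 4 -v -deffnm 2_NODES -s md_MULTINODE_test.tpr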
Input Parameters:
   integrator                     = md-vv
   tinit                          = 0
   dt                             = 0.002
   nsteps                         = 250000000
   init-step                      = 0
   simulation-part                = 1
   comm-mode                      = Linear
   nstcomm                        = 1
   bd-fric                        = 0
   ld-seed                        = -170741534
   emtol                          = 10
   emstep                         = 0.01
   niter                          = 20
   fcstep                         = 0
   nstcgsteep                     = 1000
   nbfgscorr                      = 10
   rtpi                           = 0.05
   nstxout                        = 0
   nstvout                        = 50000
   nstfout                        = 50000
   nstlog                         = 1000
   nstcalcenergy                  = 1
   nstenergy                      = 1000
   nstxout-compressed             = 50000
   compressed-x-precision         = 1000
   cutoff-scheme                  = Verlet
   nstlist                        = 20
   pbc                            = xyz
   periodic-molecules             = false
   verlet-buffer-tolerance        = 0.005
   rlist                          = 1.223
   coulombtype                    = PME
   coulomb-modifier               = Potential-shift
   rcoulomb-switch                = 0
   rcoulomb                       = 1.2
   epsilon-r                      = 1
   epsilon-rf                     = inf
   vdw-type                       = Cut-off
   vdw-modifier                   = Force-switch
   rvdw-switch                    = 1
   rvdw                           = 1.2
   DispCorr                       = No
   table-extension                = 1
   fourierspacing                 = 0.12
   fourier-nx                     = 48
   fourier-ny                     = 48
   fourier-nz                     = 48
   pme-order                      = 4
   ewald-rtol                     = 1e-05
   ewald-rtol-lj                  = 0.001
   lj-pme-comb-rule               = Geometric
   ewald-geometry                 = 0
   epsilon-surface                = 0
   tcoupl                         = Andersen-massive
   nsttcouple                     = 1
   nh-chain-length                = 0
   print-nose-hoover-chain-variables = false
   pcoupl                         = Parrinello-Rahman
   pcoupltype                     = Isotropic
   nstpcouple                     = 20
   tau-p                          = 5
   compressibility (3x3):
      compressibility[0] = { 4.50000e-05, 0.00000e+00, 0.00000e+00}
      compressibility[1] = { 0.00000e+00, 4.50000e-05, 0.00000e+00}
      compressibility[2] = { 0.00000e+00, 0.00000e+00, 4.50000e-05}
   ref-p (3x3):
      ref-p[0] = { 1.00000e+00, 0.00000e+00, 0.00000e+00}
      ref-p[1] = { 0.00000e+00, 1.00000e+00, 0.00000e+00}
      ref-p[2] = { 0.00000e+00, 0.00000e+00, 1.00000e+00}
   refcoord-scaling               = COM
   posres-com (3):
      posres-com[0]  = 5.02994e-01
      posres-com[1]  = 7.50674e-01
      posres-com[2]  = 5.79533e-01
   posres-comB (3):
      posres-comB[0] = 5.02994e-01
      posres-comB[1] = 7.50674e-01
      posres-comB[2] = 5.79533e-01
   QMMM                           = false
   QMconstraints                  = 0
   QMMMscheme                     = 0
   MMChargeScaleFactor            = 1
   qm-opts:
      ngQM                        = 0
   constraint-algorithm           = Lincs
   continuation                   = false
   Shake-SOR                      = false
   shake-tol                      = 0.0001
   lincs-order                    = 4
   lincs-iter                     = 1
   lincs-warnangle                = 30
   nwall                          = 0
   wall-type                      = 9-3
   wall-r-linpot                  = -1
   wall-atomtype[0]               = -1
   wall-atomtype[1]               = -1
   wall-density[0]                = 0
   wall-density[1]                = 0
   wall-ewald-zfac                = 3
   pull                           = false
   awh                            = false
   rotation                       = false
   interactiveMD                  = false
   disre                          = No
   disre-weighting                = Conservative
   disre-mixed                    = false
   dr-fc                          = 1000
   dr-tau                         = 0
   nstdisreout                    = 100
   orire-fc                       = 0
   orire-tau                      = 0
   nstorireout                    = 100
   free-energy                    = no
   cos-acceleration               = 0
   deform (3x3):
      deform[0] = { 0.00000e+00, 0.00000e+00, 0.00000e+00}
      deform[1] = { 0.00000e+00, 0.00000e+00, 0.00000e+00}
      deform[2] = { 0.00000e+00, 0.00000e+00, 0.00000e+00}
   simulated-tempering            = false
   swapcoords                     = no
   userint1                       = 0
   userint2                       = 0
   userint3                       = 0
   userint4                       = 0
   userreal1                      = 0
   userreal2                      = 0
   userreal3                      = 0
   userreal4                      = 0
   applied-forces:
     electric-field:
       x: E0 = 0, omega = 0, t0 = 0, sigma = 0
       y: E0 = 0, omega = 0, t0 = 0, sigma = 0
       z: E0 = 0, omega = 0, t0 = 0, sigma = 0
     density-guided-simulation:
       active                     = false
       group                      = protein
       similarity-measure         = inner-product
       atom-spreading-weight      = unity
       force-constant             = 1e+09
       gaussian-transform-spreading-width = 0.2
       gaussian-transform-spreading-range-in-multiples-of-width = 4
       reference-density-filename = reference.mrc
       nst                        = 1
       normalize-densities        = true
       adaptive-force-scaling     = false
       adaptive-force-scaling-time-constant = 4
   grpopts:
      nrdf:       1937.81     29127.2
      ref-t:          300         300
      tau-t:          0.1         0.1
      annealing:       No          No
      annealing-npoints:  0           0
      acc:              0           0           0
      nfreeze:          N           N           N
      energygrp-flags[  0]: 0

Changing nstlist from 20 to 80, rlist from 1.223 to 1.337

Initializing Domain Decomposition on 48 ranks
Dynamic load balancing: locked
Minimum cell size due to atom displacement: 1.464 nm
Initial maximum distances in bonded interactions:
    two-body bonded interactions: 0.519 nm, Exclusion, atoms 568 598
  multi-body bonded interactions: 0.410 nm, Proper Dih., atoms 516 523
Minimum cell size due to bonded interactions: 0.451 nm
Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.219 nm
Estimated maximum distance required for P-LINCS: 0.219 nm
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Guess for relative PME load: 0.19
Will use 36 particle-particle and 12 PME only ranks
This is a guess, check the performance at the end of the log file
Using 12 separate PME ranks, as guessed by mdrun
Optimizing the DD grid for 36 cells with a minimum initial size of 1.830 nm
The maximum allowed number of cells is: X 2 Y 2 Z 2

-------------------------------------------------------
Program:     gmx mdrun, version 2020.1
Source file: src/gromacs/domdec/domdec.cpp (line 2277)
MPI rank:    0 (out of 48)

Fatal error:
There is no domain decomposition for 36 ranks that is compatible with the
given box and a minimum cell size of 1.83 nm
Change the number of ranks or mdrun option -rdd or -dds
Look in the log file for details on the domain decomposition

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
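(The run aborts because, with a minimum cell size of 1.83 nm, at most 2 x 2 x 2 = 8 domain-decomposition cells fit in this box, so the 36 particle-particle ranks cannot be laid out. As the error message says, the usual remedies are fewer ranks, with OpenMP threads taking up the spare cores, or the -rdd/-dds/-npme options. A hedged sketch of one possible relaunch follows; the launcher syntax, rank count, and thread count are illustrative and depend on the cluster, but -npme and -ntomp are standard mdrun options.)

    # Illustrative: 8 PP ranks (a 2x2x2 DD grid fits the box) and no separate
    # PME ranks, with OpenMP threads filling the 2 x 48-core nodes.
    export OMP_NUM_THREADS=12
    mpirun -np 8 gmx_mpi mdrun -v -deffnm 2_NODES -s md_MULTINODE_test.tpr -npme 0 -ntomp 12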