                      :-) GROMACS - mdrun_mpi, 2020.4 (-:

                            GROMACS is written by:
     Emile Apol      Rossen Apostolov      Paul Bauer     Herman J.C. Berendsen
    Par Bjelkmar      Christian Blau   Viacheslav Bolnykh     Kevin Boyd
 Aldert van Buuren   Rudi van Drunen     Anton Feenstra       Alan Gray
  Gerrit Groenhof     Anca Hamuraru    Vincent Hindriksen  M. Eric Irrgang
  Aleksei Iupinov   Christoph Junghans     Joe Jordan     Dimitrios Karkoulis
    Peter Kasson        Jiri Kraus       Carsten Kutzner      Per Larsson
  Justin A. Lemkul    Viveca Lindahl    Magnus Lundborg     Erik Marklund
    Pascal Merz     Pieter Meulenhoff    Teemu Murtola       Szilard Pall
    Sander Pronk      Roland Schulz      Michael Shirts    Alexey Shvetsov
   Alfons Sijbers     Peter Tieleman      Jon Vincent     Teemu Virolainen
 Christian Wennberg    Maarten Wolf      Artem Zhmurov
                           and the project leaders:
        Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2019, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:      mdrun_mpi, version 2020.4
Executable:   /opt/apps/gromacs/2020.4/gcc/8.4.0/cuda/10.1.243/openmpi/4.0.3/bin/mdrun_mpi
Data prefix:  /opt/apps/gromacs/2020.4/gcc/8.4.0/cuda/10.1.243
Working dir:  /data/homezvol1/calmasri/WRKY/WRKY_polyAT/axial/test
Command line:
  mdrun_mpi -ntomp 20 -deffnm pull -pf pullf.xvg -px pullx.xvg

Compiled SIMD: AVX2_128, but for this host/run AVX_512 might be better (see log).
Reading file pull.tpr, VERSION 2020.4 (single precision)
Changing nstlist from 20 to 100, rlist from 1.418 to 1.53

On host hpc3-gpu-16-02 2 GPUs selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
  PP:0,PP:1
PP tasks will do (non-perturbed) short-ranged and most bonded interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI processes
Non-default thread affinity set, disabling internal thread affinity
Using 20 OpenMP threads per MPI process

NOTE: Your choice of number of MPI ranks and amount of resources results in
      using 20 OpenMP threads per rank, which is most likely inefficient.
      The optimum is usually between 2 and 6 threads per rank.

WARNING: This run will generate roughly 8307 Mb of data

NOTE: DLB will not turn on during the first phase of PME tuning

starting mdrun 'Protein in water'
50000000 steps, 100000.0 ps.

step 1: One or more water molecules can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.
Wrote pdb files with previous and current coordinates

Warning: Triclinic box is too skewed.
Box (3x3):
   Box[    0]={-7.72854e+01,  0.00000e+00,  0.00000e+00}
   Box[    1]={-0.00000e+00, -4.07764e+01, -0.00000e+00}
   Box[    2]={-0.00000e+00, -0.00000e+00, -3.28580e+01}
Can not fix pbc.
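The box matrix printed above is itself diagnostic. GROMACS stores a triclinic box as a lower-triangular matrix and requires a positive diagonal with bounded off-diagonal elements (|b_x| <= a_x/2, |c_x| <= a_x/2, |c_y| <= b_y/2). The following is a minimal, self-contained Python sketch (added for illustration, not part of the run) that checks the matrix exactly as printed against those conditions:

    # Box (3x3) exactly as printed in the log; rows are the box vectors a, b, c.
    box = [
        [-7.72854e+01,  0.00000e+00,  0.00000e+00],   # vector a
        [-0.00000e+00, -4.07764e+01, -0.00000e+00],   # vector b
        [-0.00000e+00, -0.00000e+00, -3.28580e+01],   # vector c
    ]
    a, b, c = box

    # Conditions GROMACS enforces on a triclinic box.
    checks = {
        "lower triangular (a_y = a_z = b_z = 0)": a[1] == a[2] == b[2] == 0.0,
        "positive diagonal (a_x, b_y, c_z > 0)":  a[0] > 0 and b[1] > 0 and c[2] > 0,
        "|b_x| <= a_x/2": abs(b[0]) <= a[0] / 2,
        "|c_x| <= a_x/2": abs(c[0]) <= a[0] / 2,
        "|c_y| <= b_y/2": abs(c[1]) <= b[1] / 2,
    }
    for name, ok in checks.items():
        print(("OK " if ok else "BAD"), name)

Every diagonal element is negative, so the "too skewed" warning looks like a symptom rather than the cause: a box with an inverted diagonal is consistent with the coordinates exploding after the step-1 SETTLE failure, not with a genuinely over-skewed input box.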
[hpc3-gpu-16-02:981  :0:1022] Caught signal 11 (Segmentation fault: address not mapped to object at address 0xfffffffe06375640)
[hpc3-gpu-16-02:00981] *** Process received signal ***
[hpc3-gpu-16-02:00981] Signal: Segmentation fault (11)
[hpc3-gpu-16-02:00981] Signal code: Address not mapped (1)
[hpc3-gpu-16-02:00981] Failing at address: 0xfffffffe06375640
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 981 on node hpc3-gpu-16-02
exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
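Two follow-ups are suggested by the log itself. First, the settle error says to reduce the timestep if appropriate: "50000000 steps, 100000.0 ps." implies dt = 0.002 ps, so a conservative retry would set dt = 0.001 in the .mdp used to build pull.tpr and regenerate it with gmx grompp (the dumped step1 pdb files noted above are worth inspecting for bad contacts first). Second, the NOTE recommends 2-6 OpenMP threads per rank; keeping the 40 cores implied by 2 ranks x 20 threads, one possible relaunch (assuming all of the node's cores are available to the job) is:

  mpirun -np 8 mdrun_mpi -ntomp 5 -deffnm pull -pf pullf.xvg -px pullx.xvg

With more PP ranks than GPUs, mdrun can share the two GPUs among the ranks automatically.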