Loading gromacs/2023.1-debug-build1
  Loading requirement: gcc/11.3.0 cuda/11.8.0-gcc11.3.0 ucx/1.14.1-gcc11.3.0-cuda11.8.0 openmpi/4.1.5-gcc11.3.0-cuda11.8.0-ucx1.14.1

                  :-) GROMACS - gmx mdrun, 2023.1 (-:

Executable:   /home/x09527a/apps/gromacs/2023.1-debug-build1/bin/gmx_mpi
Data prefix:  /home/x09527a/apps/gromacs/2023.1-debug-build1
Working dir:  ************************************************************
Command line:
  gmx_mpi mdrun -ntomp 10 -v -deffnm step6.9_equilibration -npme 1 -pme gpu -update gpu -nb gpu -bonded gpu -resethway

Back Off! I just backed up step6.9_equilibration.log to ./#step6.9_equilibration.log.10#

Compiled SIMD: AVX2_256, but for this host/run AVX_512 might be better (see log).
Reading file step6.9_equilibration.tpr, VERSION 2023.1 (single precision)
GMX_ENABLE_DIRECT_GPU_COMM environment variable detected, enabling direct GPU communication using GPU-aware MPI.
Changing nstlist from 20 to 100, rlist from 1.212 to 1.329

On host cx091 4 GPUs selected for this run.
Mapping of GPU IDs to the 4 GPU tasks in the 4 ranks on this node:
  PP:0,PP:1,PP:2,PP:3
PP tasks will do (non-perturbed) short-ranged and most bonded interactions on the GPU
PP task will update and constrain coordinates on the GPU
PME tasks will do all aspects on the GPU
GPU direct communication will be used between MPI ranks.
Using 8 MPI processes

Non-default thread affinity set, disabling internal thread affinity
Using 10 OpenMP threads per MPI process

Back Off! I just backed up step6.9_equilibration.xtc to ./#step6.9_equilibration.xtc.10#

Back Off! I just backed up step6.9_equilibration.trr to ./#step6.9_equilibration.trr.10#

Back Off! I just backed up step6.9_equilibration.edr to ./#step6.9_equilibration.edr.10#

starting mdrun 'Title'
50000 steps,    100.0 ps.
step 0

WARNING: Could not free page-locked memory. An unhandled error from a previous CUDA operation was detected. CUDA error #700 (cudaErrorIllegalAddress): an illegal memory access was encountered.

-------------------------------------------------------
Program:     gmx mdrun, version 2023.1
Source file: src/gromacs/gpu_utils/pmalloc.cu (line 88)
MPI rank:    7 (out of 8)

Fatal error:
cudaFreeHost failed: CUDA error #700 (cudaErrorIllegalAddress): an illegal memory access was encountered.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 7 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
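
For reference, below is a minimal sketch of how this run might have been launched. The actual job script is not shown in the log, so the module load, the mpirun invocation, and the 2 nodes x 4 GPUs layout (8 MPI ranks, 10 OpenMP threads each) are assumptions reconstructed from the output above; only the mdrun arguments and the GMX_ENABLE_DIRECT_GPU_COMM variable are taken verbatim from the log.

#!/bin/bash
# Hypothetical launch script -- reconstructed from the log, not the original job file.
# Assumed layout: 2 nodes x 4 GPUs, 4 PP ranks per node, 10 OpenMP threads per rank.

module load gromacs/2023.1-debug-build1

# The log reports this variable was detected, enabling direct GPU communication
# over the GPU-aware (UCX/CUDA) Open MPI build loaded above.
export GMX_ENABLE_DIRECT_GPU_COMM=1
export OMP_NUM_THREADS=10

# The mpirun rank count matches the "Using 8 MPI processes" line; any binding or
# mapping options actually used are unknown and omitted here.
mpirun -np 8 gmx_mpi mdrun -ntomp 10 -v -deffnm step6.9_equilibration \
       -npme 1 -pme gpu -update gpu -nb gpu -bonded gpu -resethway

The "Non-default thread affinity set" notice in the log suggests the real launcher applied its own binding policy, so mdrun disabled its internal pinning; that detail is outside what the sketch above can reproduce.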