NVT run (nvt.mdp) killed after 7 minutes

GROMACS version: 2022 with CP2K interface

Basically, my system is small, but I couldn't start the NVT job. Can you help me figure out what I am doing wrong?
Thank you in advance

I am getting this error:

Command line:
gmx_cp2k mdrun -s nvt.tpr -v -deffnm nvt

Compiled SIMD: AVX_256, but for this host/run AVX2_256 might be better (see log).
Reading file nvt.tpr, VERSION 2022 (double precision)
Changing nstlist from 10 to 50, rlist from 1.2 to 1.302

Using 4 MPI processes

Non-default thread affinity set, disabling internal thread affinity

Using 1 OpenMP thread per MPI process

starting mdrun ‘Protein in water’
50000 steps, 100.0 ps.
step 0
[barbun8:362337:0] Caught signal 11 (Segmentation fault)
==== backtrace ====
2 0x00000000000686ec mxm_handle_error() /var/tmp/OFED_topdir/BUILD/mxm-3.5.3093/src/mxm/util/debug/debug.c:641
3 0x0000000000068c3c mxm_error_signal_handler() /var/tmp/OFED_topdir/BUILD/mxm-3.5.3093/src/mxm/util/debug/debug.c:616
4 0x0000000000035250 killpg() ??:0
5 0x0000000000b03558 _Z14spread_on_gridPK9gmx_pme_tP11PmeAtomCommPK10pmegrids_tbbPdbi._omp_fn.0() pme_spread.cpp:0
6 0x000000000000dacf GOMP_parallel() /truba/sw/src/gcc/gcc-7-20170326/hamsi_install/x86_64-pc-linux-gnu/libgomp/…/…/./…/libgomp/parallel.c:168
7 0x0000000000b04f42 _Z14spread_on_gridPK9gmx_pme_tP11PmeAtomCommPK10pmegrids_tbbPdbi() ??:0
8 0x0000000000add6f3 _Z10gmx_pme_doP9gmx_pme_tN3gmx8ArrayRefIKNS1_11BasicVectorIdEEEENS2_IS4_EENS2_IKdEES9_S9_S9_S9_S9_PA3_S8_PK9t_commreciiP6t_nrnbP13gmx_wallcyclePA3_dSK_PdSL_ddSL_SL_RKNS1_12StepWorkloadE() ??:0
9 0x0000000001186a47 _ZN24CpuPpLongRangeNonbondeds9calculateEP9gmx_pme_tPK9t_commrecN3gmx8ArrayRefIKNS5_11BasicVectorIdEEEEPNS5_15ForceWithVirialEP14gmx_enerdata_tPA3_KdNS6_ISF_EESA_RKNS5_12StepWorkloadERK22DDBalanceRegionHandler() ??:0
10 0x00000000012c638a _Z8do_forceP8_IO_FILEPK9t_commrecPK14gmx_multisim_tRK10t_inputrecPN3gmx3AwhEP10gmx_enfrotPNSA_10ImdSessionEP6pull_tlP6t_nrnbP13gmx_wallcyclePK14gmx_localtop_tPA3_KdNSA_19ArrayRefWithPaddingINSA_11BasicVectorIdEEEEPK9history_tPNSA_16ForceBuffersViewEPA3_dPK9t_mdatomsP14gmx_enerdata_tNSA_8ArrayRefISQ_EEP10t_forcerecPNSA_21MdrunScheduleWorkloadEPNSA_19VirtualSitesHandlerEPddP9gmx_edsamP24CpuPpLongRangeNonbondedsiRK22DDBalanceRegionHandler() ??:0
11 0x00000000011d9fc7 _ZN3gmx15LegacySimulator5do_mdEv() ??:0
12 0x00000000011d52bd _ZN3gmx15LegacySimulator3runEv() ??:0
13 0x0000000000b26a3d _ZN3gmx8Mdrunner8mdrunnerEv() ??:0
14 0x0000000000585bb9 _ZN3gmx9gmx_mdrunEP19ompi_communicator_tRK13gmx_hw_info_tiPPc() ??:0
15 0x0000000000585ee4 _ZN3gmx9gmx_mdrunEiPPc() ??:0
16 0x000000000063a960 _ZN3gmx24CommandLineModuleManager3runEiPPc() ??:0
17 0x000000000051665c main() ??:0
18 0x0000000000021b35 __libc_start_main() ??:0
19 0x00000000005824d5 _start() ??:0


Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.


mpirun noticed that process rank 0 with PID 362337 on node barbun8 exited on signal 11 (Segmentation fault).

This is my nvt.mdp file:

; md-qmmm-nvt.mdp - used as input into grompp to generate egfp-qmmm-nvt.tpr
integrator = md ; MD using leap-frog integrator
nsteps = 50000 ; 2 fs * 50000 = 100 ps
dt = 0.002 ; 2 fs

; Set output frequency to each step
nstxout = 1000 ; Coordinates to trr
nstvout = 1000 ; Velocities to trr
nstlog = 1000 ; Energies to md.log
nstcalcenergy = 1000 ; Frequency for calculating energies
nstenergy = 1000 ; Energies to ener.edr

; Bond parameters
continuation = no ; first dynamics run
constraint_algorithm = lincs ; holonomic constraints
constraints = none ; no constraints applied
lincs_iter = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy

; Set cut-offs
nstlist = 50
rlist = 1.302
coulombtype = PME
coulomb-modifier = Potential-shift-Verlet
rcoulomb-switch = 1.0
rcoulomb = 1.2
vdwtype = Cut-off
vdw-modifier = Force-switch
rvdw-switch = 1.0
rvdw = 1.2

;Temperature coupling options
tcoupl = v-rescale
nsttcouple = 10
tc-grps = System
tau-t = 0.1
ref-t = 300

; CP2K QMMM parameters
qmmm-cp2k-active = true ; Activate QMMM MdModule
qmmm-cp2k-qmgroup = QMatoms ; Index group of QM atoms
qmmm-cp2k-qmmethod = PBE ; Method to use
qmmm-cp2k-qmcharge = 0 ; Charge of QM system
qmmm-cp2k-qmmultiplicity = 1 ; Multiplicity of QM system

This is my SLURM script for starting the job:

#!/bin/bash
#SBATCH -p mid2
#SBATCH -J nvt-2-1
#SBATCH --nodes=1
#SBATCH --tasks-per-node=4
#SBATCH --cpus-per-task=1
#SBATCH --time=5-00:00:00
#SBATCH --output=slurm-%j.out
#SBATCH --error=slurm-%j.err

echo "SLURM_NODELIST $SLURM_NODELIST"
echo "NUMBER OF CORES $SLURM_NTASKS"

export OMP_NUM_THREADS=1
export OMPI_MCA_btl_openib_allow_ib=1

module purge
module load centos7.9/comp/gcc/7
module load centos7.9/lib/openmpi/4.1.1-gcc-7

export PATH=/truba/home/gromacs-cp2k/build/bin:$PATH

mpirun -np 4 /truba/home/gromacs-cp2k/build/bin/gmx_cp2k mdrun -s nvt.tpr -v -deffnm nvt

exit

“”"“Non-default thread affinity set, disabling internal thread affinity”"" how can I fix this? and where is coming from?

The issue may be that you didn't specify the amount of memory required for the job in your SLURM script, so SLURM isn't providing sufficient memory for the simulation, which causes the segmentation fault. Try adding something like #SBATCH --mem-per-cpu=2G to your SLURM script (or whatever is appropriate for the hardware you are running on).