Dry Martini vesicle simulation keeps crashing

GROMACS version: 2018.2/2020.3
GROMACS modification: No

Hi everyone! I used CHARMM-GUI to set up a 20 nm vesicle with Dry Martini. The stochastic dynamics simulations ran fine up to step6.6 (NPT), but the step7 (NVT) production run keeps crashing. Thinking it was a stability problem, I reduced the time step from the default 40 fs to 10 fs, but even so I have only managed to complete about 60 ns of step7, restarting after each crash (see the restart command below).
The vesicle itself looks fine, with no distortion or breakage, but the simulation just cannot get past roughly 60 ns.
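
In case it matters, each time the run crashes I simply resume it from the last checkpoint, roughly like this (step7_production is the file prefix from my setup and is a placeholder):

# Resume the crashed step7 run from its checkpoint file
gmx mdrun -deffnm step7_production -cpi step7_production.cpt -v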

Below is the .mdp file I used:
; Run control:
integrator = sd
tinit = 0.0
dt = 0.010
nsteps = 5000000

; COM motion removal and output control:
nstcomm = 1
nstxout = 0
nstvout = 0
nstfout = 0
nstlog = 5000
nstenergy = 5000
nstxout-compressed = 50000
compressed-x-precision = 1000

; Neighbour searching:
cutoff-scheme = Verlet
verlet-buffer-tolerance = 0.005

; Non-bonded interactions (Dry Martini settings):
epsilon_r = 15
coulombtype = reaction-field
rcoulomb = 1.1
vdw_type = cutoff
vdw-modifier = Potential-shift-verlet
rvdw = 1.1

; Temperature coupling (handled by the sd integrator):
tc-grps = system
tau-t = 4.0
ref-t = 310

; Pressure coupling (off for this NVT run, so the remaining settings are ignored):
pcoupl = no
pcoupltype = semiisotropic
tau-p = 4.0
compressibility = 3e-4 0.0
ref-p = 0.0 0.0

; Generate velocities for startup run:
gen_vel = no

I haven't found any posts here on Dry Martini simulations of vesicles or bilayers.
Does anyone have experience running Dry Martini vesicles?
Do I need to change some parameters in the generated step7 .mdp file?
Or should I use GROMACS 4.6.x, the version used in the Dry Martini paper?

Any suggestion is greatly appreciated!

Cheers,
Choon-Peng Chng, Ph.D.
Nanyang Technological University, Singapore

What does the log file say when it crashes? Or what does the mdrun output say?
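
For example, you could check the last lines written before it died (step7_production here is a placeholder for your actual output prefix):

tail -n 50 step7_production.log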

Thanks for your response.

The GROMACS mdrun log file shows no error messages.
The log file from the job submission system on our cluster says:
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 49718 RUNNING AT hpc-n018
= EXIT CODE: 139
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES

On my laptop, the same run dies with a segmentation fault. Exit code 139 is 128 + 11, i.e. SIGSEGV, so it's likely the same segfault was being triggered on the cluster.
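
If it would help with debugging, I can try to capture a backtrace the next time it segfaults on my laptop, along these lines (a sketch; the core file name can vary by system):

ulimit -c unlimited    # allow the shell to write a core dump
gmx mdrun -deffnm step7_production -ntomp 4
gdb $(which gmx) core    # at the gdb prompt, 'bt' prints the backtrace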

I actually installed GROMACS 4.6.7 yesterday, the version used in the Dry Martini paper.
With it, I managed to run the step7 production run (NVT) for about 25 ns over 2 hours on my 4-core laptop.
I have yet to try more cores and longer run times, as I'm still getting our cluster admin to install the old version.
So perhaps something changed in the GROMACS stochastic dynamics code between 4.6.x and 2018.x/2020.x?
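
For reference, 4.6.x has no gmx wrapper, so the commands I ran on the laptop were roughly the following (file names follow my CHARMM-GUI output and are placeholders):

grompp -f step7_production.mdp -c step6.6_equilibration.gro -p system.top -o step7_production.tpr
mdrun -deffnm step7_production -nt 4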