GROMACS version: 2023
GROMACS modification: No
Hello everyone!
Recently I've moved to GROMACS 2023 and am really happy with its performance. Occasionally, however, it segfaults during Nose-Hoover & Parrinello-Rahman NPT simulations of one of my systems. It warns with "Pressure scaling more than 1%", even though the system appears well equilibrated under Berendsen NPT, and after 5-6 ns it crashes with a segfault. I've run grompp and mdrun from the exact same files on another machine with GROMACS 2021.2 and saw no errors or warnings over 10x the simulation length. The cell dimensions also fluctuate less in the run with the older GROMACS.
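One way to compare the box fluctuations between the two runs is to pull the box vectors out of the energy files (the .edr name below is just a placeholder):

# extract box dimensions vs. time for plotting/comparison
echo Box-X Box-Y Box-Z | gmx energy -f md.edr -o box.xvg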
Has anyone had the same problem and knows how to solve it? For now, the only option I have is to downgrade to an older version of GROMACS.
I don't use any experimental environment variables and run gmx mdrun as is, without specifying anything more than the simulation name (the automatic mdrun performance choices work best on my hardware).
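For completeness, the whole workflow is essentially just the following (file names are placeholders, not my actual ones):

# preprocess from the equilibrated Berendsen NPT output, then run with default settings
gmx grompp -f md.mdp -c npt.gro -t npt.cpt -p topol.top -o md.tpr
gmx mdrun -deffnm md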
Hardware and System
GPU: NVIDIA GeForce RTX 2080 Ti
CPU: AMD Ryzen 7 3700X
RAM: 16 GB
OS: Ubuntu 22.04
Here is my gmx --version output
GROMACS version: 2023
Precision: mixed
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support: CUDA
NB cluster size: 8
SIMD instructions: AVX2_256
CPU FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128
GPU FFT library: cuFFT
Multi-GPU FFT: none
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /usr/bin/cc GNU 11.3.0
C compiler flags: -fexcess-precision=fast -funroll-all-loops -mavx2 -mfma -Wno-missing-field-initializers -O3 -DNDEBUG
C++ compiler: /usr/bin/c++ GNU 11.3.0
C++ compiler flags: -fexcess-precision=fast -funroll-all-loops -mavx2 -mfma -Wno-missing-field-initializers -Wno-cast-function-type-strict -fopenmp -O3 -DNDEBUG
BLAS library: External - detected on the system
LAPACK library: External - detected on the system
CUDA compiler: /usr/local/cuda-12.0/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2023 NVIDIA Corporation;Built on Fri_Jan__6_16:45:21_PST_2023;Cuda compilation tools, release 12.0, V12.0.140;Build cuda_12.0.r12.0/compiler.32267302_0
CUDA compiler flags:-std=c++17;--generate-code=arch=compute_50,code=sm_50;--generate-code=arch=compute_52,code=sm_52;--generate-code=arch=compute_60,code=sm_60;--generate-code=arch=compute_61,code=sm_61;--generate-code=arch=compute_70,code=sm_70;--generate-code=arch=compute_75,code=sm_75;--generate-code=arch=compute_80,code=sm_80;--generate-code=arch=compute_86,code=sm_86;--generate-code=arch=compute_89,code=sm_89;--generate-code=arch=compute_90,code=sm_90;-Wno-deprecated-gpu-targets;--generate-code=arch=compute_53,code=sm_53;--generate-code=arch=compute_80,code=sm_80;-use_fast_math;-Xptxas;-warn-double-usage;-Xptxas;-Werror;-D_FORCE_INLINES;-fexcess-precision=fast -funroll-all-loops -mavx2 -mfma -Wno-missing-field-initializers -Wno-cast-function-type-strict -fopenmp -O3 -DNDEBUG
CUDA driver: 12.0
CUDA runtime: 12.0
NVIDIA driver version (from nvidia-smi)
NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0
mdp file
integrator = md
dt = 0.002
nsteps = 500000000 ;1000ns
nstlog = 1000
nstxout-compressed = 10000
nstvout = 10000
nstfout = 10000
nstcalcenergy = 100
nstenergy = 1000
;
cutoff-scheme = Verlet
nstlist = 20
rlist = 1.2
coulombtype = pme
rcoulomb = 1.2
vdwtype = Cut-off
vdw-modifier = Force-switch
rvdw_switch = 1.0
rvdw = 1.2
;
tcoupl = Nose-Hoover
tc_grps = MEMB SOLV
tau_t = 1.0 1.0
ref_t = 310 310
;
pcoupl = Parrinello-Rahman
pcoupltype = semiisotropic
tau_p = 5.0
compressibility = 4.5e-5 4.5e-5
ref_p = 1.0 1.0
;
constraints = h-bonds
constraint_algorithm = LINCS
continuation = yes
;
nstcomm = 100
comm_mode = linear
comm_grps = MEMB SOLV
;
refcoord_scaling = com
Thank you!