Replica Exchange: Mdrun parameters

GROMACS version: 2020
GROMACS modification: No
Hello dear all;
This is the first time I am submitting a Replica Exchange MD (REMD) job on the university supercomputer. In the tutorials for REMD, the final command looks like:
mpirun -np nn gmx_mpi mdrun -v -deffnm remd -multi m -replex x
However, on the supercluster I am working with, this command does not work.
After doing extensive research, I found the MPI-enabled installation of GROMACS on the university cluster, and it seems the command should look like:

srun mdrun_avx512_mpi -ntomp $(( 2 * $SLURM_CPUS_PER_TASK )) -s topol.tpr

My first question is whether this difference (i.e., not using mpirun -np nn) is important, or whether it does not matter.

The output of gmx --version is:
Precision: single
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support: CUDA
SIMD instructions: AVX_256
FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: hwloc-2.2.0
Tracing support: disabled
C compiler: /opt/apps/software/GCCcore/9.3.0/bin/gcc GNU 9.3.0
C compiler flags: -mavx -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler: /opt/apps/software/GCCcore/9.3.0/bin/g++ GNU 9.3.0
C++ compiler flags: -mavx -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA compiler: /opt/apps/software/CUDA/11.0.182-GCC-9.3.0/bin/nvcc nvcc: NVIDIA ® Cuda compiler driver;Copyright © 2005-2020 NVIDIA Corporation;Built on Wed_May__6_19:09:25_PDT_2020;Cuda compilation tools, release 11.0, V11.0.167;Build cuda_11.0_bu.TC445_37.28358933_0
CUDA compiler flags:-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_37,code=compute_37;-gencode;arch=compute_70,code=compute_70;-use_fast_math;;-mavx -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA driver: 0.0
CUDA runtime: N/A

Here is a post that may be useful.

Best regards


Thank you, Alessandra, for your reply.
It turns out that srun does exactly what the mpirun program does:
"mpirun -np nn gmx" is equivalent to the following SLURM script:

#SBATCH --ntasks=nn
srun gmx
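Putting this together, a complete REMD submission script for a cluster like this might look like the sketch below. Note that in GROMACS 2020 the old -multi flag has been replaced by -multidir, so each replica gets its own directory holding its topol.tpr. The binary name (mdrun_avx512_mpi), the replica count of 8, and the exchange interval of 1000 steps are assumptions for illustration; adapt them to your system and site.

```shell
#!/bin/bash
#SBATCH --ntasks=8            # one MPI rank per replica (assumed 8 replicas)
#SBATCH --cpus-per-task=4     # OpenMP threads per MPI rank
#SBATCH --time=24:00:00

# Directories equil0 … equil7 each contain a topol.tpr for one replica;
# srun launches one MPI rank per replica, replacing "mpirun -np 8".
srun mdrun_avx512_mpi -multidir equil{0..7} \
     -s topol.tpr \
     -ntomp $SLURM_CPUS_PER_TASK \
     -replex 1000
```

Whether -ntomp should be $SLURM_CPUS_PER_TASK or a multiple of it depends on whether the cluster counts hardware threads or physical cores; check your site documentation.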