GMX_MPI running

Hi everyone!
I have some problems running gmx_mpi on more than one node. I have 4 nodes, each with 32 cores and 64 hardware threads, but when I use a Slurm script with "mpirun gmx_mpi …", GROMACS starts the job on all 4 nodes yet uses only 32 cores in total, even though the 4 nodes together have 32x4 cores and 64x4 threads.
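Just to spell out the arithmetic I am aiming for (based on what I know about our nodes, not on what mdrun reports):

# per node:  32 physical cores, 64 hardware threads (2 threads per core)
# 4 nodes:   4 x 32 = 128 cores, 4 x 64 = 256 hardware threads
# but the run only seems to use 32 cores in total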

Thanks in advance to everyone for replying!

OpenMPI version 4.0.2

In the .log file I see that GROMACS is using 4 nodes, but only 16 threads, which looks weird… and it is starting 64 MPI ranks by default. Can I use all the cores and threads of 4, or at least 2, nodes? Even 2 nodes would give 128 threads.
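As far as I understand, the total thread count should be the number of MPI ranks multiplied by the OpenMP threads per rank, so for 2 nodes I imagine something like the following (only a sketch of the layout I am hoping for; the split into 8 ranks x 16 threads is my own guess, other splits should give the same total):

# 2 nodes, 4 ranks per node = 8 MPI ranks, 16 OpenMP threads per rank
# 8 ranks x 16 threads = 128 threads in total
export OMP_NUM_THREADS=16
mpirun -np 8 gmx_mpi mdrun -deffnm em -ntomp 16 -v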

Slurm script that I used to run the job:

#!/bin/bash -l

#SBATCH --job-name=GROMACS
#SBATCH --time=2400:00:00
#SBATCH --partition=general
##SBATCH --partition=fatMemNode2Tb
##SBATCH --partition=gpunode
#SBATCH -e output_err_%j
#SBATCH -o output_%j
##SBATCH --ntasks=8
#SBATCH --nodes=4
#SBATCH -n 64

export PATH=/home/vddayneko/soft/gromacs2023.3_mpi/bin:$PATH

##export GMX_OPENMP_MAX_THREADS=256

##export OMP_NUM_THREADS=32

gmx_mpi grompp -f em.mdp -c complex_solv_ions.gro -p topol.top -o em.tpr

sleep 5

mpirun gmx_mpi mdrun -deffnm em -v

sleep 5

By the way, I have another GROMACS build with OpenMP support enabled (GMX_OPENMP_MAX_THREADS = 256).
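If it matters, this is the kind of Slurm layout I was going to try next with that OpenMP-enabled build (just a sketch, assuming mpirun picks up the Slurm allocation; the per-node split is again my guess):

#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4          # 4 MPI ranks per node
#SBATCH --cpus-per-task=16           # 16 OpenMP threads per rank

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
mpirun gmx_mpi mdrun -deffnm em -ntomp $SLURM_CPUS_PER_TASK -v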