Query: Regarding using multiple nodes' GPUs or CPUs to run the MD production step in GROMACS

GROMACS version: 2021
GROMACS modification: Yes/No
Hello everyone, I am using the script below to run the MD production step. I want to distribute a single GROMACS MD job across the CPUs or GPUs of multiple nodes, but I think the script is running many independent jobs instead. I am a bit confused, so I would appreciate any guidance. Thank you!

#!/bin/bash
#SBATCH --job-name=kfdv_md_gpu
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=1 # 1 MPI rank per node
#SBATCH --ntasks=8 # total ranks = nodes × ranks per node
#SBATCH --cpus-per-task=40 # OpenMP threads per rank
#SBATCH --gpus-per-node=1
#SBATCH --partition=gpu
#SBATCH --time=1-00:00:00
#SBATCH --output=md_prod_gpu_%j.out
#SBATCH --error=md_prod_gpu_%j.err

echo "Job started on $(date)"
echo "Nodes: $SLURM_NNODES  Tasks: $SLURM_NTASKS"

srun -n $SLURM_NTASKS gmx_mpi mdrun -deffnm md_200_kfdv -ntomp $SLURM_CPUS_PER_TASK

Your script looks ok to me. What makes you think it’s running many independent jobs? If the script starts running, it should produce a number of files, including a md_200_kfdv.log file. This file contains quite a bit of useful information. If you scroll down past the author list and version info, you’ll find a summary of the hardware GROMACS was able to detect for this run. If you scroll further down past the mdp options, it will also report exactly what work it’s doing on how many MPI ranks and devices. If everything is going well, you should see something like

Using 8 MPI ranks
Using 40 OpenMP threads per MPI rank
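For a quick check once the run has produced some output, something like the grep below should pull those lines out of the log (assuming the log file is named md_200_kfdv.log, matching your -deffnm):

# Print the rank/thread summary lines from the mdrun log
grep -E "MPI ranks|OpenMP threads" md_200_kfdv.log

If that reports 8 MPI ranks with 40 OpenMP threads each, the run is one job spread across your 8 nodes rather than many independent ones.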