GROMACS and CUDA version mismatch leads to job failure on cluster

I am running simulations on a cluster. The cluster has the following versions installed:

Gromacs_gpu 2025.3

CUDA/12.8
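A quick way to check for a build/module mismatch is `gmx_mpi --version`, which reports the CUDA versions the GROMACS build sees. A minimal sketch (the sample text below imitates typical output so the filter is demonstrable; exact lines may differ by build):

```shell
# On the cluster you would run:
#   gmx_mpi --version | grep -Ei 'gpu support|cuda'
# Sample text imitating typical gmx_mpi --version output:
sample='GROMACS version:    2025.3
GPU support:        CUDA
CUDA driver:        12.80
CUDA runtime:       12.80'
# Keep only the GPU/CUDA-related lines to compare against the loaded module:
echo "$sample" | grep -Ei 'gpu support|cuda'
```

If the reported CUDA runtime disagrees with the loaded CUDA/12.8 module, that mismatch is worth ruling out first.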

While running the simulation, I got an error and the job crashed at the NVT step.

Here are the commands:

gmx_mpi grompp -f em.mdp -c ions.gro -p topol.top -o em.tpr -maxwarn 10
gmx_mpi mdrun -v -deffnm em
gmx_mpi grompp -f nvt.mdp -c em.gro -r em.gro -p topol.top -n index.ndx -o nvt.tpr -maxwarn 10
gmx_mpi mdrun -v -deffnm nvt
gmx_mpi grompp -f npt.mdp -c nvt.gro -r nvt.gro -t nvt.cpt -p topol.top -n index.ndx -o npt.tpr -maxwarn 10
gmx_mpi mdrun -v -deffnm npt
gmx_mpi grompp -f md.mdp -c npt.gro -t npt.cpt -p topol.top -n index.ndx -o md_0_1.tpr -maxwarn 10

I got this error:

=================================
/var/spool/slurm/slurmd/job00232/slurm_script: line 16: 50292 Floating point exception(core dumped) gmx_mpi mdrun -v -deffnm nvt
:-) GROMACS - gmx grompp, 2025.3 (-:

Executable: /apps/codes/openmpi/gromacs/gpu/2025.3/bin/gmx_mpi
Data prefix: /apps/codes/openmpi/gromacs/gpu/2025.3
Working dir: /home/user01/nainsy/S2_CAG/Simulations/CAG40/MD2/gromacs

This error indicates a technical problem.

Please give me some suggestions.

Thank you.

Hi! It’s not easy to see what’s happening from what you posted. The message indicates that a floating-point exception (FPE) occurred during mdrun, but the next lines are the start of grompp output. Can you show the full output of the mdrun command that failed?
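One way to capture that output is to rerun just the failing step with its console output saved to a file. A sketch of the redirection pattern (a stand-in `echo` is used here so the pattern is demonstrable; on the cluster you would use the real mdrun command shown in the comment):

```shell
# On the cluster, the failing step would be rerun as:
#   gmx_mpi mdrun -v -deffnm nvt 2>&1 | tee nvt_mdrun.out
# Stand-in command demonstrating the same stdout+stderr capture:
echo "mdrun console output would appear here" 2>&1 | tee nvt_mdrun.out
```

GROMACS also writes a detailed log to `nvt.log` (from `-deffnm nvt`); the tail of that file together with the captured console output usually shows where the run stopped.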