Problem installing GROMACS on Intel Arc A770 GPU

GROMACS version: 2024.3

I am facing a problem compiling GROMACS for an Intel Arc A770 graphics card… is the graphics card not supported by GROMACS?

At one point it installed properly, but when I checked gmx --version, GPU support was disabled. Actually, I am new to Linux and also new to running MD simulations…

Hi!

The A770 is supported by GROMACS 2024.

When gmx -version reports that GPU support is disabled, that means that GROMACS was compiled without GPU support. Have you followed the installation instructions?
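For reference, on an Intel GPU the key point is to build with the SYCL backend using the oneAPI compilers. A minimal sketch is below; the oneAPI path, install prefix, and tarball name are assumptions for a typical setup, so adapt them to your machine:

```bash
# Load the oneAPI environment so the icx/icpx compilers are available
source /opt/intel/oneapi/setvars.sh

tar xf gromacs-2024.3.tar.gz
cd gromacs-2024.3 && mkdir build && cd build

# GMX_GPU=SYCL enables Intel GPU support; it requires the oneAPI DPC++ compilers
cmake .. \
    -DCMAKE_C_COMPILER=icx \
    -DCMAKE_CXX_COMPILER=icpx \
    -DGMX_GPU=SYCL \
    -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-2024.3

make -j"$(nproc)" && make install

# Afterwards, `gmx --version` should report SYCL GPU support
source "$HOME/gromacs-2024.3/bin/GMXRC"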

Thanks for replying. I have compiled it properly.
Can you please tell me why it is showing this:

NOTE: DLB can now turn on, when beneficial
^C

Received the INT signal, stopping within 100 steps

Dynamic load balancing report:
DLB was off during the run due to low measured imbalance.
Average load imbalance: 1.0%.
The balanceable part of the MD step is 51%, load imbalance is computed from this.
Part of the total run time spent waiting due to load imbalance: 0.5%.

           Core t (s)   Wall t (s)        (%)
   Time:     5645.006      282.251     2000.0
             (ns/day)    (hour/ns)

Performance:       17.020        1.410

Previously it showed:

On host Ubuntu 2 GPUs selected for this run.
Mapping of GPU IDs to the 4 GPU tasks in the 4 ranks on this node:
PP:0,PP:0,PP:1,PP:1

I am using an i5-13500 processor, which has 14 cores and 20 threads,
and an Intel Arc A770 16 GB GPU. Isn't 17 ns/day very slow?

This depends a lot on the size of the system (a 1-million-atom system would be much slower than a 1-thousand-atom one) and on how you launch GROMACS.

Sharing the full md.log would help: it contains the build and launch information at the top and performance counters at the bottom.

That said, here are some general suggestions:

  1. Do you have PME in your system? With multiple ranks, the PME calculation can fall back to the CPU, which is likely to kill performance. Try adding the -pme gpu flag to your mdrun call. It will also limit you to only one GPU, but that is still better than running PME on the CPU.
  2. Running multiple ranks per GPU is rarely useful. With two GPUs, it could be better to use two ranks (especially if you do not have PME). So, try using -ntmpi 2 to launch only two ranks. If you have PME, use -ntmpi 2 -pme gpu -npme 1.
  3. However, if you are running without GPU-aware MPI (which you are, unless you have explicitly set it up), you may be better off using only one GPU per run: the overhead of coordinating work between two GPUs can be significant. So -ntmpi 1 -pme auto could be best. (All three variants are sketched after this list.)
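For concreteness, here is a rough sketch of the three launch variants, assuming your run files are named MD.* as in a -deffnm MD run (adjust the names to your own setup):

```bash
# (1) Everything on one GPU in a single thread-MPI rank -- often the simplest and fastest:
gmx mdrun -deffnm MD -ntmpi 1 -nb gpu -pme gpu

# (2) Two thread-MPI ranks (one per GPU) with a dedicated GPU PME rank:
gmx mdrun -deffnm MD -ntmpi 2 -nb gpu -pme gpu -npme 1

# (3) Two ranks without a separate PME rank (e.g. if your system has no PME):
gmx mdrun -deffnm MD -ntmpi 2 -nb gpu
```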

If you want to use two GPUs efficiently, you need to build GROMACS with -DGMX_MPI=ON (in addition to the other CMake flags). Then launch with GMX_ENABLE_DIRECT_GPU_COMM=1 I_MPI_OFFLOAD=1 ONEAPI_DEVICE_SELECTOR=level_zero:gpu mpirun -np 2 gmx_mpi mdrun ... instead of gmx mdrun, as sketched below.
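Put together, the MPI build and launch would look roughly like this; the file names and the rank/PME split are assumptions for illustration:

```bash
# Reconfigure with a real MPI library in addition to the flags used before:
cmake .. -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGMX_GPU=SYCL -DGMX_MPI=ON
make -j"$(nproc)" && make install

# Launch one rank per GPU with direct GPU-GPU communication enabled:
GMX_ENABLE_DIRECT_GPU_COMM=1 I_MPI_OFFLOAD=1 ONEAPI_DEVICE_SELECTOR=level_zero:gpu \
    mpirun -np 2 gmx_mpi mdrun -deffnm MD -nb gpu -pme gpu -npme 1
```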

Other flags worth testing, in combination with any of the above, are -nstlist 200 and -bonded gpu; an example is given below. The "Getting good performance from mdrun" page in the GROMACS 2024.3 documentation could offer some more suggestions.
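For example, a single-GPU run combining these extra flags might look like this (again assuming -deffnm MD):

```bash
gmx mdrun -deffnm MD -ntmpi 1 -nb gpu -pme gpu -bonded gpu -nstlist 200
```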

Thanks for helping.
I have tried what you said, and I also tried some other commands:

gmx mdrun -deffnm MD -ntmpi 1 -nb gpu -pme gpu -bonded gpu -update gpu

When I ran with this command I got a big speedup, about 140 ns/day,
but it shows some warnings… I don't know what they are…
Can you please tell me whether these warnings cause any problems for the result analysis?

FFT WARNING: INPUT_STRIDES and OUTPUT_STRIDES are deprecated: please use FWD_STRIDES and BWD_STRIDES, instead.
This is the warning.

No worries, that’s because you’re using a new Intel oneAPI version and they changed a few things internally. This warning is totally harmless in terms of correctness and performance; just annoying.

Wow!
That's great then.

I am very grateful to you for replying and helping so quickly.
I’m also enjoying using GROMACS.
