Has anyone successfully used GROMACS 2023 with 5070 Ti GPUs?

GROMACS version: 2023.5
GROMACS modification: first tried plain GROMACS, then a modified gmxManageNvccConfig.cmake

I am trying to build an executable of GROMACS 2023.5 that can run on
NVIDIA GeForce RTX 5070 Ti cards, but so far all of my attempts have failed. If anyone has managed to do so, I would be interested in the settings.

Here’s what I have tried so far:

First, I used an executable built with CUDA 12.2 and GCC 12.2

When executing mdrun, I get:
WARNING: An error occurred while sanity checking device #0. An unhandled error from a previous CUDA operation was detected. CUDA error #209 (cudaErrorNoKernelImageForDevice): no kernel image is available for execution on the device.

The simulation then continues to run, but using only the CPU.
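That error is what one gets when the binary contains neither SASS nor JIT-able PTX usable on the device. As a sanity check (a diagnostic sketch; the library path is illustrative, adjust it to your install), one can compare the GPU’s compute capability against the architectures embedded in the GROMACS library:

# The RTX 5070 Ti should report compute capability 12.0
nvidia-smi --query-gpu=compute_cap --format=csv

# List the SASS (cubin) and PTX variants embedded in the binary
# (the path is illustrative)
cuobjdump --list-elf /usr/local/gromacs/lib/libgromacs.so
cuobjdump --list-ptx /usr/local/gromacs/lib/libgromacs.so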

Then I tried to fix JIT compilation

Believing the cause to be this known issue: “CUDA forward-compatibility broken”,

I went ahead and back-ported the suggested fix to gmxManageNvccConfig.cmake, lines 225-226:

Old:

gmx_add_nvcc_flag_if_supported(GMX_CUDA_NVCC_GENCODE_FLAGS NVCC_HAS_GENCODE_COMPUTE_53 --generate-code=arch=compute_53,code=sm_53)

gmx_add_nvcc_flag_if_supported(GMX_CUDA_NVCC_GENCODE_FLAGS NVCC_HAS_GENCODE_COMPUTE_80 --generate-code=arch=compute_80,code=sm_80)

Now:

gmx_add_nvcc_flag_if_supported(GMX_CUDA_NVCC_GENCODE_FLAGS NVCC_HAS_GENCODE_COMPUTE_53 --generate-code=arch=compute_53,code=compute_53)

gmx_add_nvcc_flag_if_supported(GMX_CUDA_NVCC_GENCODE_FLAGS NVCC_HAS_GENCODE_COMPUTE_80 --generate-code=arch=compute_80,code=compute_80)
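For context, my understanding of nvcc’s --generate-code semantics (hedged; see the nvcc manual for the authoritative description):

# code=sm_80 embeds only SASS for CC 8.0, which newer GPUs cannot run directly.
# code=compute_80 embeds PTX, which the driver can JIT-compile for newer GPUs.
nvcc --generate-code=arch=compute_80,code=sm_80      -c kernel.cu   # SASS only
nvcc --generate-code=arch=compute_80,code=compute_80 -c kernel.cu   # PTX only

So the patched flags trade native SASS for PTX that the driver should be able to JIT for Blackwell.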

Now, after recompiling, I get the following error at runtime:

1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
PP:0,PME:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
PME tasks will do all aspects on the GPU
Using 1 MPI thread
Using 16 OpenMP threads

starting mdrun ‘Great Red Oystrich Makes All Chemists Sane’
10000 steps,     20.0 ps.


Program:     gmx mdrun, version 2023.5-dev-2025-10-14-unknown
Source file: src/gromacs/gpu_utils/cudautils.cuh (line 292)
Function:    launchGpuKernel<NBAtomDataGpu, NBParamGpu, Nbnxm::gpu_plist, int>(void (*)(NBAtomDataGpu, NBParamGpu, Nbnxm::gpu_plist, int), const KernelLaunchConfig&, const DeviceStream&, CommandEvent*, const char*, const std::array<void*, 4>&)::<lambda()>

Assertion failed:
Condition: stat == cudaSuccess
GPU kernel (k_pruneonly) failed to launch: CUDA error #218
(cudaErrorInvalidPtx): a PTX JIT compilation failed.

The same happens with CUDA 12.8, by the way.

Third, I included Blackwell architectures as well

To avoid having to JIT-compile for Blackwell at all, I added CMake code
to build natively for Blackwell as well, using CUDA 12.8 and GCC 12.2.

In gmxManageNvccConfig.cmake, I added:

gmx_add_nvcc_flag_if_supported(GMX_CUDA_NVCC_GENCODE_FLAGS NVCC_HAS_GENCODE_COMPUTE_AND_SM_100 --generate-code=arch=compute_100,code=sm_100)
gmx_add_nvcc_flag_if_supported(GMX_CUDA_NVCC_GENCODE_FLAGS NVCC_HAS_GENCODE_COMPUTE_AND_SM_120 --generate-code=arch=compute_120,code=sm_120)

Now I get the following compile-time error:

ptxas error : Value of threads per SM for entry _Z23nbnxn_kernel_prune_cudaILb1EEv13NBAtomDataGpu10NBParamGpuN5Nbnxm9gpu_plistEi is out of range. .minnctapersm will be ignored
ptxas error : Value of threads per SM for entry _Z23nbnxn_kernel_prune_cudaILb0EEv13NBAtomDataGpu10NBParamGpuN5Nbnxm9gpu_plistEi is out of range. .minnctapersm will be ignored
ptxas fatal : Ptx assembly aborted due to errors
CMake Error at libgromacs_generated_nbnxm_cuda_kernel_pruneonly.cu.o.Release.cmake:280 (message):
Error generating file
/local/ckutzne/git-gromacs-2023/build/threads/src/gromacs/CMakeFiles/libgromacs.dir/nbnxm/cuda/./libgromacs_generated_nbnxm_cuda_kernel_pruneonly.cu.o
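In case it helps with the diagnosis: as far as I can tell, .minnctapersm is the PTX directive that nvcc generates from the second argument of CUDA’s __launch_bounds__() qualifier, and ptxas rejects it when maxThreadsPerBlock × minBlocksPerMultiprocessor exceeds the SM’s thread limit on the target architecture (1536 on CC 12.0 versus 2048 on CC 8.0). That would also explain the PTX JIT failure above, since the driver compiles the same directive at run time. A minimal repro sketch with illustrative values (not GROMACS’s actual launch bounds):

cat > lb_test.cu <<'EOF'
// Requests 1024 threads/block x 2 resident blocks = 2048 threads per SM,
// which exceeds the 1536-thread limit of CC 12.0 (illustrative values)
__global__ void __launch_bounds__(1024, 2) k() {}
int main() { return 0; }
EOF
nvcc --generate-code=arch=compute_120,code=sm_120 lb_test.cu -o lb_test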

Am I missing something?

Any help appreciated.

Hello!

No… I managed to get it working with a 5070 Ti, but only with the latest 2025 release. As far as I know, the 2023 series is not compatible with the RTX 50 series, and the same is true for 2024 (please, somebody correct me if I am wrong). To make it run properly you will also need, or at least that was my case, driver version 580.82.07 if running the software under Ubuntu 24.04, plus the latest CUDA Toolkit (13). I downloaded everything from here (the link says CUDA Toolkit 12.1, but it is actually 13):

and I used the deb (network) option.
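For reference, the deb (network) route on Ubuntu 24.04 was roughly the following (a sketch from memory; check NVIDIA’s download page for the exact, current commands and package names):

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda-toolkit-13-0   # the toolkit itself
sudo apt-get -y install nvidia-open         # 580-series open kernel module driver
nvidia-smi      # verify the driver is active
nvcc --version  # verify the toolkit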

If you follow the instructions there, install the driver as well and make sure it is active. Then it is just a matter of compiling the software. I documented the process I followed, so let me know if you need additional information. My machine currently runs simulations on an RTX 4090 and an RTX 5070 Ti simultaneously, one simulation per card.

Cheers,

Hi,

As @iiciieii noted, GROMACS 2023 does not support Blackwell GPUs.

If you cannot use a newer GROMACS version and really must use 2023.x (which is no longer maintained), you can try applying the patch manually.
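Applying it would look something like this (a sketch; ‘blackwell-fix.patch’ is a hypothetical filename standing in for the exported fix):

cd gromacs-2023.5
# 'blackwell-fix.patch' is a placeholder for the actual exported patch file
git apply blackwell-fix.patch    # or: patch -p1 < blackwell-fix.patch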

GROMACS 2024.6 should work fine: GROMACS 2024.6 release notes - GROMACS 2024.6 documentation

Hi,

Many thanks for pointing me towards the patch, which solved the problem! I can now run GROMACS 2023 on all kinds of GPUs, including Blackwells.

To explain why this is important:

We have a rather heterogeneous cluster of various nodes with many different GPU models, and many GROMACS jobs from different projects and users are running 24/7 on these nodes.

Users typically select a recent GROMACS version at the outset of their project, but then understandably stick with it for the lifetime of the project, which in some cases can last several years. Therefore, we also need to keep older versions of GROMACS on the cluster.

In this scenario, where a variety of GROMACS versions runs on a zoo of GPU models, it is advantageous if each job can run on any of the nodes: users then do not need to exclude specific nodes in their job files, and throughput is higher, as jobs can start on the next free node regardless of GPU type.

Let’s see if I can get GROMACS 2022 running on the Blackwells as well :)