RX 6800 (RX 6800 XT) or RTX 3080 (RTX 3090)?

GROMACS version: 2020.2
GROMACS modification: Yes/No

Are there any test results of RX 6800 XT vs RTX 3080 for Gromacs?


We do not have comparisons yet. Also note that while the raw performance of the RX 6800 / 6800 XT is very impressive, a couple of factors will limit the effective performance GROMACS can currently extract; e.g., AMD OpenCL support has some feature limitations (GPU-resident runs are not supported), and some of the AMD libraries are lacking (e.g., there is no fast OpenCL FFT library).

As the recently released AMD GPUs are getting competitive again and could in principle match or surpass the performance of NVIDIA cards, and given the major AMD GPU-based academic HPC installations that have been announced, we plan to improve AMD GPU support over the next year.


Hi, are there any benchmarks anyone can provide for the RTX 3080 / RTX 3080 Ti? If possible, a comparison with the GTX 1080 / GTX 1080 Ti and Tesla P100/V100/RTX would be nice to have too.

Hi, I upgraded from a Titan Xp to an RTX 3090. It increased the simulation speed by about 65% for a system of 50K atoms.

Thank you for your response. In theory, going by the number of theoretical FP32 FLOPS, the speedup should be closer to 3x compared to the Titan Xp. Thanks again; that helps me decide whether to upgrade or stick with Pascal cards.
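For anyone curious where the "closer to 3x" figure comes from, here is a quick sketch using approximate spec-sheet FP32 throughput numbers (boost-clock figures published by NVIDIA; treat them as ballpark values, not measurements):

```python
# Approximate published FP32 peak throughput (TFLOPS) for each card.
titan_xp_tflops = 12.1
rtx_3090_tflops = 35.6

# Naive speedup estimate from spec-sheet FLOPS alone.
theoretical_speedup = rtx_3090_tflops / titan_xp_tflops  # ~2.9x

# Speedup actually reported above for a 50K-atom system.
observed_speedup = 1.65

print(f"theoretical: {theoretical_speedup:.1f}x, observed: {observed_speedup:.2f}x")
```

The gap between the ~2.9x spec-sheet ratio and the observed 1.65x is exactly what the next reply addresses.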

Note that theoretical FLOPS often do not reflect the real-world performance gain in applications (unless the respective code is massively FLOP-bound, or the gains come purely from more/faster cores).
While some of the GROMACS kernels are compute-bound, none are strongly FP32-bound, and some are memory-bound. Hence, architectural changes like those in consumer Ampere GPUs will not translate into a similar factor in real-world gains. Conversely, other architectural changes can have a large impact; a good example is the Turing architecture, which had only a modest peak-FLOP improvement over Pascal but other useful features with significant performance impact; see https://arxiv.org/pdf/1903.05918.pdf#figure.caption.4.
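One way to see why memory-bound kernels don't benefit from a bigger FP32 peak is the standard roofline model (not something specific to GROMACS; hardware numbers below are approximate spec-sheet values for an RTX 3090, used purely for illustration):

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Roofline model: throughput is capped either by peak compute
    or by memory bandwidth times arithmetic intensity."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# RTX 3090, approximately: ~35,600 GFLOPS FP32 peak, ~936 GB/s bandwidth.
peak, bw = 35600.0, 936.0

# A low-intensity (memory-bound) kernel barely touches the FP32 peak,
# so doubling the peak would change nothing; a high-intensity
# (compute-bound) kernel hits the compute ceiling instead.
for intensity in (2.0, 100.0):
    print(f"{intensity:6.1f} FLOP/byte -> {attainable_gflops(peak, bw, intensity):8.1f} GFLOPS")
```

In this toy picture, the consumer-Ampere FP32 doubling raises only the horizontal compute ceiling, which is why kernels sitting under the bandwidth slope see little of it.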