Problems with GROMACS on new hardware

GROMACS version: 2022.3
GROMACS modification: Yes (OpenMPI 4.1.3 | crystal module | PLUMED 2.8.1 | CUDA 11.8)

I am having a bit of a struggle getting GROMACS to work on some new hardware we got in. I should admit up front that I am a sysadmin and do not use GROMACS myself. That said, I do not believe the issue is with GROMACS itself; rather, I suspect I am missing something on the system, and I am hoping someone can point me in the right direction. The error occurs when trying to use the GPUs. Whether I run in a container or in a module environment, I see:

Range checking error (possible bug):
Device ID 0 did not correspond to any of the 0 detected device(s)

The command I am using:

apptainer run --nv gromacs2022.3_plumed_gpu.sif gmx_mpi mdrun -s quench -o quench -e quench -c quench -v -pin on -gpu_id 0123 -nb gpu -bonded gpu -ntomp 4 -pinoffset 0 -pinstride 1
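For anyone skimming the error, here is an illustrative sketch (not GROMACS code, just my reading of the message) of why `-gpu_id 0123` trips the range check: GROMACS checks each requested device ID against the number of devices the CUDA runtime reports, and inside the container that count appears to be 0, so even ID 0 is out of range.

```shell
# Illustrative only: mimic the range check that produces the error above.
# "-gpu_id 0123" requests device IDs 0, 1, 2, and 3; "detected" stands in
# for what the CUDA runtime reports inside the container (assumed 0 here).
detected=0
for id in 0 1 2 3; do
  if [ "$id" -ge "$detected" ]; then
    echo "Device ID $id did not correspond to any of the $detected detected device(s)"
    break
  fi
done
```

So the question reduces to why the CUDA runtime inside the container sees zero devices even though the host's nvidia-smi sees all of them.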

The system is Rocky Linux release 8.7.

nvidia-smi shows all GPUs:
NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8

+-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A5000    Off  | 00000000:01:00.0 Off |                  Off |
| 30%   35C    P0    61W / 230W |      0MiB / 24564MiB |      0%      Default |
|                               |                      |                  N/A |