Errors while running CTest (new post: 31/07/2025, detailed logs attached)

GROMACS version: gromacs-2025.2
GROMACS modification: Yes/No

I am trying to install GROMACS on my laptop (Intel i7, 16 GB RAM, GTX 1050).

The CMake configuration is:

```
CMAKE_PREFIX_PATH=/usr/local/cuda/bin/ cmake .. -DGMX_GPU=CUDA -DGMX_MPI=ON -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_CUDA_ARCHITECTURES="native" -DCMAKE_C_COMPILER=gcc-11 -DCMAKE_CXX_COMPILER=g++-11 -DCMAKE_C_FLAGS="-fno-lto" -DCMAKE_CXX_FLAGS="-fno-lto" -DCMAKE_SHARED_LINKER_FLAGS=-fno-lto -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON
```
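For reference (not part of the command above, just a way to double-check what was configured): the values that CMake actually recorded can be inspected in the build directory's cache, for example:

```
# Run from the build directory; shows what the configure step recorded
grep -E 'CMAKE_CUDA_ARCHITECTURES|GMX_MPI|GMX_GPU' CMakeCache.txt
```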

These logs are attached:

cmake.log (13.2 KB)

make_check.log (2.3 MB)

make.log (4.4 MB)

What am I doing wrong?

Best regards,
David

Hi again,

The only type of error I see in the logs is:

-------------------------------------------------------
Program:     gmxapi-mpi-test, version 2025.2
Source file: src/gromacs/mdlib/mdoutf.cpp (line 494)
Function:    void write_checkpoint(const char*, gmx_bool, FILE*, const t_commrec*, int*, int, IntegrationAlgorithm, int, gmx_bool, LambdaWeightCalculation, int64_t, double, t_state*, ObservablesHistory*, const gmx::MDModulesNotifiers&, gmx::WriteCheckpointDataHolder*, bool, MPI_Comm)

System I/O error:
Cannot rename checkpoint file; maybe you are out of disk space?

For more information and tips for troubleshooting, please check the GROMACS
website at https://manual.gromacs.org/current/user-guide/run-time-errors.html
-------------------------------------------------------

So, the first question is: are you out of disk space (in /home and /tmp)?
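For example, a minimal check (these are just the usual default paths, adjust if your build tree lives elsewhere):

```
# Free space on the filesystems holding your home directory and /tmp
df -h /home /tmp
```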

If not, checking things like permissions on the build directory (especially if you have run any of the cmake or make commands with sudo) and looking into dmesg for any kind of filesystem or permission errors would be the next steps. Also, please share what OS and filesystem(s) you’re using (and any other information of that kind that you think could be relevant).
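Something along these lines (the build directory path below is only a placeholder, use your own):

```
# Check ownership/permissions of the build tree (replace the path with yours)
ls -ld ~/gromacs-2025.2/build
# Look for filesystem or permission-related kernel messages
sudo dmesg | grep -iE 'ext4|i/o error|read-only'
```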

dmesg.log (103.4 KB)

df_h.log (436 Bytes)

lshw.log (29.7 KB)

nvidia-smi.log (1.7 KB)

Hi,
I have checked the disk space and it looks OK (df_h.log).
I run the cmake, make and make check commands without sudo.
My OS is Ubuntu 24.04.2 and I am using ext4 as the filesystem.
RAM = 16 GB.
I hope this information is useful.
Thank you,
David

Thanks for sharing. The additional logs haven’t helped much (everything looks fine there), but they made me look again at your original make check logs :)

[==========] Running 37 tests from 2 test suites.
[----------] Global test environment set-up.
[----------] 1 test from VirtualSiteVelocityTest
[ RUN      ] VirtualSiteVelocityTest.ReferenceIsCorrect
[       OK ] VirtualSiteVelocityTest.ReferenceIsCorrect (0 ms)
[----------] 1 test from VirtualSiteVelocityTest (0 ms total)

[----------] 36 tests from VelocitiesConformToExpectations/VirtualSiteTest
[==========] Running 37 tests from 2 test suites.
[----------] Global test environment set-up.
[----------] 1 test from VirtualSiteVelocityTest
[ RUN      ] VirtualSiteVelocityTest.ReferenceIsCorrect
[       OK ] VirtualSiteVelocityTest.ReferenceIsCorrect (0 ms)
[----------] 1 test from VirtualSiteVelocityTest (0 ms total)

So we see output from two instances of the same unit test running at the same time, instead of a single instance using two processes.

This suggests that you have some misconfiguration in your MPI installation, which leads to miscommunication between GROMACS and your MPI launcher. Then, instead of the test launching two processes that communicate and work in tandem, it launches two processes, each of which thinks that it’s the only one, and so they overwrite each other’s files.

This is relatively harmless: the tests are using an incorrect launcher, but if you launch your production simulations correctly, there will not be any ill effects.

Still, you might want to get the tests fixed. In that case, run `grep MPIEXEC_EXECUTABLE: CMakeCache.txt` in your build directory, then check that the executable matches the MPI library GROMACS is linked with.

Your GROMACS is using MPICH 4.1 (based on the CMake log). So, if the command above gives, for example, `MPIEXEC_EXECUTABLE:FILEPATH=/usr/bin/mpiexec`, you can run `/usr/bin/mpiexec --version` and see whether it is MPICH (the other likely alternative is Open MPI). There are other subtle differences in MPI configuration, but this is the most likely one.
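Put together, the check looks roughly like this (the /usr/bin/mpiexec path is only an example; use whatever path the grep reports):

```
# In the GROMACS build directory: which launcher did CMake record?
grep MPIEXEC_EXECUTABLE: CMakeCache.txt
# e.g. MPIEXEC_EXECUTABLE:FILEPATH=/usr/bin/mpiexec

# Ask that launcher which MPI implementation it belongs to;
# it should report MPICH, matching the library GROMACS was built against
/usr/bin/mpiexec --version
```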

Finally, I see that you’re running on a laptop. Is there a reason you are building with -DGMX_MPI=ON instead of the default -DGMX_MPI=OFF -DGMX_THREAD_MPI=ON? The former is required when you want to scale to several compute nodes or to run -multidir, but with one GPU you’re usually better off just using threadMPI. This would also avoid all the MPI-related problems.
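For illustration, a configure line using the threadMPI default could look roughly like this (keeping your CUDA and FFTW options; this is a sketch, not a command tested on your machine):

```
CMAKE_PREFIX_PATH=/usr/local/cuda/bin/ cmake .. \
    -DGMX_GPU=CUDA \
    -DGMX_MPI=OFF \
    -DGMX_THREAD_MPI=ON \
    -DGMX_BUILD_OWN_FFTW=ON
```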

Hi!
I ran this cmake command:
```
CMAKE_PREFIX_PATH=/usr/local/cuda/bin/ cmake .. -DGMX_GPU=CUDA -DGMX_MPI=OFF -DGMX_BUILD_OWN_FFTW=ON
```
It worked properly, and I have now installed GROMACS on my laptop with CUDA!
Thank you so much for your time and patience.
Best regards,
David
