Thanks for sharing. The additional logs haven't helped much (all fine there), but they made me look again at your original `make check` logs :)
```
[==========] Running 37 tests from 2 test suites.
[----------] Global test environment set-up.
[----------] 1 test from VirtualSiteVelocityTest
[ RUN ] VirtualSiteVelocityTest.ReferenceIsCorrect
[ OK ] VirtualSiteVelocityTest.ReferenceIsCorrect (0 ms)
[----------] 1 test from VirtualSiteVelocityTest (0 ms total)
[----------] 36 tests from VelocitiesConformToExpectations/VirtualSiteTest
[==========] Running 37 tests from 2 test suites.
[----------] Global test environment set-up.
[----------] 1 test from VirtualSiteVelocityTest
[ RUN ] VirtualSiteVelocityTest.ReferenceIsCorrect
[ OK ] VirtualSiteVelocityTest.ReferenceIsCorrect (0 ms)
[----------] 1 test from VirtualSiteVelocityTest (0 ms total)
```
So, we are seeing the output of two independent instances of the same unit test running at the same time, instead of a single instance using two processes.
This suggests some misconfiguration in your MPI installation that leads to a miscommunication between GROMACS and your MPI launcher: instead of the test launching two processes that communicate and work in tandem, it launches two processes, each of which thinks it is the only one, and so they overwrite each other's files.
This is relatively harmless: the tests are being run with the wrong launcher, but as long as you launch your production simulations correctly, there will be no ill effects.
Still, you might want to get the tests fixed. In that case, run `grep MPIEXEC_EXECUTABLE: CMakeCache.txt` in your build directory, then check that the executable it points to matches the MPI library GROMACS is linked against.

Your GROMACS is using MPICH 4.1 (based on the CMake log). So, if the command above gives, for example, `MPIEXEC_EXECUTABLE:FILEPATH=/usr/bin/mpiexec`, you can run `/usr/bin/mpiexec --version` and see whether it is MPICH (the other likely alternative is Open MPI). There are other subtle ways an MPI setup can be misconfigured, but a launcher/library mismatch is the most likely one.
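Putting both checks together (the `/usr/bin/mpiexec` path is only the example from above; use whatever path your `CMakeCache.txt` actually reports):

```bash
# Run from the GROMACS build directory
grep MPIEXEC_EXECUTABLE: CMakeCache.txt
# e.g. MPIEXEC_EXECUTABLE:FILEPATH=/usr/bin/mpiexec

# Ask that launcher which MPI implementation it comes from;
# its version banner should name either MPICH or Open MPI
/usr/bin/mpiexec --version
```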
Finally, I see that you're running on a laptop. Is there a particular reason you are building with `-DGMX_MPI=ON` instead of the default `-DGMX_MPI=OFF -DGMX_THREAD_MPI=ON`? The former is only needed when you want to scale across several compute nodes or run `-multidir`, but with one GPU you're usually better off just using thread-MPI. That would also sidestep all the MPI-related problems.
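As a rough sketch of the thread-MPI route (assuming a CUDA-capable GPU; adjust `-DGMX_GPU` and the thread counts to your hardware, and `-deffnm md` is just a placeholder for your run files):

```bash
# Configure the default thread-MPI build instead of linking a real MPI library
cmake .. -DGMX_MPI=OFF -DGMX_THREAD_MPI=ON -DGMX_GPU=CUDA
make -j$(nproc) && make check

# thread-MPI ranks are started by mdrun itself, so no mpiexec is involved:
gmx mdrun -ntmpi 1 -ntomp 8 -deffnm md
```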