GROMACS hangs when run from python multiprocess

From what I’ve understood, the blog post is meant for HPC and queue-based systems (Maximizing GROMACS Throughput with Multiple Simulations per GPU Using MPS and MIG | NVIDIA Technical Blog), while I’m running everything locally.
Also, being new to all of this, it’s too much for me to fully understand right now.

I am trying to tweak some of the options described in https://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html, but they are very system-specific, and I can’t derive a “general rule” out of them that works for “general” systems, especially considering how the scripts in my program are written (very basic code that runs multiple commands).

On top of that, parallelizing in Python via the multiprocessing module makes it a little more complicated than it already is.
All I can do for now is to try to understand it and test options until I find a middle ground.
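For what it’s worth, a minimal sketch of that kind of setup — launching several mdrun processes from Python — might look like the following. The directory names, thread counts, and GPU ids are placeholder assumptions, not values from this thread; the option names (-deffnm, -ntomp, -pin, -pinoffset, -gpu_id) are real mdrun flags, but the numbers must be tuned per machine:

```python
import subprocess
from multiprocessing import Pool

def mdrun_command(ntomp, pinoffset, gpu_id):
    """Build one mdrun command line. Flag names are real mdrun
    options; the values here are placeholders to tune."""
    return ["gmx", "mdrun", "-deffnm", "topol",
            "-ntomp", str(ntomp),
            "-pin", "on", "-pinoffset", str(pinoffset),
            "-gpu_id", str(gpu_id)]

def run_in(job):
    """Run one simulation inside its own working directory."""
    workdir, cmd = job
    return subprocess.run(cmd, cwd=workdir, check=True).returncode

# Two 4-thread runs sharing GPU 0, pinned to disjoint core sets
# (cores 0-3 and 4-7). "sim0"/"sim1" are hypothetical directories.
jobs = [("sim0", mdrun_command(4, 0, 0)),
        ("sim1", mdrun_command(4, 4, 0))]

def launch_all(jobs):
    """Run all simulations concurrently, one worker process each."""
    with Pool(processes=len(jobs)) as pool:
        return pool.map(run_in, jobs)

# launch_all(jobs)  # uncomment on a machine with GROMACS installed
```

The key point is that each concurrent mdrun gets its own pinoffset so the runs never fight over the same cores.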

Thanks!

HPC or workstation, the same applies, as explained above. Think of a set of CPU cores plus a GPU as a group of resources you need to assign to each mdrun (or rank). For the sets of CPU cores you want to avoid the same cores being used by multiple mdruns, while for GPUs it can be beneficial to share a GPU across multiple simulations.

I’d recommend understanding the basics of resource assignment; the actual numbers will then become obvious.
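That assignment logic can be sketched in a few lines — this is only an illustration under assumed numbers (core and GPU counts are placeholders), not a GROMACS API: give each run a disjoint block of cores, and hand out GPU ids round-robin so a GPU may be shared:

```python
def assign_resources(n_runs, total_cores, n_gpus):
    """Plan one resource group per mdrun: a disjoint block of CPU
    cores (cores must never overlap between runs) and a GPU id
    (GPUs may be shared by several runs)."""
    per_run = total_cores // n_runs
    plans = []
    for i in range(n_runs):
        plans.append({
            "ntomp": per_run,           # OpenMP threads for this run
            "pinoffset": i * per_run,   # first core of its block
            "gpu_id": i % n_gpus,       # round-robin GPU sharing
        })
    return plans

# e.g. 4 runs on a 16-core machine with 2 GPUs:
# assign_resources(4, 16, 2)
# -> core blocks starting at 0, 4, 8, 12; GPU ids 0, 1, 0, 1
```

Each plan maps directly onto mdrun’s -ntomp, -pinoffset, and -gpu_id options.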

Cheers,
Szilárd

Experiment with -multidir. If appropriate for your scenario, explore GROMACS’ -multidir option to run multiple simulations from different input/output directories, and make sure you understand the proper setup for -multidir so it utilizes resources effectively.

Evaluate the parallelization impact. Keep in mind that using Python’s multiprocessing module introduces parallelization overhead. Monitor the performance impact and compare it with running the simulations individually to determine whether the trade-off is acceptable for your specific use case.
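As a rough sketch of the -multidir route (directory names and rank counts here are placeholder assumptions; note that -multidir requires an MPI-enabled build, typically invoked as gmx_mpi under mpirun), the launch command could be assembled like this:

```python
import subprocess  # used by the commented-out launch line below

def multidir_command(dirs, ranks_per_sim=1):
    """Build an mpirun/gmx_mpi command running one simulation per
    directory; total ranks = ranks_per_sim * number of directories."""
    nranks = ranks_per_sim * len(dirs)
    return (["mpirun", "-np", str(nranks),
             "gmx_mpi", "mdrun", "-multidir"] + list(dirs))

cmd = multidir_command(["sim0", "sim1", "sim2", "sim3"])
# subprocess.run(cmd, check=True)  # uncomment where gmx_mpi exists
```

With -multidir, GROMACS itself distributes the ranks across the simulations, so the Python side stays a single subprocess call instead of a pool of workers.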