I’m currently working on my thesis about checkpointing and restoring containers. As part of this, I’m analyzing how well it works with different amounts of data in RAM.
To evaluate this under realistic conditions, I’m looking for open-source applications that consume a lot of memory and GROMACS seems like a great candidate.
However, I should mention that I have almost no background in molecular dynamics or GROMACS itself. For my purposes, it doesn’t really matter what kind of simulation is being run. The important part is that it uses a realistic and memory-intensive workload.
I’ve already tried some tutorials and tested the benchmarks from the Max Planck Institute (“A free GROMACS benchmark set”), but they don’t stress the memory enough for what I need.
Does anyone happen to have (or know where I could find) any input files or benchmark systems that require a large amount of RAM during runtime? I have up to 100 GB of RAM and 16 CPU cores available.
As you’ve noticed, GROMACS often has a surprisingly small memory footprint for its computational intensity.
Memory usage is primarily determined by the number of atoms in the system. Most benchmarks out there (and, indeed, most use cases) have at most a couple of million atoms, and their memory usage is far below the 100 GB you want.
If benchPEP (12 million atoms) from the Max Planck set is too small, you can try the grappa-46M.tar.bz2 benchmark. After downloading and extracting the archive, use `gmx grompp -f rf.mdp -o grappa-46M-rf.tpr` to prepare the binary input for the simulation. You can use `pme.mdp` instead for an even larger memory footprint, but I suspect that would be too much.
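Putting the preparation steps together, something like this should work (the extracted directory name and the default `conf.gro`/`topol.top` layout are assumptions about the archive contents; adjust to what you actually find after extracting):

```shell
# Extract the benchmark archive (directory name assumed; check with `tar tjf` first)
tar xjf grappa-46M.tar.bz2
cd grappa-46M

# Build the binary run input; grompp picks up conf.gro and topol.top by default.
# Swap rf.mdp for pme.mdp if you want the larger-memory PME variant.
gmx grompp -f rf.mdp -o grappa-46M-rf.tpr
```

Note that `grompp` itself on a 46-million-atom system already needs a substantial amount of RAM, so keep an eye on memory during this step too.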
Be prepared for very low performance. A 46-million atom system is massive, and running it on 16 CPU cores will result in an extremely low simulation rate (ns/day). For context, we used it for testing GROMACS on hundreds of GPUs.
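Since you care about checkpoint/restore rather than throughput, it may help to write checkpoints frequently and cap the run length. A minimal sketch, assuming a thread-MPI GROMACS build on your 16-core machine (flag values are illustrative):

```shell
# Run on 16 OpenMP threads in a single rank; write a checkpoint every
# minute (-cpt takes minutes, default is 15) and cap the number of MD
# steps so the test run finishes in reasonable time (-nsteps overrides
# the value in the .mdp file).
gmx mdrun -s grappa-46M-rf.tpr -ntmpi 1 -ntomp 16 -cpt 1 -nsteps 5000
```

Restarting from the resulting `state.cpt` with `gmx mdrun -s grappa-46M-rf.tpr -cpi state.cpt` would also let you cross-check GROMACS's own checkpointing against your container-level approach.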
More elaborate scenarios could be concocted, but since you are interested in checkpointing the simulation and not in getting scientific results from it, grappa-46M would hopefully be fine.