Creating a GROMACS + PLUMED Container Using Intel Compilers

GROMACS version: 2018.8 (but I think my question isn’t version specific)
GROMACS modification: No

I managed to build a container for a researcher using a Dockerfile nearly identical to this one by Whitelab, but with GROMACS 2018.8 and PLUMED 2.5 substituted in.
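
For context, the general shape of that build is roughly the following. This is only a hedged sketch, not the actual Whitelab Dockerfile: the base image, download URLs, point versions, and flags are assumptions you would need to verify (in particular, which PLUMED 2.5.x release carries the patch for GROMACS 2018.8).

```dockerfile
# Hedged sketch of a GROMACS 2018.8 + PLUMED 2.5 container build (GNU toolchain).
# Base image, URLs, and point versions are illustrative assumptions, not the original Dockerfile.
FROM ubuntu:18.04

RUN apt-get update && apt-get install -y build-essential cmake wget && \
    rm -rf /var/lib/apt/lists/*

# Build and install a PLUMED 2.5.x release into /usr/local
RUN wget https://github.com/plumed/plumed2/releases/download/v2.5.4/plumed-2.5.4.tgz && \
    tar xzf plumed-2.5.4.tgz && cd plumed-2.5.4 && \
    ./configure --prefix=/usr/local && make -j4 && make install && ldconfig

# Download GROMACS 2018.8, apply the PLUMED patch, then build as usual
RUN wget https://ftp.gromacs.org/gromacs/gromacs-2018.8.tar.gz && \
    tar xzf gromacs-2018.8.tar.gz && cd gromacs-2018.8 && \
    plumed patch -p -e gromacs-2018.8 && \
    mkdir build && cd build && \
    cmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs && \
    make -j4 && make install
```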

I’ve been asked to see whether it’s possible to speed up this container when running on our cluster by using the Intel compilers and libraries (we have an Intel Parallel Studio license). This is outside my experience, so I’m wondering if there are any resources out there that might be helpful for tuning GROMACS in this way, or any advice members here can give.

I’m more of a Java/JS/Python person, so the finer details of how the Intel compilers and libraries work are all new to me, but I am willing to learn.

I take it no one has experience with this?

It is unlikely that you will see much benefit from using the Intel compilers over GNU. I can’t comment on whether the PLUMED computation gets faster (which would only be relevant when PLUMED takes a significant share of the runtime relative to the native GROMACS code, which can sometimes be the case).

To compile with the Intel toolchain, follow the install guide instructions.
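
For the Intel case specifically, the configure step from the install guide boils down to pointing CMake at the Intel compilers and MKL. Here is a hedged sketch, assuming the Intel environment (icc/icpc and MKL) is already loaded via the cluster's module system or Intel's environment script; the SIMD level and install prefix are just examples:

```bash
# Configure and build GROMACS with the classic Intel compilers and MKL.
# Assumes icc/icpc and MKLROOT are already set up in the environment.
cd gromacs-2018.8
mkdir -p build && cd build
cmake .. \
  -DCMAKE_C_COMPILER=icc \
  -DCMAKE_CXX_COMPILER=icpc \
  -DGMX_FFT_LIBRARY=mkl \
  -DGMX_SIMD=AVX2_256 \
  -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs
make -j"$(nproc)"
make install
```

Matching `-DGMX_SIMD` to the cluster's actual CPUs (or omitting the flag so CMake auto-detects it on the build host) usually matters more for performance than the compiler choice.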

Note: If you do want to use the Intel compilers, I can strongly recommend using the beta version of their new oneAPI tools. For standard x86 hardware you might not get a huge speedup, but the oneAPI version has the huge advantage that Intel distributes it freely, so you can likely just build on an existing public container without having to worry about licenses!
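
A minimal sketch of what that could look like, assuming Intel's public oneAPI HPC kit image on Docker Hub (the image name/tag, paths, and the availability of the classic icc/icpc inside it are assumptions to verify; newer oneAPI releases promote icx/icpx instead). The PLUMED build/patch steps are the same as in a GNU-based Dockerfile and are omitted here:

```dockerfile
# Hedged sketch: build GROMACS inside Intel's freely distributed oneAPI image.
# Image name/tag and compiler names are assumptions; check Docker Hub / Intel docs.
FROM intel/oneapi-hpckit:latest
SHELL ["/bin/bash", "-c"]

RUN apt-get update && apt-get install -y cmake wget && rm -rf /var/lib/apt/lists/*

# Load the oneAPI environment, then configure GROMACS against icc/icpc and MKL
RUN source /opt/intel/oneapi/setvars.sh && \
    wget https://ftp.gromacs.org/gromacs/gromacs-2018.8.tar.gz && \
    tar xzf gromacs-2018.8.tar.gz && cd gromacs-2018.8 && \
    mkdir build && cd build && \
    cmake .. -DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc \
             -DGMX_FFT_LIBRARY=mkl \
             -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs && \
    make -j4 && make install
```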


This is my first time hearing about oneAPI. It might be very useful for our work beyond just GROMACS. Thanks for the recommendation.