We’ve recently secured some funding for building a few servers/workstations. These will primarily be used for GROMACS simulations, mainly free energy calculations, so we’re more interested in throughput from multiple simulations of the same system than in obtaining a single long trajectory. Our typical system sizes range from 500K to 1M atoms.
We’re looking for some advice on hardware configurations. I have read the 2018 “More Bang for Your Buck” paper (which has been immensely useful!). I just have a couple of further questions.
- Currently we’re looking at getting an AMD Threadripper 2970WX. Would we instead benefit from getting multi-socket machines with EPYC processors? In terms of price to performance, are there any competitive Intel offerings?
- How many GPUs per machine should we get? From the bang for your buck paper it seems that 8 cores per GPU (with PME offloaded) is optimal. So with something like a 2970WX (24 cores), should we get 3 GPUs? Or would we benefit from getting more GPUs?
- Would it be beneficial in any way to get NVLink connectors?
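To make the question concrete, here is the kind of throughput layout we have in mind for a 24-core/3-GPU box, following the 8-cores-per-GPU rule of thumb. This is only a sketch: the run names (`run0`–`run2`) are placeholders, and the loop just prints the `gmx mdrun` commands we would launch concurrently (the flags are the standard GROMACS 2020 ones):

```shell
# Sketch: three independent simulations, one per GPU, 8 cores each.
# run0..run2 are hypothetical -deffnm prefixes; we print the commands
# here rather than launching them.
for i in 0 1 2; do
  echo gmx mdrun -deffnm run$i \
    -nb gpu -pme gpu \
    -ntmpi 1 -ntomp 8 \
    -pin on -pinstride 1 -pinoffset $((i * 8)) \
    -gpu_id $i
done
```

The idea is that each instance gets its own GPU via `-gpu_id` and a disjoint block of 8 pinned cores via `-pinoffset`/`-pinstride`, so the three simulations don’t compete for the same resources. Is that the right way to think about sizing cores vs. GPUs?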
Any further advice on the topic would be greatly appreciated.
GROMACS version: 2020.2
GROMACS modification: No