GROMACS version: 2019.3
GROMACS modification: No
I’m running some simulations using rcoulomb = 1.1 nm and coulombtype = PME, and when I start mdrun I get messages like those shown below. The exact output varies between runs, but the PME tuning always occurs. Can anyone explain what is happening here? In particular:
- does the increase in rcoulomb mean that the potential shape has changed?
- what does “xxx M-cycles” mean?
I’ve trawled through the GROMACS documentation and found some explanations, but they’re not very clear to me.
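For context, these are the relevant electrostatics lines from my .mdp file (other settings omitted; everything else is left at its default):

```
; Electrostatics settings mentioned above
coulombtype = PME
rcoulomb    = 1.1    ; nm
```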
Is the only way to avoid this tuning, and keep rcoulomb at the specified value, to disable PME tuning in mdrun? And would I get different behaviour with tuning disabled compared to letting the PME tuning take place automatically?
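For reference, this is roughly how I would invoke mdrun with tuning disabled (the file name prefix is just a placeholder matching the run name in the log below; gmx mdrun boolean options take the -no prefix form, so -notunepme turns the tuning off):

```shell
# Run with PME load balancing disabled, so the Coulomb cut-off
# stays at the rcoulomb value specified in the .mdp file.
gmx mdrun -deffnm POS -notunepme
```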
Thanks,
Robert
1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
PP:0,PME:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PME tasks will do all aspects on the GPU
starting mdrun 'POS'
5000 steps, 125.0 ps.
step 100: timed with pme grid 48 108 72, coulomb cutoff 1.100: 24.6 M-cycles
step 200: timed with pme grid 44 100 64, coulomb cutoff 1.163: 20.1 M-cycles
step 300: timed with pme grid 36 84 56, coulomb cutoff 1.381: 19.6 M-cycles
step 400: timed with pme grid 32 80 48, coulomb cutoff 1.554: 19.7 M-cycles
step 500: timed with pme grid 28 72 42, coulomb cutoff 1.776: 19.5 M-cycles
step 500: the maximum allowed grid scaling limits the PME load balancing to a coulomb cut-off of 1.812
step 600: timed with pme grid 28 64 42, coulomb cutoff 1.812: 19.6 M-cycles
step 700: timed with pme grid 28 72 42, coulomb cutoff 1.776: 25.4 M-cycles
step 800: timed with pme grid 32 72 44, coulomb cutoff 1.691: 19.5 M-cycles
step 900: timed with pme grid 32 72 48, coulomb cutoff 1.611: 24.7 M-cycles
step 1000: timed with pme grid 32 80 48, coulomb cutoff 1.554: 19.2 M-cycles
step 1100: timed with pme grid 36 80 52, coulomb cutoff 1.450: 23.4 M-cycles
step 1200: timed with pme grid 36 84 52, coulomb cutoff 1.431: 22.1 M-cycles
step 1300: timed with pme grid 36 84 56, coulomb cutoff 1.381: 20.1 M-cycles
step 1400: timed with pme grid 40 96 56, coulomb cutoff 1.329: 19.5 M-cycles
step 1500: timed with pme grid 40 96 60, coulomb cutoff 1.243: 19.9 M-cycles
step 1600: timed with pme grid 42 96 64, coulomb cutoff 1.208: 19.7 M-cycles
step 1700: timed with pme grid 42 100 64, coulomb cutoff 1.184: 19.7 M-cycles
step 1800: timed with pme grid 44 100 64, coulomb cutoff 1.163: 20.4 M-cycles
step 1900: timed with pme grid 44 104 72, coulomb cutoff 1.130: 22.0 M-cycles
step 2000: timed with pme grid 48 104 72, coulomb cutoff 1.115: 22.7 M-cycles
step 2100: timed with pme grid 28 64 42, coulomb cutoff 1.812: 21.8 M-cycles
step 2200: timed with pme grid 28 72 42, coulomb cutoff 1.776: 21.5 M-cycles
step 2300: timed with pme grid 32 72 44, coulomb cutoff 1.691: 22.0 M-cycles
step 2400: timed with pme grid 32 80 48, coulomb cutoff 1.554: 20.8 M-cycles
step 2500: timed with pme grid 36 84 56, coulomb cutoff 1.381: 20.2 M-cycles
step 2600: timed with pme grid 40 96 56, coulomb cutoff 1.329: 20.3 M-cycles
step 2700: timed with pme grid 40 96 60, coulomb cutoff 1.243: 21.8 M-cycles
step 2800: timed with pme grid 42 96 64, coulomb cutoff 1.208: 21.1 M-cycles
step 2900: timed with pme grid 42 100 64, coulomb cutoff 1.184: 21.7 M-cycles
step 3000: timed with pme grid 44 100 64, coulomb cutoff 1.163: 21.1 M-cycles
optimal pme grid 32 80 48, coulomb cutoff 1.554