Using double precision for summation in single precision configuration

When working with summations in the single precision configuration, rounding error can accumulate fairly quickly and snowball, e.g. when computing PE/KE. The same applies when calling MPI_Reduce and other such summing functions. Is it considered good practice to use a double in such cases and then cast it to real at the interface, or is the single precision configuration meant to use the float type exclusively everywhere?
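
Something like this, as a minimal sketch of the pattern I mean (the function and variable names are just for illustration):

```cpp
#include <vector>

// Accumulate single-precision contributions in a double-precision
// variable and only narrow back to float at the interface.
float sumPotentialEnergy(const std::vector<float>& perAtomEnergies)
{
    double sum = 0.0; // double accumulator limits rounding-error growth
    for (float e : perAtomEnergies)
    {
        sum += e;
    }
    return static_cast<float>(sum); // narrow only at the interface
}
```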

When summing quantities that scale linearly with the number of atoms in the system, such as PE and KE, rounding errors can be significant compared with the resulting sum. In GROMACS all MPI summations of such quantities are done in double precision. However, the summation of PE and KE is mostly done in single precision in GROMACS. Here we rely on summing these quantities in steps: for PE, first over the clusters in the pairlist of one atom cluster, and then for both PE and KE over threads and then over MPI ranks. In nearly all practical cases this provides sufficient accuracy. But if you were to run a million-atom system on a single thread and a single MPI rank, the errors might be significant.
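
As a minimal sketch (not the actual GROMACS code) of that pattern: thread-local partial sums are combined into a double-precision value, and the MPI reduction itself runs in double precision.

```cpp
#include <vector>
#include <mpi.h>

// Illustrative only: per-thread partial sums arrive in single precision,
// are combined into a double, and the MPI reduction is done in double.
float reduceEnergyOverRanks(const std::vector<float>& perThreadEnergies,
                            MPI_Comm comm)
{
    // Each thread-local partial sum is small relative to the total,
    // so the single-precision error in each contribution stays bounded.
    double localSum = 0.0;
    for (float e : perThreadEnergies)
    {
        localSum += e;
    }

    // MPI summation over ranks in double precision.
    double globalSum = 0.0;
    MPI_Allreduce(&localSum, &globalSum, 1, MPI_DOUBLE, MPI_SUM, comm);

    return static_cast<float>(globalSum);
}
```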

We might want to change the summation variables to double precision now that PE and KE are often only computed every 100th step.


Thanks for the straightforward answer.