gmx msd - error estimation of the diffusivity

GROMACS version: 2020.3
GROMACS modification: No

According to the manual (http://manual.gromacs.org/documentation/current/onlinehelp/gmx-msd.html#gmx-msd), the diffusion constant is computed through a linear regression of MSD versus lag time. To verify that I had understood this correctly, I did the linear regression with both gnuplot and statsmodels (a Python package) and got the same value as printed by gmx msd.
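For reference, here is a minimal sketch of the fit I did (the file name and fit window are assumptions on my part; as far as I understand, gmx msd by default fits between 10% and 90% of the total lag time via -beginfit/-endfit, and reports D in units of 1e-5 cm^2/s):

```python
import numpy as np

# Load lag time (ps) and MSD (nm^2); "msd.xvg" is a placeholder for the
# gmx msd output file. GROMACS .xvg headers start with '#' or '@'.
t, msd = np.loadtxt("msd.xvg", comments=("#", "@"), unpack=True)

# Restrict to the fit interval; assuming the default of fitting
# between 10% and 90% of the total lag time.
mask = (t >= 0.1 * t[-1]) & (t <= 0.9 * t[-1])

# Linear least squares: MSD ~ slope * t + intercept
slope, intercept = np.polyfit(t[mask], msd[mask], 1)

# Einstein relation in 3D: MSD = 6 D t, so D = slope / 6.
# slope is in nm^2/ps; 1 nm^2/ps = 1e3 in units of 1e-5 cm^2/s.
D = slope / 6.0 * 1e3
print(f"D = {D:.4f} 1e-5 cm^2/s")
```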

My question concerns the error estimation. In the description of the gmx msd command it says:

An error estimate is given, which is the difference between the diffusion coefficients obtained from fits over the two halves of the fit interval.

Why is this a better error estimate than using the standard error of the regression coefficient?
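For concreteness, by "standard error of the regression coefficient" I mean the per-coefficient uncertainty that an ordinary least-squares fit reports, e.g. in statsmodels (same placeholder file and fit window as in my sketch above):

```python
import numpy as np
import statsmodels.api as sm

t, msd = np.loadtxt("msd.xvg", comments=("#", "@"), unpack=True)
mask = (t >= 0.1 * t[-1]) & (t <= 0.9 * t[-1])

X = sm.add_constant(t[mask])      # design matrix: intercept + lag time
res = sm.OLS(msd[mask], X).fit()

slope, slope_se = res.params[1], res.bse[1]
# Propagate the slope and its standard error to D via MSD = 6 D t,
# converting nm^2/ps to the 1e-5 cm^2/s units gmx msd prints.
print(f"D = {slope / 6 * 1e3:.4f} +/- {slope_se / 6 * 1e3:.4f} 1e-5 cm^2/s")
```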

Thanks for your help.

I don’t know what you mean by the standard error of the regression coefficient. If there is only one fit, there is nothing to compute a standard error for.
We use the two halves because, in unconverged cases, the MSD might not be linear. The slope can then change as a function of lag time, and it is good to take that into account.
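In other words, the reported error is the difference between the D values fitted over the two halves of the fit interval. A rough sketch of that scheme (an illustration only, not the actual GROMACS code; file name and fit window are the same placeholders as in the posts above):

```python
import numpy as np

t, msd = np.loadtxt("msd.xvg", comments=("#", "@"), unpack=True)
lo, hi = 0.1 * t[-1], 0.9 * t[-1]   # fit interval
mid = 0.5 * (lo + hi)

def d_from_fit(tmin, tmax):
    """D from the MSD slope over [tmin, tmax], in 1e-5 cm^2/s."""
    m = (t >= tmin) & (t <= tmax)
    slope = np.polyfit(t[m], msd[m], 1)[0]   # nm^2/ps
    return slope / 6.0 * 1e3

d_full = d_from_fit(lo, hi)
# If the MSD is truly linear, the two half-interval fits agree and the
# error is small; if the slope drifts with lag time, they diverge.
d_err = abs(d_from_fit(lo, mid) - d_from_fit(mid, hi))
print(f"D = {d_full:.4f} +/- {d_err:.4f} 1e-5 cm^2/s")
```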