Segmentation fault while running mdrun in GROMACS patched with PLUMED

GROMACS version:2021.6-plumed-2.7.5
GROMACS modification: Yes
I am running:

mpirun -np 35 gmx_mpi mdrun -plumed plumed.dat -deffnm topolmd -multidir ./steprest/MD_[0123456] -replex 1000 -hrex -dlb no

However, the run crashed with the segmentation faults below (output from the different ranks is interleaved):

[localhost:2845343] *** Process received signal ***
[localhost:2845343] Signal: Segmentation fault (11)
[localhost:2845343] Signal code: Address not mapped (1)
[localhost:2845343] Failing at address: (nil)
[localhost:2845343] [ 0] /lib64/libpthread.so.0(+0x12cf0)[0x7f84b5a04cf0]
[localhost:2845343] *** End of error message ***
[... the same message, with an identical one-frame libpthread backtrace, repeated for every other rank ...]
mpirun noticed that process rank 5 with PID 0 on node localhost exited on signal 11 (Segmentation fault).

Moreover, the output files (e.g. the .xtc and .edr files) were created but empty.
Here is one of the MD .log files:
:-) GROMACS - gmx mdrun, 2021.6-plumed-2.7.5 (-:

                        GROMACS is written by:
 Andrey Alekseenko              Emile Apol              Rossen Apostolov     
     Paul Bauer           Herman J.C. Berendsen           Par Bjelkmar       
   Christian Blau           Viacheslav Bolnykh             Kevin Boyd        
 Aldert van Buuren           Rudi van Drunen             Anton Feenstra      
Gilles Gouaillardet             Alan Gray               Gerrit Groenhof      
   Anca Hamuraru            Vincent Hindriksen          M. Eric Irrgang      
  Aleksei Iupinov           Christoph Junghans             Joe Jordan        
Dimitrios Karkoulis            Peter Kasson                Jiri Kraus        
  Carsten Kutzner              Per Larsson              Justin A. Lemkul     
   Viveca Lindahl            Magnus Lundborg             Erik Marklund       
    Pascal Merz             Pieter Meulenhoff            Teemu Murtola       
    Szilard Pall               Sander Pronk              Roland Schulz       
   Michael Shirts            Alexey Shvetsov             Alfons Sijbers      
   Peter Tieleman              Jon Vincent              Teemu Virolainen     
 Christian Wennberg            Maarten Wolf              Artem Zhmurov       
                       and the project leaders:
    Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2022, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS: gmx mdrun, version 2021.6-plumed-2.7.5
Executable: /home/Liws2021/gmx202106_mpi_plumed/bin/gmx_mpi
Data prefix: /home/Liws2021/gmx202106_mpi_plumed
Working dir: /data/Liws2021/AT8/rest/steprest/MD_5
Process ID: 2848572
Command line:
gmx_mpi mdrun -plumed plumed.dat -deffnm topolmd -multidir ./steprest/MD_0 ./steprest/MD_1 ./steprest/MD_2 ./steprest/MD_3 ./steprest/MD_4 ./steprest/MD_5 ./steprest/MD_6 -replex 1000 -hrex -dlb no

GROMACS version: 2021.6-plumed-2.7.5
Precision: mixed
Memory model: 64 bit
MPI library: MPI
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support: disabled
SIMD instructions: AVX_512
FFT library: fftw-3.3.10-sse2-avx-avx2-avx2_128
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /usr/bin/cc GNU 8.5.0
C compiler flags: -mavx512f -mfma -pthread -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler: /usr/bin/c++ GNU 8.5.0
C++ compiler flags: -mavx512f -mfma -pthread -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG

Running on 1 node with total 40 cores, 80 logical cores
Hardware detected on host localhost.localdomain (the node of MPI rank 10):
CPU info:
Vendor: Intel
Brand: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz
Family: 6 Model: 106 Stepping: 6
Features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl avx512secondFMA clfsh cmov cx8 cx16 f16c fma htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp sha sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
Number of AVX-512 FMA units: 2
Hardware topology: Basic
Sockets, cores, and logical processors:
Socket 0: [ 0 40] [ 1 41] [ 2 42] [ 3 43] [ 4 44] [ 5 45] [ 6 46] [ 7 47] [ 8 48] [ 9 49] [ 10 50] [ 11 51] [ 12 52] [ 13 53] [ 14 54] [ 15 55] [ 16 56] [ 17 57] [ 18 58] [ 19 59]
Socket 1: [ 20 60] [ 21 61] [ 22 62] [ 23 63] [ 24 64] [ 25 65] [ 26 66] [ 27 67] [ 28 68] [ 29 69] [ 30 70] [ 31 71] [ 32 72] [ 33 73] [ 34 74] [ 35 75] [ 36 76] [ 37 77] [ 38 78] [ 39 79]

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
M. J. Abraham, T. Murtola, R. Schulz, S. Pàll, J. C. Smith, B. Hess, E.
Lindahl
GROMACS: High performance molecular simulations through multi-level
parallelism from laptops to supercomputers
SoftwareX 1 (2015) pp. 19-25
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Pàll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl
Tackling Exascale Software Challenges in Molecular Dynamics Simulations with
GROMACS
In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale 8759 (2015) pp. 3-27
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Pronk, S. Pàll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R.
Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl
GROMACS 4.5: a high-throughput and highly parallel open source molecular
simulation toolkit
Bioinformatics 29 (2013) pp. 845-54
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- — Thank You — -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- — Thank You — -------- --------

Input Parameters:
integrator = md
tinit = 0
dt = 0.002
nsteps = 50000
init-step = 0
simulation-part = 1
mts = false
comm-mode = Linear
nstcomm = 100
bd-fric = 0
ld-seed = -2015371314
emtol = 10
emstep = 0.01
niter = 20
fcstep = 0
nstcgsteep = 1000
nbfgscorr = 10
rtpi = 0.05
nstxout = 0
nstvout = 0
nstfout = 0
nstlog = 1000
nstcalcenergy = 100
nstenergy = 1000
nstxout-compressed = 1000
compressed-x-precision = 1000
cutoff-scheme = Verlet
nstlist = 10
pbc = xyz
periodic-molecules = false
verlet-buffer-tolerance = 0.005
rlist = 1
coulombtype = PME
coulomb-modifier = Potential-shift
rcoulomb-switch = 0
rcoulomb = 1
epsilon-r = 1
epsilon-rf = inf
vdw-type = Cut-off
vdw-modifier = Potential-shift
rvdw-switch = 0
rvdw = 1
DispCorr = EnerPres
table-extension = 1
fourierspacing = 0.16
fourier-nx = 40
fourier-ny = 40
fourier-nz = 36
pme-order = 4
ewald-rtol = 1e-05
ewald-rtol-lj = 0.001
lj-pme-comb-rule = Geometric
ewald-geometry = 0
epsilon-surface = 0
tcoupl = V-rescale
nsttcouple = 10
nh-chain-length = 0
print-nose-hoover-chain-variables = false
pcoupl = No
pcoupltype = Isotropic
nstpcouple = -1
tau-p = 1
compressibility (3x3):
compressibility[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compressibility[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compressibility[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p (3x3):
ref-p[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
refcoord-scaling = No
posres-com (3):
posres-com[0]= 0.00000e+00
posres-com[1]= 0.00000e+00
posres-com[2]= 0.00000e+00
posres-comB (3):
posres-comB[0]= 0.00000e+00
posres-comB[1]= 0.00000e+00
posres-comB[2]= 0.00000e+00
QMMM = false
qm-opts:
ngQM = 0
constraint-algorithm = Lincs
continuation = true
Shake-SOR = false
shake-tol = 0.0001
lincs-order = 4
lincs-iter = 1
lincs-warnangle = 30
nwall = 0
wall-type = 9-3
wall-r-linpot = -1
wall-atomtype[0] = -1
wall-atomtype[1] = -1
wall-density[0] = 0
wall-density[1] = 0
wall-ewald-zfac = 3
pull = false
awh = false
rotation = false
interactiveMD = false
disre = No
disre-weighting = Conservative
disre-mixed = false
dr-fc = 1000
dr-tau = 0
nstdisreout = 100
orire-fc = 0
orire-tau = 0
nstorireout = 100
free-energy = no
cos-acceleration = 0
deform (3x3):
deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
simulated-tempering = false
swapcoords = no
userint1 = 0
userint2 = 0
userint3 = 0
userint4 = 0
userreal1 = 0
userreal2 = 0
userreal3 = 0
userreal4 = 0
applied-forces:
electric-field:
x:
E0 = 0
omega = 0
t0 = 0
sigma = 0
y:
E0 = 0
omega = 0
t0 = 0
sigma = 0
z:
E0 = 0
omega = 0
t0 = 0
sigma = 0
density-guided-simulation:
active = false
group = protein
similarity-measure = inner-product
atom-spreading-weight = unity
force-constant = 1e+09
gaussian-transform-spreading-width = 0.2
gaussian-transform-spreading-range-in-multiples-of-width = 4
reference-density-filename = reference.mrc
nst = 1
normalize-densities = true
adaptive-force-scaling = false
adaptive-force-scaling-time-constant = 4
shift-vector =
transformation-matrix =
grpopts:
nrdf: 4920.62 34287.4
ref-t: 300 300
tau-t: 0.1 0.1
annealing: No No
annealing-npoints: 0 0
acc: 0 0 0
nfreeze: N N N
energygrp-flags[ 0]: 0

Changing nstlist from 10 to 50, rlist from 1 to 1.102

Initializing Domain Decomposition on 2 ranks
Dynamic load balancing: off
Using update groups, nr 6720, average size 2.8 atoms, max. radius 0.104 nm
Minimum cell size due to atom displacement: 0.398 nm
Initial maximum distances in bonded interactions:
two-body bonded interactions: 0.443 nm, LJ-14, atoms 1156 1405
multi-body bonded interactions: 0.443 nm, Proper Dih., atoms 1156 1405
Minimum cell size due to bonded interactions: 0.487 nm
Using 0 separate PME ranks, as there are too few total
ranks for efficient splitting
Optimizing the DD grid for 2 cells with a minimum initial size of 0.487 nm
The maximum allowed number of cells is: X 11 Y 12 Z 11
Domain decomposition grid 1 x 2 x 1, separate PME ranks 0
PME domain decomposition: 1 x 2 x 1
Domain decomposition rank 0, coordinates 0 0 0

The initial number of communication pulses is: Y 1
The initial domain decomposition cell size is: Y 3.12 nm

The maximum allowed distance for atom groups involved in interactions is:
non-bonded interactions 1.310 nm
two-body bonded interactions (-rdd) 1.310 nm
multi-body bonded interactions (-rdd) 1.310 nm

This is simulation 5 out of 7 running as a composite GROMACS
multi-simulation job. Setup for this simulation:

Using 2 MPI processes

Non-default thread affinity set, disabling internal thread affinity

Using 5 OpenMP threads per MPI process

System total charge: 0.000
Will do PME sum in reciprocal space for electrostatic interactions.

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- — Thank You — -------- --------

Using a Gaussian width (1/beta) of 0.320163 nm for Ewald
Potential shift: LJ r^-12: -1.000e+00 r^-6: -1.000e+00, Ewald -1.000e-05
Initialized non-bonded Coulomb Ewald tables, spacing: 9.33e-04 size: 1073

Generated table with 1051 data points for 1-4 COUL.
Tabscale = 500 points/nm
Generated table with 1051 data points for 1-4 LJ6.
Tabscale = 500 points/nm
Generated table with 1051 data points for 1-4 LJ12.
Tabscale = 500 points/nm
Long Range LJ corr.: 3.2623e-04

Using SIMD 4x8 nonbonded short-range kernels

Using a dual 4x8 pair-list setup updated with dynamic pruning:
outer list: updated every 50 steps, buffer 0.102 nm, rlist 1.102 nm
inner list: updated every 13 steps, buffer 0.003 nm, rlist 1.003 nm
At tolerance 0.005 kJ/mol/ps per atom, equivalent classical 1x1 list would be:
outer list: updated every 50 steps, buffer 0.229 nm, rlist 1.229 nm
inner list: updated every 13 steps, buffer 0.051 nm, rlist 1.051 nm

Using Lorentz-Berthelot Lennard-Jones combination rule

Initializing LINear Constraint Solver

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
LINCS: A Linear Constraint Solver for molecular simulations
J. Comp. Chem. 18 (1997) pp. 1463-1472
-------- -------- — Thank You — -------- --------

The number of constraints is 959

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Miyamoto and P. A. Kollman
SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithms for Rigid
Water Models
J. Comp. Chem. 13 (1992) pp. 952-962
-------- -------- — Thank You — -------- --------

Linking all bonded interactions to atoms

Intra-simulation communication will occur every 10 steps.

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
G. Bussi, D. Donadio and M. Parrinello
Canonical sampling through velocity rescaling
J. Chem. Phys. 126 (2007) pp. 014101
-------- -------- — Thank You — -------- --------

There are: 19101 Atoms
Atom distribution over 2 domains: av 9550 stddev 98 min 9549 max 9552

Initializing Replica Exchange
Repl There are 7 replicas:
Multi-checking the number of atoms … OK
Multi-checking the integrator … OK
Multi-checking init_step+nsteps … OK
Multi-checking first exchange step: init_step/-replex … OK
Multi-checking the temperature coupling … OK
Multi-checking the number of temperature coupling groups … OK
Multi-checking the pressure coupling … OK
Multi-checking free energy … OK
Multi-checking number of lambda states … OK

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
Y. Sugita, Y. Okamoto
Replica-exchange molecular dynamics method for protein folding
Chem. Phys. Lett. 314 (1999) pp. 141-151
-------- -------- — Thank You — -------- --------

I suspect the fault is related to PLUMED or OpenMPI, but how can I diagnose and fix it?
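To narrow it down, this is the isolation sequence I was planning to try (a sketch only; file names are the same as in my command above, rank counts are hypothetical, and each command would be run from inside the steprest directory):

```shell
# 1) One replica, no PLUMED, no replica exchange -- checks the base GROMACS build.
#    Run from inside one replica directory, e.g. steprest/MD_0:
mpirun -np 2 gmx_mpi mdrun -deffnm topolmd -nsteps 1000

# 2) Same single replica with PLUMED enabled -- checks the PLUMED patch itself:
mpirun -np 2 gmx_mpi mdrun -plumed plumed.dat -deffnm topolmd -nsteps 1000

# 3) Full multidir replica exchange but without -hrex -- checks whether the
#    Hamiltonian replica exchange code path is the trigger (2 ranks per replica):
mpirun -np 14 gmx_mpi mdrun -plumed plumed.dat -deffnm topolmd \
    -multidir ./steprest/MD_[0123456] -replex 1000 -dlb no -nsteps 2000
```

Would this be a reasonable way to find the failing component, or is there a known incompatibility between GROMACS 2021.6 and PLUMED 2.7.5 with -hrex that I should check first?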