Errors when creating a massive system (> 300 million atoms)

GROMACS version:2018.8
GROMACS modification: No

When I create a massive system (> 300 million atoms), the following errors occur:
[cn3051:2371405:0:2371405] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x400200e1d100)
==== backtrace (tid:2371405) ====
0 /usr/local/ucx/lib/libucs.so.0(ucs_handle_error+0x250) [0x40001e2dc3d0]
1 /usr/local/ucx/lib/libucs.so.0(+0x26530) [0x40001e2dc530]
2 /usr/local/ucx/lib/libucs.so.0(+0x268c0) [0x40001e2dc8c0]
3 linux-vdso.so.1(__kernel_rt_sigreturn+0) [0x40001c1e65b8]
4 /thfs1/home/kanbw/2018.8-sp-opt-gdb/lib/libgromacs_mpi.so.3(_Z17nbnxn_put_on_gridP12nbnxn_searchiPA3_fiPfS3_iifPKiS2_iPiiP16nbnxn_atomdata_t+0xaa0) [0x40001c9aa198]
5 /thfs1/home/kanbw/2018.8-sp-opt-gdb/lib/libgromacs_mpi.so.3(_Z19dd_partition_systemP8_IO_FILElP9t_commreciiP7t_statePK10gmx_mtop_tPK10t_inputrecS4_PSt6vectorIN3gmx11BasicVectorIfEENSC_9AllocatorISE_NSC_23AlignedAllocationPolicyEEEEPNSC_7MDAtomsEP14gmx_localtop_tP10t_forcerecP11gmx_vsite_tP10gmx_constrP6t_nrnbP13gmx_wallcyclei+0xbb0) [0x40001c373d18]
6 /thfs1/home/kanbw/2018.8-sp-opt-gdb/bin/gmx_mpi() [0x4152b8]
7 /thfs1/home/kanbw/2018.8-sp-opt-gdb/bin/gmx_mpi() [0x428188]
8 /thfs1/home/kanbw/2018.8-sp-opt-gdb/bin/gmx_mpi() [0x417690]
9 /thfs1/home/kanbw/2018.8-sp-opt-gdb/bin/gmx_mpi() [0x417d7c]
10 /thfs1/home/kanbw/2018.8-sp-opt-gdb/lib/libgromacs_mpi.so.3(_ZN3gmx24CommandLineModuleManager3runEiPPc+0x220) [0x40001c344578]
11 /thfs1/home/kanbw/2018.8-sp-opt-gdb/bin/gmx_mpi() [0x40e1f8]
12 /lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe8) [0x40001de2c090]
13 /thfs1/home/kanbw/2018.8-sp-opt-gdb/bin/gmx_mpi() [0x40e2dc]

DD cell 9 0 0: Neighboring cells do not have atoms: 169651503 173464687 173464688 173465002 173465004 169652142 173465230 173465231 173465548 173465550 169652778 173465773 173465774 173466091 173466093 169653414 173466316 173466317 173466634 173466636 169654050 173466859 173466860 173467177 173467179 169654686 173467402 173467403 173467720 173467722 169655322 173467945 173467946 173468263 173468265 169655958 173468488 173468489 173468806 173468808 169656594 173469031 173469032 173469349 173469351 169657230 173469574 173469575 173469892 173469894 169657866 173470117 173470118 173470435 173470437 169658502 173470660 173470661 173470978 173470980 169659138 173471203 173471204 173471521 173471523 169659774 173471746 173471747 173472064 173472066 169660410 173472289 173472290 173472607 173472609 169661046 173472832 173472833 173473150 173473152 169661682 173473375 173473376 173473693 173473695 169662318 173473918 173473919 173474236 173474238 169662954 173474461 173474462 …etc.
Program: gmx mdrun, version 2018.8
Source file: src/gromacs/domdec/domdec_specatomcomm.cpp (line 602)
MPI rank: 36 (out of 64)

Fatal error:
DD cell 9 0 0 could only obtain 36952 of the 81007 atoms that are connected
via constraints from the neighboring cells. This probably means your
constraint lengths are too long compared to the domain decomposition cell
size. Decrease the number of domain decomposition grid cells or lincs-order.

For more information and tips for troubleshooting, please check the GROMACS
website at Errors - Gromacs

Abort(1) on node 36 (rank 36 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 36

How can I resolve these errors, and how many MPI ranks can I use to run such a massive system (> 300 million atoms)?

It would seem to me that, with your number of atoms and number of MPI ranks, you should not be hitting the maximum signed 32-bit integer limit for indices, which is 2147483647 (2^31 − 1).
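As a rough sanity check (assuming the relevant indices are global atom indices stored as signed 32-bit integers, which is what the 2^31 limit refers to):

300 000 000 < 2 147 483 647 = 2^31 − 1

so the atom count alone is well inside that range.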

I don’t understand your post though. You report a fatal error and a segmentation fault. Do they occur in the same run? I would expect gmx_mpi to exit normally after the fatal error.

Usually this error occurs when your system is not properly equilibrated and atoms get very high velocities.
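If the system itself turns out to be sound, a hedged starting point is exactly what the error message suggests: a coarser domain decomposition and/or a smaller lincs-order. The lines below are only a sketch; the grid dimensions, rank count, and the name bigsystem are placeholders, not values tuned for your system:

# Force a coarser DD grid explicitly (here 4x4x4 = 64 ranks),
# or simply run on fewer MPI ranks so each cell becomes larger:
mpirun -np 64 gmx_mpi mdrun -dd 4 4 4 -deffnm bigsystem

; In the .mdp file, reduce the P-LINCS communication range by lowering
; lincs-order and compensating with lincs-iter; keeping
; (1 + lincs-iter) * lincs-order roughly constant preserves accuracy:
lincs-order = 2
lincs-iter  = 3

Whether that is enough depends on why the constraint communication distance exceeds the cell size in the first place, so checking the equilibration and the starting velocities is still the first thing to do.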