Tutorial for the C++ version of gmxapi

GROMACS version: 2022
Two questions about gmxapi have confused me for more than a week.

Q1. I installed GROMACS successfully by following the installation guide. I am sure that I have Doxygen installed and that I set -DGMXAPI=ON. However, I cannot find any gmxapi documentation in the build directory, in either docs/api-user or docs/api-dev. So I did not find gmxapi-cppdocs, which is what I want. How can I get the gmxapi C++ documentation now?

Q2. As written in the installation guide, gmxapi is composed of C++ APIs and Python APIs.
However, I can only find tutorials for Python on the "Welcome to the GROMACS documentation!" — GROMACS 2022 documentation website, plus the helpful fs-peptide.py examples on GitHub.
Where can I find similar fs-peptide examples for C++ rather than Python?
If there is no example like fs-peptide, where can I find C++ tutorials like those for the gmxapi Python package?

As you can see, I am only concerned with how to use gmxapi from C++.
Thus, if Q2 can be answered, Q1 can be ignored.

Hi @haidee. Thank you for the questions.

The short answer is that most of the functionality demonstrated in that example is only available at the Python level.

There is a strong commitment to supporting the gmxapi Python interface as documented. (The syntax and software structure are pretty stable now.)

The C++ interfaces, however, have a number of issues that need to be resolved before we draw more attention to them. Among the open issues: we have not decided how to define what is in the API, or how best to document it. (footnote)

Moreover, the public C++ API is expected to change dramatically over the next year or two. It is not clear yet what the scope of supported use cases will be. It is likely that much of the higher-level functionality demonstrated in the fs-peptide tutorial will not be ported to C++, unless we identify a third-party workflow manager or execution manager with a C++ API that is attractive to integrate with.

Currently, the primary use cases for the gmxapi C++ interfaces are to allow Python bindings to be written, and to support pluggable MD extension code.

Can you comment a bit on the sorts of functionality that you would like to access through a C++ API?

For the moment, you can build the gmxapi C++ documentation locally with the gmxapi-cppdocs build target, but this documentation is not integrated with the rest of the documentation and is not part of the documentation GROMACS publishes online. You may find it more useful to look at the source code of the gmxapi Python package or the sample MD plugin code for examples of what is possible with the current C++ API. The C++ unit tests should also serve as documentation-by-example for many of the most basic aspects.
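For reference, invoking that build target from a configured build tree might look like the following (the gmxapi-cppdocs target name is as mentioned above; the source/build directory names and output location are illustrative and may vary with your setup):

```
# Configure with the API enabled (Doxygen must be installed and discoverable).
cmake -S gromacs-2022 -B build -DGMXAPI=ON

# Build only the gmxapi C++ documentation target.
cmake --build build --target gmxapi-cppdocs

# The generated HTML lands in the build tree's docs output
# (exact path depends on the GROMACS version and configuration).
```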

Please also consider joining the developer mailing list and biweekly video conference.

Footnote: See, for instance

Thank you for your patient and clear reply.
Since the Python API is recommended, I will use it in my project.

Actually, I want to run em, nvt, npt, and md simulations inside a deep learning neural network. For example, md would act like an operator or layer in the network. I will use a deep learning framework (TensorFlow, PyTorch, etc.) before and after the md step, so I think gmxapi is suitable for adding md to a neural network built in such a framework.

The reason I asked about the C++ API is that I think C++ offers better performance than Python. Additionally, C++ is easier to integrate into my project right now.

By the way, a library that contains only the algorithm part would work even better for me. In my scenario, the source code in gmxapi for distribution and device management may need to be removed in the future.

It sounds like your use case is exactly of the sort that gmxapi is intended to address.

Most such frameworks have performant and stable Python interfaces. Are there any specific C++ APIs you need to interact with, that I could assess for compatibility considerations?

Where possible, we just use Python as “glue” for C/C++ code. I think you will find that the Python interface does not affect simulation performance. Because API-based I/O allows for optimizations (data transformation/exchange, short-circuiting of unnecessary I/O), optimizations specific to API use cases will probably be available in stable form through the Python interface before the C++ API can be stabilized.
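As a purely illustrative sketch of the “simulation stage as an operator” idea you describe (this is plain Python with invented names, not gmxapi or TensorFlow code), stages can be wrapped as chainable callables, each consuming the previous stage's output, the way em → nvt → md stages chain in a workflow:

```python
# Invented, framework-free sketch: each "operator" is a named callable
# that maps one state dict to the next, so stages compose like layers.

def make_stage(name, transform):
    """Wrap a function as a named, chainable operator."""
    def stage(state):
        return {"stage": name, "data": transform(state["data"])}
    return stage

# Stand-ins for real simulation stages.
em = make_stage("em", lambda x: x - 1)    # pretend energy minimization
nvt = make_stage("nvt", lambda x: x * 2)  # pretend NVT equilibration
md = make_stage("md", lambda x: x + 10)   # pretend production MD

state = {"stage": "input", "data": 5}
for op in (em, nvt, md):
    state = op(state)

print(state)  # {'stage': 'md', 'data': 18}
```

In a real workflow, each stage's output would be a simulation-results handle passed as input to the next stage rather than a toy number.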

Two performance issues we hope to tackle this year are simulation input/output API handles and array data interfaces. Currently, the C++ library does not provide handling of input/output other than through the filesystem.

Buffer/reference access to array data and SimulationInput / SimulationOutput handles should be stabilized in Python for the next major release, and will continue to be optimized in future releases. (Stable C++ interfaces may take longer.) The exception is that some data from live simulations should become more accessible through the MD C++ extension interface as the library-internal MDModules framework evolves, so in some cases you will be able to get optimal access to, say, particle positions, by attaching a small C++ module. (Historically, we use Python to bind such code at run time as the simulation launches, then Python does not participate again until the simulation is over or it is explicitly called by the extension code.)

By the way, a library that contains only the algorithm part would work even better for me.

Unfortunately, the algorithms code is historically very intertwined. Efforts to separate it over the last few years have been only partly successful. For the foreseeable future, you will have to direct the library with a comprehensive description of the work to perform through some version of MDP / TPR / SimulationInput. It is unlikely that the dispatching code will be exposed more directly in the immediate future.
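For concreteness, the “comprehensive description of the work” is what an MDP file already provides. A minimal NPT-flavored fragment (option names follow standard GROMACS MDP conventions; the specific values here are only illustrative) looks like:

```
; Illustrative MDP fragment describing an NPT run
integrator       = md                 ; leap-frog MD
dt               = 0.002              ; 2 fs time step
nsteps           = 50000              ; 100 ps total
tcoupl           = V-rescale          ; temperature coupling
tc-grps          = System
tau-t            = 0.1
ref-t            = 300                ; K
pcoupl           = Parrinello-Rahman  ; pressure coupling -> NPT
tau-p            = 2.0
ref-p            = 1.0                ; bar
compressibility  = 4.5e-5
```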

In my scenario, the source code in gmxapi for distribution and device management may need to be removed in the future.

I’m hopeful that we will be able to compartmentalize this better soon. Performance on the primary code paths is the highest concern for GROMACS development, though, so we have to be careful as we disentangle the dispatching of computational work from the detection and assignment of computing resources. See, for instance, these near-term resource management issues.

Depending on the molecular systems that you want to study, you may find the C++-only NBLib library useful. It is being architected from the ground up rather than from the top down, so it does not include all libgromacs functionality, and it reimplements other aspects as necessary to support its interface design or performance goals. See Guide to Writing MD Programs — GROMACS 2022.1 documentation.

Thank you very much for your patience. Your reply is very helpful to me.
I would like to consult you on two points.

  1. Why C++?
    Although most deep learning frameworks use Python to express the structure of neural networks, they use C++ to implement operators.
    With the application of deep learning to molecular dynamics simulations, I want to mix GROMACS with neural networks. In other words, GROMACS steps would be interspersed with neural network layers.
    So I want to treat em, nvt, npt, and md as operators, not as a neural network.
    Based on these two points, the C++ API is more suitable for me to create custom operators.

  2. NBLib
    I investigated NBLib before. However, I found from its website that NBLib can only run md (molecular dynamics) simulations. It cannot run nvt and npt. So if I choose to use NBLib, is there a way to add nvt, npt, and em?

the C++ API is more suitable for me to create custom operators

This use case was considered for gmxapi, but it has not yet been a top priority. Thank you for reminding us about the TensorFlow op interfaces. There have been gmxapi prototypes that were nearly API-compatible with TensorFlow. I think it would be good to keep this in mind as we update the gmxapi C++ patterns.

The underlying GROMACS library does not compartmentalize integrators for different ensembles as separate objects or code paths, but we should keep such an interface style in mind as well. Currently, all of the details related to different methods are bundled together, and internal code dispatching is a bit tangled. At the very least, though, we could probably add some utility functions that inspect the SimulationInput to help discover the type of simulation intended, anticipate the internal dispatching, or determine the appropriateness of the simulation input for a particular simulation type. This sort of development would probably fit well with some planned work on modernizing grompp in terms of API-accessible tools.
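As a toy sketch of such an inspection utility (plain Python; `classify_simulation` and the dict-based input are invented stand-ins for a parsed SimulationInput, but the option semantics follow standard MDP conventions: steep/cg/l-bfgs integrators mean minimization, and tcoupl/pcoupl imply NVT/NPT):

```python
def classify_simulation(mdp):
    """Guess the intended simulation type from MDP-style options.

    `mdp` is a plain dict standing in for a parsed simulation input;
    this helper itself is hypothetical, not a gmxapi function.
    """
    integrator = mdp.get("integrator", "md")
    if integrator in ("steep", "cg", "l-bfgs"):
        return "em"                       # energy minimization
    if mdp.get("pcoupl", "no") != "no":
        return "npt"                      # pressure coupling implies NPT
    if mdp.get("tcoupl", "no") != "no":
        return "nvt"                      # temperature coupling only
    return "nve"                          # no coupling at all

print(classify_simulation({"integrator": "steep"}))   # em
print(classify_simulation({"tcoupl": "V-rescale"}))   # nvt
print(classify_simulation({"tcoupl": "V-rescale",
                           "pcoupl": "C-rescale"}))   # npt
```

A dispatching layer could use such a classification to route each input to the appropriate operator wrapper.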

I think near-term development will be focusing on normalizing and improving existing abstractions, though. It will probably be at least a year or two before we have something like TrajectoryPropagator makeSimulator(SimulationInput<MethodT> input) that you could wrap trivially as the kernel for an op. But additional contributing developers could significantly impact the timeline and supported use cases. :-)

Along the way, though, I think we’ll be able to start improving the visibility and usability of documentation for future and current public API features available with a GROMACS installation. This will actually be part of the agenda at the biweekly developer teleconference next week (1 July). (You are welcome to attend. Join the developer list to receive an invitation email automatically on the day of the meeting.)

So if I choose to use NBLib, is there a way to add nvt, npt, and em?

There is not much higher-level code for NBLib at this time, but you could write your own integrator / trajectory propagator based on the simple integrator at api/nblib/integrator.h · main · GROMACS / GROMACS · GitLab and api/nblib/integrator.cpp · main · GROMACS / GROMACS · GitLab.
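To illustrate the kind of update loop you would be writing (a plain-Python sketch of the leap-frog scheme that a simple MD integrator implements; the real NBLib code is C++, and a custom propagator for NVT/NPT would extend this step with a thermostat/barostat):

```python
# Leap-frog update, sketched in Python:
#   v(t + dt/2) = v(t - dt/2) + dt * f(t) / m
#   x(t + dt)   = x(t) + dt * v(t + dt/2)

def leapfrog_step(x, v, f, inv_mass, dt):
    """One leap-frog update over per-particle 3-vectors."""
    new_v = [[vi + dt * fi * im for vi, fi in zip(vv, ff)]
             for vv, ff, im in zip(v, f, inv_mass)]
    new_x = [[xi + dt * vi for xi, vi in zip(xx, vv)]
             for xx, vv in zip(x, new_v)]
    return new_x, new_v

# One particle, unit mass, constant force along x.
x, v = [[0.0, 0.0, 0.0]], [[1.0, 0.0, 0.0]]
f, inv_mass, dt = [[2.0, 0.0, 0.0]], [1.0], 0.5
x, v = leapfrog_step(x, v, f, inv_mass, dt)
print(x, v)  # [[1.0, 0.0, 0.0]] [[2.0, 0.0, 0.0]]
```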

Thank you so much for your patient reply.
For the moment, I will use Python gmxapi to build TensorFlow operators.
I’m glad to join the developer list and to contribute to GROMACS if possible.

Thanks again.