Reader Comments

Post a new comment on this article

Benchmark against libroadrunner/AMICI & SBML test suite

Posted by mkoenig on 20 Oct 2023 at 08:37 GMT

Thanks for the interesting article.

I have a few additional questions:
[1] Why were no benchmarks performed against roadrunner and AMICI, both SBML-based high-performance solvers which perform faster than COPASI? See, for instance, the supplementary information in Panchiwala H, Shah S, Planatscher H, Zakharchuk M, König M, Dräger A. The systems biology simulation core library. Bioinformatics. 2022 Jan 12;38(3):864-865. doi: 10.1093/bioinformatics/btab669. PMID: 34554191; PMCID: PMC8756180. Would it be possible to provide benchmarks against roadrunner? Did you use the new JIT compilation in COPASI, which also results in large speedups of execution?

Somogyi ET, Bouteiller JM, Glazier JA, König M, Medley JK, Swat MH, Sauro HM. libRoadRunner: a high performance SBML simulation and analysis library. Bioinformatics. 2015 Oct 15;31(20):3315-21. doi: 10.1093/bioinformatics/btv363. Epub 2015 Jun 17. PMID: 26085503; PMCID: PMC4607739.
Fröhlich F, Weindl D, Schälte Y, Pathirana D, Paszkowski Ł, Lines GT, Stapor P, Hasenauer J. AMICI: high-performance sensitivity analysis for large ordinary differential equation models. Bioinformatics. 2021 Oct 25;37(20):3676-3677. doi: 10.1093/bioinformatics/btab227. PMID: 33821950; PMCID: PMC8545331.

Besides speed, an important consideration is correctness of implementation, along with information on the subset of supported features. The SBML test suite provides a means to test an implementation and to see which subset of features is supported. Could you provide test results for the test suite available at https://github.com/sbmlte...

Best, Matthias König

Competing interests declared: I was involved in the development of roadrunner and SBSCL.

RE: Benchmark against libroadrunner/AMICI & SBML test suite

Torkel replied to mkoenig on 08 Nov 2023 at 14:11 GMT

Dear Matthias,

It was not possible for us to benchmark all systems biology chemical reaction network simulation packages. Our selection was influenced by a variety of factors, including popularity (via citations and other statistics), being a vendor-based solution for their platform (i.e. Matlab SimBiology), and apparent development activity when we initiated the benchmark studies (2022). (At that time, the documentation we found via Google and the libroadrunner.org website redirected us to web.archive.org, giving us the impression that the project was no longer under active development.)

Given your comment above, and an earlier email from Herbert Sauro that made similar points about libRoadRunner, we recently benchmarked RoadRunner on our test set. We used the Julia interface (https://github.com/SunnyX...) and asked the RoadRunner developers for advice in setting it up. We were unable to get RoadRunner to work on our HPC, and as such ran the benchmarks on my local machine (also re-running the Catalyst benchmarks on the same machine for comparison).

Unfortunately, we were unable to reliably benchmark RoadRunner on the two larger models (fceri_gamma2 and BCR), experiencing issues that included failures to load the models and very long run times. Here we provide the results for those cases we could successfully run, re-running each benchmark to confirm the results were reproducible:

The benchmarks are available via the following link: https://gist.github.com/T...

The code for performing these benchmarks can be found in the “RoadRunnerBenchmarks” branch of the Github repository for this paper. For Catalyst, we used the most performant options (as found in Figure 3 of the paper, and described in detail in Table 3). For RoadRunner, we used the default options, as no others were described either in the documentation (https://sunnyxu.github.io...) or in the scripts and correspondence we received when setting these up.

We are content with the extent of these additional benchmarks, and plan to focus on other development priorities going forward. However, if you would like to perform additional studies, we are happy to assist with any questions on using Catalyst.

How we benchmarked all tools, including COPASI, is described in the paper, with the scripts used available in the online repository. We also specify each tool’s version (the latest at the time each benchmark was carried out). The paper describes the various benchmark options and configurations we examined.

We agree that correctness of implementations is important. The link you provide focuses on tests regarding SBML models, a file format for systems biology models. Catalyst SBML support is provided via SBMLToolkit.jl, a separate package from Catalyst, with development led by others. While we have previously added features to Catalyst to enable better SBMLToolkit coverage of the SBML standard, any SBML-related benchmarks and coverage matters would be better discussed with the SBMLToolkit developers via an issue on their Github repo. We are happy to add features to Catalyst to better enable SBML functionality as appropriate, and would be enthusiastic to work with anyone affiliated with the official SBML libraries if there is interest in integrating with Catalyst.

Catalyst functionality is validated by our own set of unit and CI tests, as well as by the tests of the underlying solver libraries that Catalyst leverages. These are publicly available for inspection and run via CI before any code update is merged. We are happy to add further tests should there be Catalyst components not adequately covered by our current CI; please feel free to open a Github issue with suggestions!

Best wishes,
Torkel Loman

No competing interests declared.