Peer Review History

Original Submission: August 6, 2024
Decision Letter - Anders Wallqvist, Editor, Marc R Birtwistle, Editor

Dear Prof. Bhalla,

Thank you very much for submitting your manuscript "Hierarchical optimization of biochemical networks" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Anders Wallqvist

Academic Editor

PLOS Computational Biology

Marc Birtwistle

Section Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: In the manuscript "Hierarchical optimization of biochemical networks", Viswan et al. describe an approach for the hierarchical decomposition of parameter estimation problems. Instead of solving the full optimisation problem directly, the authors propose to construct a sequence of optimisation problems based on the topology of the biochemical process.

To tackle increasingly large models of biological processes, parameter estimation methods have to be efficient and scalable. To achieve this, a variety of novel concepts have been proposed, including hierarchical optimisation strategies for observation and noise parameters. Here, the authors propose nested hierarchical optimization to improve parameter estimation further. Unfortunately, the proposed method is not compared to state-of-the-art methods, and a discussion of related approaches is missing. In my opinion, the extended case with feedback appears similar to the dependent input approach by van Riel and Sontag (DOI: 10.1049/ip-syb:20050076). Furthermore, the paper would profit from an assessment of limitations and applicability, and a further comparison with state-of-the-art methods (see comments below).

Major:

1) The manuscript should discuss related approaches, e.g. the methods by Kotte & Heinemann (https://doi.org/10.1093/bioinformatics/btp004) and van Riel & Sontag (DOI: 10.1049/ip-syb:20050076), which also use a decomposition of the overall problem. As these methods have not been widely used, it would be important to discuss why the authors believe that this will be different for their approach.

2) As there are already hierarchical approaches published for ODE models of biochemical processes (e.g. reference 25, which you include), the authors should in my opinion choose a much more specific title. Furthermore, reference 25 and its extensions in several directions (see for a summary: https://mediatum.ub.tum.de/doc/1625172/a5r8m48zdiyp4qhwmhuqrylg2.PhDThesis_YannikSchaelte_20220725.pdf) should in my opinion be mentioned in the introduction.

3) L. 18-21: "In particular, parameter optimization methods have been implemented in a fragmented manner [9–17]. This is in part due to the very wide diversity of experimental inputs used to constrain such models, but also due to the inherent contradictions and incompleteness of the parameter constraints." - It is unclear to me in which regard the authors consider the implementation of optimisation methods to be fragmented. There are various comprehensive optimisation packages with broad support, and even in systems biology there are toolboxes such as COPASI, D2D and pyPESTO supporting dozens of optimisers which allow for constraints.

4) L. 87-88: "In the HOSS calculations we perform two levels of scoring. First, for each experiment for which the sub-model is tested" - At this point sub-models have not been formally introduced. I would recommend changing the order and starting with a comprehensive model formulation and a definition of sub-models before specifying the objective function.

5) L. 87-98: In my opinion the use of statistically interpretable objective functions is highly beneficial. The objective function used by the authors for an individual dataset (Eq. 2) corresponds to the case of additive normally distributed measurement noise with mean zero and standard deviation equal to the maximum of the observed output. This on its own might be questioned, but the interpretation of the sum (Eq. 3) becomes even more problematic.

It would be interesting to know why the authors did not choose an objective function which allows for statistically meaningful uncertainty quantification.
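The statistical reading behind this comment can be sketched as follows, in generic notation (the paper's Eq. 2 and Eq. 3 may differ in detail):

```latex
% Additive Gaussian noise with standard deviation set to the output maximum
y_i = f(t_i, \theta) + \varepsilon_i, \qquad
\varepsilon_i \sim \mathcal{N}\!\left(0, \sigma^2\right), \qquad
\sigma = \max_i |y_i|

% Negative log-likelihood: with \sigma fixed, minimizing it reduces to the
% max-scaled sum of squares. Summing such terms over datasets that each
% carry a different \sigma (as in Eq. 3) mixes scales that are not
% statistically comparable, which is the concern raised above.
-\log L(\theta) = \frac{n}{2}\log\!\left(2\pi\sigma^2\right)
  + \frac{1}{2\sigma^2} \sum_{i=1}^{n} \bigl(y_i - f(t_i,\theta)\bigr)^2
```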

6) L. 103-105: "we use multistart optimization, by launching local search procedures from randomly chosen starting points generated uniformly in logarithmic scale" - It has been shown that using log-transformed parameters for optimization (not only for initial sampling) improves performance tremendously (https://doi.org/10.1093/bioinformatics/btz020 and your reference [26]). It would be interesting to see what can be gained here.
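As a minimal illustration of this point (a toy decay model with invented numbers, not the paper's pipeline), one can optimize xi = log10(p) rather than p itself, so the local search moves multiplicatively across parameter scales:

```python
import numpy as np
from scipy.optimize import minimize

def model(p, t):
    # simple exponential decay: rate p[0], amplitude p[1]
    return p[1] * np.exp(-p[0] * t)

t = np.linspace(0.0, 5.0, 20)
data = model(np.array([0.7, 2.0]), t)  # synthetic "measurements"

def objective_log(xi):
    p = 10.0 ** xi          # back-transform before simulating
    r = model(p, t) - data
    return float(np.sum(r * r))

# starting point expressed in log scale; p0 = (1.0, 1.0)
x0 = np.array([0.0, 0.0])
res = minimize(objective_log, x0, method="Nelder-Mead")
p_hat = 10.0 ** res.x       # recovers roughly (0.7, 2.0)
```

The same objective expressed directly in p would couple parameters spanning several orders of magnitude to a single additive step size, which is what the log transform avoids.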

7) Eq. 6: The readability of Eq. 6 could in my opinion be improved. I would recommend sticking to standard terminology and defining the optimal points directly, e.g. p_{K-1}^* = \arg \min_{p_{K-1}} {...}, and using "subject to" to refer to the conditions in the next level.

8) L. 154-156: "1) if a species is in the subset I, then all the reactions consuming or producing this species are in the corresponding reaction subset J, namely if i \in I then j \in J whenever S_{ij} \neq 0." - The statement "if i \in I then j \in J whenever S_{ij} \neq 0" implies in my opinion that all species in I are influenced by all reactions in J. This is not what the first part of the sentence states and seems awfully restrictive. Please check.

9) Conceptually, the nested decomposition results in smaller subproblems. Yet, the issue is that the inference on the small module only considers the data which can be directly mapped to the module. Consider the reaction network R1: A -> B with rate k1*A, R2: B -> C with rate k2*B, with A and B being measured. The method would infer a small module with A and R1 only from data on A, but clearly measurements of B provide additional information, in particular if they have a higher resolution.

It would be interesting to assess how much information is lost.
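A minimal sketch of this two-reaction example (all rate values invented) makes the point concrete: the measurements of B also constrain k1, because B accumulates at rate k1*A, so a module that fits k1 from A-data alone discards that constraint. With noiseless data both fits recover k1, but with noisy or sparse A-data only the joint fit retains the information carried by B:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

# R1: A -> B with rate k1*A; R2: B -> C with rate k2*B
k1_true, k2_true = 0.5, 0.3
t = np.linspace(0, 10, 25)

def simulate(k1, k2):
    rhs = lambda _, y: [-k1 * y[0], k1 * y[0] - k2 * y[1]]
    sol = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t)
    return sol.y[0], sol.y[1]

A_obs, B_obs = simulate(k1_true, k2_true)  # synthetic measurements of A and B

def fit_k1(use_B):
    def cost(k1):
        A, B = simulate(k1, k2_true)
        c = np.sum((A - A_obs) ** 2)
        if use_B:
            c += np.sum((B - B_obs) ** 2)  # B-data also informs k1
        return c
    return minimize_scalar(cost, bounds=(0.01, 2.0), method="bounded").x

k1_module = fit_k1(use_B=False)  # A-data only, as in the modular scheme
k1_joint = fit_k1(use_B=True)    # joint fit uses the extra constraint
```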

10) The formulation of the nested decomposition depends on the existence of a directionality in the graph (as shown in Figure 1). I assume that many models cannot be decomposed. Did the authors assess for which fraction of models a nested decomposition is possible? This could easily be done for the BioModels database and would -- if the fraction is high and the models are substantial -- increase the impact of the contribution. Even better would be an evaluation on the PEtab benchmark collection, taking the data mapping into account as well.

11) L. 333-334: "For the purposes of this report, we model two signalling pathways in two formalisms each (Figure 4 A-D)." - I would have preferred if already published models were used. The formulation of the authors' own models and datasets raises the question of whether they are tailor-made for the proposed approach.

12) L. 348-349: "the number of experiments pertaining to each pathway is limited, and considerably below the number of parameters" - Intuitively, this seems like a very problematic setup for the proposed method. If a sub-model is non-identifiable based on the corresponding data, parameters might be chosen which do not allow good estimates to be achieved in subsequent iterations. I would appreciate it if the authors could comment on this to complement the section on degeneracy.

13) L. 386-387: "As a reference, we first ran the HOSS pipeline using flat (non-hierarchical) optimization on the models, employing a number of standard optimization methods in the scipy.minimize library (Figure 7 A)." - For a meaningful comparison, state-of-the-art methods which are tailored to the problem class should ideally be used. For ODE models, these approaches are for instance implemented in D2D, PEtab.jl and pyPESTO. In particular, for gradient-based optimisation, sensitivity-equation-based formulations should be used.

Minor:

L. 21-28: I see the problem mentioned here, but I do not see how this supports, or is even connected to, the previous statements in the paragraph or the contributions of the paper in general.

Indeed, the authors seem to fall into the same trap (see lines 345-348).

L. 32-34: "The current paper focuses on standardizing the calibration and optimization stages of model development, given a large but incomplete set of experimental data." - In my opinion there is no such thing as a "complete dataset". I guess the authors want to refer here to having only partial observations, but this should be clarified. Furthermore, it would be interesting to know how the authors see this work relating to the establishment of the PEtab format (https://doi.org/10.1371/journal.pcbi.1008646).

L. 60-61: "We illustrate its use on an extant database of over 100 experiment definitions" - The term "experiment definitions" is not well defined and should be clarified to avoid the impression that the dataset is much larger than for other models.

L. 66: "parametric optimization problems" -> "parameter optimization problems"

L. 101: "S is a space of constraints" - S does not seem to describe the constraints but the set of admissible parameter vectors. Furthermore, "S" is used for multiple different things. I would suggest choosing a different symbol here to avoid confusion.

L. 111-112: "We refer to this procedure as parameter scrambling. Despite its simplicity, multistart optimization with logarithmic sampling has proven to be effective in benchmarks of biochemical pathways [26]." - The reference appears inappropriate, as [26] optimises xi = log(p) (using p = exp(xi) in the model) and does not merely sample the starting points differently.

L. 150-151: "Some species forming a subset BS ⊂ S are buffered, and their concentrations are kept constant." - This statement is not clear to me. Are the authors referring to conserved quantities? If so, "buffering" might not be the right term.

L. 172-174: "A pair of species (i, j) ∈ E defines an edge from i to j if and only if there is a reaction that consumes or produces the species j, and its rate depends on the concentration of the species i." - Please mention here, or in the sentence before, that you are considering directed graphs.

L. 378: "Black-box, non-gradient optimization methods work well for flat optimization" - I'm puzzled by this observation. In our experience, particularly for problems with relatively flat objective functions, gradient-based methods are far superior. Yet, this only holds if gradients are evaluated accurately, e.g. using forward sensitivity analysis. The dependence of the results on such choices could be discussed.

L. 426: "Multistart methods yield lower optimization cost: initScram method" - I recommend avoiding the term "optimization costs". I would use standard terms such as "objective function value" to avoid confusion with "computational cost".

Reviewer #2: This is a well written paper addressing an important problem - optimization of signaling pathway models. The introduction nicely reviews the current state of the art, and explains why the problem is critical.

My main comments relate to definition of the hierarchies, autonomous pairs and SCC. Specifically:

Lines 172-173: This sounds as if i is causal to j, but then in lines 174-175 you define causality with an example of j causal to i. I think it would be less confusing to use the same causality in both sentences. From the definition of SCC on lines 178-179, I would label A, D and E in Fig 1 as SCCs. However, this definition is different from the one given in Figure 1. In the caption to Figure 1, it seems to me that the definition of SCC doesn't apply to b, which is only connected to c and f, but not to a, d or e. So either you need to change this sentence or remove b from being an SCC. Possibly this applies to some other components.
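For concreteness, the standard definition being applied here (an SCC is a maximal vertex set in which every vertex reaches every other) can be checked mechanically on a small example. The graph below is hypothetical, not the one in the paper's Figure 1:

```python
from collections import defaultdict

edges = [("a", "b"), ("b", "c"), ("c", "a"),  # 3-cycle: one nontrivial SCC
         ("c", "d"), ("d", "e")]              # feed-forward tail

adj = defaultdict(list)
nodes = set()
for u, v in edges:
    adj[u].append(v)
    nodes |= {u, v}

def reachable(start):
    # iterative DFS collecting every vertex reachable from `start`
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u])
    return seen

# u and v share an SCC iff each is reachable from the other
reach = {n: reachable(n) for n in nodes}
sccs = {frozenset(v for v in nodes if v in reach[u] and u in reach[v])
        for u in nodes}
sccs = sorted(sorted(c) for c in sccs)
# under this definition, "d" and "e" are trivial singleton SCCs
```

A vertex connected into a cycle only by a one-way edge (like "d" here) does not join that cycle's SCC, which is the distinction at issue for node b in Figure 1.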

Also, can you label the autonomous pairs in this graph? The definition of J1 (line 191) is confusing. How can it include reactions _producing_ species from I1 if I2 has no incoming connections?

Fig 2 and Fig 6: I thought I understood how levels were defined, but these figures call that into question. How is the pink block with RGR, Ras_GDP of rank 0, given that it has an input? In Fig 6, how can a block be of multiple levels? I thought level 1 meant no inputs, which means it cannot also be level 2. This needs to be clarified. Also, what is the difference between rank and level? They seem the same, in which case only one of the terms should be used.

Minor comments:

In describing types of models in the introduction, the authors leave out stochastic implementations of biochemical reactions. I'm wondering why these are excluded.

Line 243 - explaining agony: If r has to be not too small to avoid r-blocks with 1 species, why do the authors use models with r equal to one?

Line 266: There appears to be a verb missing in this clause

Line 348-349: In quantifying number of experiments and demonstrating parameters are under-constrained, does that take into account that some experiments measure multiple time points? It should, since comparing multiple time points doesn't increase number of parameters, though I still expect the parameters to be under-constrained.

Fig 4: Text in this figure needs to be much bigger.

Reviewer #3: Summary:

The manuscript “Hierarchical optimization of biochemical networks” by Viswan et al. introduces HOSS, a method to break down large system models into individual pathway blocks organized in a nested hierarchy, allowing for efficient parameter optimization at each level. However, there are several key issues that limit the overall usefulness of the work. One key issue is that it is unclear how the hierarchical decomposition handles a signaling network with significant feedback mechanisms. In my opinion, feedback mechanisms are almost always present in biological signaling pathways and play crucial roles. If HOSS can only effectively break down systems without feedback, this significantly diminishes the utility of the method.

Questions:

In line 210, the authors wrote, “The autonomy of these resulting blocks is only approximate, but it allows us to benefit from hierarchical optimization.” What exactly does HOSS do to approximate signaling pathways with feedback? Does HOSS ignore the feedback mechanisms, break down the signaling pathways, optimize the parameters in each block independently, then reintroduce the feedback loops to estimate their parameters, fix those feedback parameters, and then re-optimize the parameters in each block, iterating this process?

What are the specific benefits of hierarchical optimization when feedback mechanisms are present? Apart from reducing wall-clock time, which is not a major concern for me since estimating hundreds of parameters typically takes a few hours at most, what other advantages does this approach provide?

For example, in Fig 1, if there are two feedback loops from F3 to B0 and A0, can HOSS still break down the signaling diagram? How would HOSS approach optimizing the parameters in this scenario?

If the feedback in the diagram is critical to the system and my goal is to estimate the parameters within the feedback loops, what are the advantages of using HOSS? How does the HOSS approach differ from a flat optimization approach in this context?

Another issue concerns the reaction rate vector. Are the reactions limited to zero-order, first-order, or second-order reactions? Are the reaction rates constrained to be the product of reactants with the parameters?

There is a subsection about “Signal back-propagation, reduced Michaelis-Menten mechanisms, and irreversible reactions.” The authors state in line 259, “As signaling pathways models often assume QSS, it is useful to have a tool that reduces mass action models by eliminating complexes.” Michaelis-Menten mechanisms can be important. Can HOSS retain the equation format for these mechanisms? For example, if parameters like ki+, ki-, and kcati are crucial for estimation based on the data, is HOSS capable of estimating these parameters, or does it resort to flat optimization in such cases?

In addition to Michaelis-Menten kinetics, other rate equations, such as Hill functions and sigmoidal functions, can be used to describe reaction rates. Can HOSS incorporate these diverse rate equation formats and estimate the parameters within these equations?

In Figure 3F, it seems that HOSS does not fit the experimental data.

In Figure 6C, it appears that the Flat approach yields a cost value of about 8, while the Hierarchical approach gives a cost value of around 2. However, it’s difficult to determine whether the difference of approximately 6 is significant. The authors should plot the simulation results from both the Flat and Hierarchical approaches alongside the experimental data. This would illustrate what a cost value difference of 6 looks like and clarify whether HOSS offers a real advantage over flat optimization.

In Figure 8D, HOSS optimization is compared to Flat. In the D3_EGFR case, Flat performs the same as HOSS, if not slightly better. In D4_EGFR, Flat outperforms HOSS. In D3_b2AR and D4_b2AR, HOSS shows better results than Flat. Based on these outcomes, I don’t see significant benefits in using HOSS over Flat, especially since the improvement in cost value by HOSS is less than 0.05. I’m unsure how meaningful a 0.05 improvement is in this context. As mentioned earlier, the authors should plot the simulation results from both the Flat and HOSS approaches alongside the experimental data. I suspect that a 0.05 improvement in cost value may be indistinguishable when the simulation results are plotted.

The same issues apply to Figure 10B. How significant is an improvement in cost value of less than 0.1 when comparing Flat and HOSS? The authors should plot the simulation results from both the Flat and HOSS approaches alongside the experimental data to demonstrate the importance of a cost value improvement of less than 0.1.

Overall, if HOSS cannot preserve feedback loops, limits the types of rate functions, and fails to provide a substantial improvement in cost value, it is difficult to justify its advantage over a straightforward flat optimization method.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc.. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Revision 1

Attachments
Attachment
Submitted filename: Response_To_Reviewers.pdf
Decision Letter - Anders Wallqvist, Editor, Marc R Birtwistle, Editor

Dear Prof. Bhalla,

Thank you very much for submitting your manuscript "Mathematical basis and toolchain for hierarchical optimization of biochemical networks" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Anders Wallqvist

Academic Editor

PLOS Computational Biology

Marc Birtwistle

Section Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #2: The authors have addressed all of my concerns

Reviewer #3: The authors have addressed most of my questions, but two remain unclear to me.

1. The first issue is regarding Figure 3F, which has been changed to Figure 2F in the revised version. In my view, HOSS does not fit the experimental data. The authors explained that “Here we show one plot out of an optimization that included multiple experiments with different profiles, as well as multiple other experiments. The presented plot is one out of a large number of experimental constraints that comprise this parameterization problem, and hence the fit to this plot may have been worsened to improve the overall optimization cost.” However, there are only two plots showing the fittings, Figure 2E and 2F. The authors need to provide the fitting results for several other “experiments with different profiles, as well as multiple other experiments” to support their explanation. I suggest adding about four panels showing different well-fitted protein level changes in addition to aEGFR and aMAPK. The authors need to provide fitting plots demonstrating which proteins are associated with the worsening of aMAPK, rather than vaguely referring to “multiple experiments with different profiles” or “multiple other experiments”. If only two plots are shown, it suggests that HOSS cannot fit both proteins simultaneously.

2. The second issue concerns Figure 7D. This plot confirmed my concern about the meaningfulness of the cost value improvement. In my view, there is no difference in the fittings between the plain and hierarchical methods for Erk2-pp and Mek1-pp. The hierarchical method might be slightly better than plain in Mos-P, as it avoids the incorrect upward trend. The authors also provided a new Figure 11 showing the distribution of cost values. D3_EGFR and D4_EGFR are nearly identical between the HOSS and Flat methods. In the case of D3_b2AR and D4_b2AR, the HOSS method may show an average cost value improvement of 0.1, as shown in Figure 12B. Figure 7D is the only plot comparing the HOSS and Flat methods with the data. Although there may be a 0.1 difference in cost value as shown in Figure 11, Figure 7D demonstrates that this improvement is nearly indistinguishable when the simulation results are plotted. I suggest that the authors select 3 additional proteins that best highlight the differences between the HOSS and Flat methods in the optimization of the reduced MAPK model case. Based on Figure 7D, I would conclude that the cost values for the HOSS optimization are nearly the same as those for the Flat optimization, rather than being better.

Overall, I agree that the advantages of HOSS in execution time and the superiority of modular approaches over direct methods stem from the significant increase in optimization complexity as the problem size grows. However, it is difficult to conclude that the numerical experiments illustrate a gain in precision. Figures 2E and 2F indicate that HOSS by itself cannot fit the data well, and Figure 7D shows that HOSS has not improved precision much when compared with the Flat method. As an optimization tool, I believe it is crucial to clarify how accurately HOSS can fit the data and how much it can reduce the cost values compared to the Flat method.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Reviewer #3: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc.. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Revision 2

Attachments
Attachment
Submitted filename: response_to_reviewers2.pdf
Decision Letter - Anders Wallqvist, Editor, Marc R Birtwistle, Editor

Dear Prof. Bhalla,

We are pleased to inform you that your manuscript 'Mathematical basis and toolchain for hierarchical optimization of biochemical networks' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Marc Birtwistle

Section Editor

PLOS Computational Biology

Feilim Mac Gabhann

Editor-in-Chief

PLOS Computational Biology

Jason Papin

Editor-in-Chief

PLOS Computational Biology

***********************************************************

While some minor disagreements remain about some aspects of the paper, we feel that, given the broad relevance of the topic, it is better to accept and commence with post-publication review, rather than continue with more rounds of peer review.

Formally Accepted
Acceptance Letter - Anders Wallqvist, Editor, Marc R Birtwistle, Editor

PCOMPBIOL-D-24-01317R2

Mathematical basis and toolchain for hierarchical optimization of biochemical networks

Dear Dr Bhalla,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Lilla Horvath

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.