Peer Review History

Original Submission: October 10, 2024
Decision Letter - Sameena Naaz, Editor

PONE-D-24-45416
A novel cloud task scheduling framework using hierarchical deep reinforcement learning for cloud computing
PLOS ONE

Dear Dr. Cui,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jan 03 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org . When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,

Sameena Naaz

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice.

3. We suggest you thoroughly copyedit your manuscript for language usage, spelling, and grammar. If you do not know anyone who can help you do this, you may wish to consider employing a professional scientific editing service.  

The American Journal Experts (AJE) (https://www.aje.com/) is one such service that has extensive experience helping authors meet PLOS guidelines and can provide language editing, translation, manuscript formatting, and figure formatting to ensure your manuscript meets our submission guidelines. Please note that having the manuscript copyedited by AJE or any other editing services does not guarantee selection for peer review or acceptance for publication. 

Upon resubmission, please provide the following: 

● The name of the colleague or the details of the professional service that edited your manuscript

● A copy of your manuscript showing your changes by either highlighting them or using track changes (uploaded as a *supporting information* file)

● A clean copy of the edited manuscript (uploaded as the new *manuscript* file)

4. Thank you for stating the following financial disclosure: Key Realm R&D Program of Guangdong Province (2021B0707010003); National Natural Science Foundation of China (62273109); Guangdong Basic and Applied Basic Research Foundation (2022A1515012022, 2023A1515240020, 2023A1515011913); Key Field Special Project of Department of Education of Guangdong Province (2024ZDZX1034); Maoming Science and Technology Project (210429094551175, 2022DZXHT028, mmkj2020033); Projects of PhDs' Start-up Research of GDUPT (2023bsqd1002, 2023bsqd1013, XJ2022000301); Special Innovation Projects for Ordinary Universities in Guangdong Province in 2023 (2023KTSCX086).

Please state what role the funders took in the study. If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed. 

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper proposes a novel cloud task scheduling framework using hierarchical deep reinforcement learning (DRL) to address the challenges of large-scale task scheduling in cloud computing. The framework defines a set of virtual machines (VMs) as a VM cluster and employs hierarchical scheduling to allocate tasks first to the cluster and then to individual VMs. The scheduler, designed using DRL, adapts to dynamic changes in the cloud environment by continuously learning and updating network parameters. Experimental results demonstrate that this approach effectively balances cost and performance, optimizing objectives such as load balance, cost, and overdue time. One potential shortcoming of the proposed hierarchical deep reinforcement learning (DRL) framework for cloud task scheduling could be its complexity and computational overhead. Implementing and maintaining a DRL-based scheduler requires significant computational resources and expertise in machine learning. Additionally, the continuous learning and updating of network parameters might introduce latency, which could impact real-time task scheduling efficiency.

How could this be handled in the proposed methodology?

Furthermore, the framework’s performance heavily depends on the quality and quantity of training data, which might be challenging to obtain and maintain in a dynamic cloud environment.

Contributions could be highlighted in the introduction section.

The quality, symmetry, and dimensions of the figures need to be improved for formal presentation.

The methodology section does not distinguish how this particular technique is better than existing work.

The methodology section hardly cites any literature for the formulae and state-of-the-art techniques.

Some additional recommendations are as follows:

A. The paper needs to be revised for grammatical errors and typos.

B. The article needs to be proofread by a native English speaker or rewritten in an academic style.

C. The abstract needs to showcase the numerical findings of the research study to reflect its contribution to the field.

D. The introduction needs to provide the rationale of the study and a brief literature review of existing studies, which are hard to differentiate in its current form.

Reviewer #2: 1. How does hierarchical deep reinforcement learning help in scheduling tasks in the cloud paradigm?

2. The abstract should be concise and accurate.

3. What is the motivation behind cost and load balancing, how will they be handled by your scheduler, and what is their impact?

4. There are no contributions written by the authors in the article.

5. Motivation statements are not written in the manuscript.

6. The literature review is poor and the research gap is not identified properly.

7. The deep reinforcement learning technique was not properly mapped to the scheduling process.

8. The mathematical modelling in the manuscript is weak.

9. The algorithm design is not up to the mark.

10. The result discussion is poor.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/ . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org . Please note that Supporting Information files do not need this step.

Revision 1

Reviewer #1:

Comment 1: The paper proposes a novel cloud task scheduling framework using hierarchical deep reinforcement learning (DRL) to address the challenges of large-scale task scheduling in cloud computing. The framework defines a set of virtual machines (VMs) as a VM cluster and employs hierarchical scheduling to allocate tasks first to the cluster and then to individual VMs. The scheduler, designed using DRL, adapts to dynamic changes in the cloud environment by continuously learning and updating network parameters. Experimental results demonstrate that this approach effectively balances cost and performance, optimizing objectives such as load balance, cost, and overdue time. One potential shortcoming of the proposed hierarchical deep reinforcement learning (DRL) framework for cloud task scheduling could be its complexity and computational overhead. Implementing and maintaining a DRL-based scheduler requires significant computational resources and expertise in machine learning. Additionally, the continuous learning and updating of network parameters might introduce latency, which could impact real-time task scheduling efficiency. How could this be handled in the proposed methodology? Furthermore, the framework’s performance heavily depends on the quality and quantity of training data, which might be challenging to obtain and maintain in a dynamic cloud environment.

Answer: Thank you for the helpful advice. These shortcomings have been addressed in the Abstract and Conclusions.

ABSTRACT: There are still shortcomings in the method used in this article. Firstly, the continuous learning and updating of network parameters might introduce latency, which could impact real-time task scheduling efficiency. Furthermore, the framework's performance heavily depends on the quality and quantity of training data, which might be challenging to obtain and maintain in a dynamic cloud environment.

Comment 2: Contributions could be highlighted in the introduction sections

Answer: Thank you for the helpful advice. The contributions have been highlighted in the introduction section.

This research proposes an innovative Hierarchical Intelligent Task Scheduling framework (HITS) based on the hierarchical deep reinforcement learning algorithm to address the challenge of large-scale task scheduling in cloud computing. Compared with traditional methods, HITS exhibits significant advantages. Firstly, through hierarchical partitioning and effective manipulation of the solution space, it accelerates the task scheduling process and simultaneously optimizes the task overdue time and cost, which is particularly crucial in large-scale task scheduling scenarios. Secondly, the model structure and return function of deep reinforcement learning are meticulously designed in accordance with the dynamic characteristics of the cloud environment. In response to the dynamic variation in the number of virtual machines, by modeling the Gaussian distribution of relevant features and using it as state information, the model can adaptively adjust. For different load conditions, a unique reward function is designed, which feeds back rewards based on the load to drive the model to learn corresponding decision-making strategies, thereby achieving efficient and intelligent task scheduling in the complex and variable cloud environment.

Comment 3: The quality, symmetry and the dimensions of figures needs to be improved and enhanced for formal presentation.

Answer: Thank you for the helpful advice. The figures throughout the text have been revised.

Comment 4: The methodology section is not distinguishing that how this particular technique is better than existing work.

Answer: Thank you for the helpful advice. The following content has been added to Section 3 (Materials and Methods).

Advantages of the Hierarchical Deep Reinforcement Learning Technology Employed in This Research over Existing Works:

Advantages of the Hierarchical Architecture: Compared with traditional single-layer task scheduling methods, the hierarchical architecture of HITS can effectively reduce the complexity of the problem. By dividing the task scheduling process into two levels, namely the cluster level and the virtual machine level, the decision space at each level is diminished, and the scheduling efficiency is enhanced. In cluster-level scheduling, clusters suitable for task processing can be rapidly screened out, avoiding a global search among all virtual machines, thereby significantly shortening the task allocation time. Meanwhile, this hierarchical approach is also conducive to resource management and optimization, better balancing the loads among different clusters and virtual machines and improving resource utilization.
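The two-level decision process described above can be illustrated with a minimal sketch. The greedy cluster and VM scoring below is a hypothetical stand-in for the learned DRL policies, purely to show how restricting the second decision to one cluster shrinks the search space:

```python
def schedule(task_demand, clusters):
    """Two-level scheduling sketch: pick a cluster first, then a VM inside it.

    `clusters` is a list of lists of free-capacity values, one per VM.
    Hypothetical stand-in policy: choose the cluster with the most total
    free capacity, then the VM in that cluster with the most free room.
    """
    # Level 1: cluster selection over a small decision space
    cluster_id = max(range(len(clusters)), key=lambda c: sum(clusters[c]))
    vms = clusters[cluster_id]
    # Level 2: VM selection restricted to the chosen cluster only
    candidates = [v for v in range(len(vms)) if vms[v] >= task_demand]
    if not candidates:
        return None  # no VM in the chosen cluster can host the task
    vm_id = max(candidates, key=lambda v: vms[v])
    vms[vm_id] -= task_demand  # commit the allocation
    return cluster_id, vm_id

clusters = [[4.0, 1.0], [2.0, 2.5]]
print(schedule(1.5, clusters))  # cluster 0 has the most free capacity
```

Instead of scoring every VM globally, each level searches only its own small candidate set, which is where the claimed reduction in complexity comes from.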

Adaptability of the Deep Reinforcement Learning Model: The deep reinforcement learning model in this research, through a meticulously designed state space and return function, demonstrates remarkable adaptability to the dynamic changes of the cloud environment. Unlike traditional rule-based or heuristic scheduling algorithms, the deep reinforcement learning model can automatically learn and adapt to the dynamic changes of tasks and resources in the cloud environment. For instance, by modeling the Gaussian distribution of changes in the number of virtual machines, the model can promptly perceive the increase or decrease of virtual machine resources and adjust the task allocation strategy accordingly. When confronted with different load situations, the unique reward function can guide the model to make a reasonable trade-off between task overdue time and cost, thereby achieving satisfactory scheduling performance under various complex load conditions.
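The Gaussian-feature idea mentioned above, summarizing a variable number of VMs by distribution statistics so the state vector keeps a fixed shape, could be sketched as follows (the exact features the authors use are not specified; mean, standard deviation, and count are illustrative assumptions):

```python
import math

def gaussian_state(vm_loads):
    """Summarize a variable-length list of VM loads as fixed-size
    Gaussian statistics (mean, std, count), so the DRL state vector
    keeps the same shape when VMs are added or removed."""
    n = len(vm_loads)
    mean = sum(vm_loads) / n
    var = sum((x - mean) ** 2 for x in vm_loads) / n
    return (mean, math.sqrt(var), float(n))

# The same-shape state works for 3 VMs or 5 VMs:
print(gaussian_state([0.2, 0.4, 0.6]))
print(gaussian_state([0.1, 0.3, 0.5, 0.7, 0.9]))
```

A fixed-size summary like this is what lets a network with a fixed input layer perceive the addition or removal of virtual machines without retraining its architecture.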

Learning and Optimization Capabilities: The deep reinforcement learning model possesses powerful learning and optimization capabilities. Compared with traditional static scheduling algorithms, it can continuously learn in the process of ongoing task scheduling and constantly optimize its own scheduling strategy. Through techniques such as experience replay and target network, the model can effectively utilize historical empirical data for learning, avoid getting trapped in local optimal solutions, and gradually converge to a more optimal scheduling strategy. Such learning and optimization capabilities enable the HITS framework to continuously adapt to the changes of the cloud environment and continuously improve the efficiency and quality of task scheduling during long-term operation.
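Experience replay and target networks, as referenced above, can be sketched in a few lines. The tabular Q update below stands in for the neural network purely for illustration; it is not the authors' implementation:

```python
import random
from collections import deque

random.seed(0)
replay = deque(maxlen=1000)  # experience replay buffer
q, q_target = {}, {}         # online and target "networks" (tabular stand-ins)

def q_get(table, s, a):
    return table.get((s, a), 0.0)

def train_step(actions, alpha=0.5, gamma=0.9, batch=4):
    """Sample past transitions and move the online table toward a
    target computed with the frozen target table (DQN-style)."""
    for s, a, r, s2 in random.sample(replay, min(batch, len(replay))):
        target = r + gamma * max(q_get(q_target, s2, a2) for a2 in actions)
        q[(s, a)] = q_get(q, s, a) + alpha * (target - q_get(q, s, a))

def sync_target():
    q_target.update(q)  # periodically copy online values to the target

# Toy usage: in state 0, action "a" earns reward 1, action "b" earns 0
for _ in range(20):
    replay.append((0, "a", 1.0, 0))
    replay.append((0, "b", 0.0, 0))
for step in range(50):
    train_step(actions=["a", "b"])
    if step % 10 == 0:
        sync_target()
print(q_get(q, 0, "a") > q_get(q, 0, "b"))  # "a" should be valued higher
```

Sampling old transitions breaks the correlation between consecutive scheduling decisions, and the lagging target table is what stabilizes the bootstrapped update.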

Comment 5: The methodology section hardly cites any literature for the formulae and state-of-the-art techniques.

Answer: Thank you for the helpful advice. We have added citations for Reinforcement Learning: An Introduction, Double Deep Q-Network (DDQN), and Playing Atari with Deep Reinforcement Learning, and have added four formulae to the methodology section.

30. R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. IEEE Transactions on Neural Networks, 1998, 9(5):1054.

31. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with Deep Reinforcement Learning. arXiv preprint arXiv:1312.5602, 2013.

Comment 6: Paper needs to be revised for grammatical errors and typos.

Answer: Thank you for the helpful advice. The grammatical errors and typos have been corrected with the help of native English speakers.

Comment 7: Article needs to be proofread from the native English speaker or rewritten in the academic writing.

Answer: Thank you for the helpful advice. The paper has been proofread by native English speakers.

Comment 8: The abstract needs to showcase the numerical finding of the research study to reflect the contribution in the field.

Answer: Thank you for the helpful advice. The abstract has been revised.

With the increasing popularity of cloud computing services, their large and dynamic load characteristics have rendered task scheduling an NP-complete problem. Aiming at the problem of large-scale task scheduling in cloud computing environments, this paper proposes a novel cloud task scheduling framework using hierarchical deep reinforcement learning (DRL). The framework defines a set of virtual machines (VMs) as a VM cluster and employs hierarchical scheduling to allocate tasks first to a cluster and then to individual VMs. The scheduler, designed using DRL, adapts to dynamic changes in the cloud environment by continuously learning and updating network parameters. Experiments demonstrate that it skillfully balances cost and performance: in low-load situations, costs are reduced by using low-cost nodes within the Service Level Agreement (SLA) range; in high-load situations, resource utilization is improved through load balancing. Compared with classical heuristic algorithms, it effectively optimizes load balancing, cost, and overdue time, achieving a 10% overall improvement. The method still has shortcomings. One potential shortcoming is its complexity and computational overhead: implementing and maintaining a DRL-based scheduler requires significant computational resources and machine-learning expertise, and the continuous learning and updating of network parameters might introduce latency, which could impact real-time task scheduling efficiency. Furthermore, the framework's performance heavily depends on the quality and quantity of training data, which might be challenging to obtain and maintain in a dynamic cloud environment.

Comment 9: The introduction needs to be providing rationale of the study, and brief literature review of existing studies, which is hard to differentiate in its current form.

Answer: Thanks for good advice. The introduction has been revised.

Cloud computing is a resource delivery and usage model. Service providers integrate a large number of nodes into a unified resource pool through virtualization technology, and users obtain the required computing resources through the network [1]. Cloud computing, as one of the core infrastructures in the current field of information technology, faces increasing pressure in task scheduling with the rapid development of big data, the Internet of Things, and 5G technologies. Task scheduling is an important research direction in cloud computing, whose essence is to reasonably allocate user requests to computing nodes for processing. However, this generates a huge solution space, and the optimal solution cannot be obtained in polynomial time, so the task scheduling of cloud computing is an uncertain NP problem [2,3].

Traditional task scheduling methods, whether heuristic algorithms [4] based on simple rules or metaheuristic algorithms [5], exhibit numerous limitations when dealing with large-scale and dynamically changing cloud tasks [6]. For instance, heuristic algorithms often lack adaptability to complex environmental changes and have difficulty flexibly adjusting scheduling strategies under different load and resource conditions. Although metaheuristic algorithms can perform global optimization to a certain extent, they have complex parameter settings and high computational overhead, making them difficult to apply effectively in cloud task scheduling scenarios with high real-time requirements.

Many researchers have studied this problem and proposed heuristic and metaheuristic algorithms to solve it. However, the actual cloud computing environment is complicated and dynamic, and traditional methods cope poorly with it. Researchers are therefore applying reinforcement learning (RL) and deep reinforcement learning (DRL), with their learning capabilities, to the dynamic scheduling problem of cloud computing [7-9]. Due to the diversity of user requests and resources, different quality of service (QoS) constraints must be simultaneously met, and how to respond to large-scale user requests while meeting the requirements of cloud service providers is an urgent problem. Intelligent scheduling algorithms are essential to overcome the difficulties of large-scale task scheduling. In this research, a hierarchical intelligent task scheduling framework (HITS) based on a hierarchical DRL algorithm is proposed. In the scheduling framework, a collection of VMs is called a VM cluster. When the framework receives a task request, it allocates the task to a cluster, and then to a VM via the task scheduler inside the cluster. We apply DRL technology to the scheduler; through the design of the state space and return function of each layer, it can adapt to the dynamic changes of the cloud computing environment and adjust its scheduling strategy through continuous learning.

Given this situation, we propose to adopt hierarchical deep reinforcement learning to address the cloud task scheduling problem. Deep reinforcement learning has powerful learning capabilities and adapts to complex environments, automatically optimizing scheduling strategies through continuous interaction with the cloud environment. The hierarchical architecture decomposes large-scale problems into manageable sub-problems, improving decision-making efficiency and system scalability. We expect this method to meet the cost-control requirements of cloud service providers while providing users with more efficient and reliable services, achieving comprehensive improvements in performance, cost, and flexibility; filling the gaps left by traditional methods in handling large-scale, dynamic cloud task scheduling; and promoting the further development and application of cloud computing technology.

Cost and load balancing are two crucial objectives in cloud task scheduling. From the perspective of cost, cloud service providers need to reduce the cost of resource usage as much as possible to enhance profit margins while meeting user requirements. Different types of virtual machines have diverse cost structures, including computing cost, storage cost, and bandwidth cost. Our scheduler, through the deep reinforcement learning algorithm, comprehensively considers the resource requirements of tasks and the cost characteristics of virtual machines during the task allocation process. For example, when a task arrives, the scheduler evaluates the idle resource situation and the corresponding cost of the virtual machines within each cluster and preferentially assigns the task to the combination of virtual machines or clusters that can meet the task requirements and have a lower cost. Such an approach can effectively reduce the overall cost of task execution and improve the cost-effectiveness of resources.
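The cost-aware selection described above can be sketched in a few lines. The feasibility-then-cheapest rule here is a hypothetical simplification of what the learned policy trades off; the data shapes are invented for illustration:

```python
def pick_cheapest_feasible(task_demand, vms):
    """Hypothetical cost-aware selection: among VMs with enough free
    capacity for the task, pick the one with the lowest hourly cost.

    `vms` maps vm_id -> (free_capacity, cost_per_hour).
    """
    feasible = {v: cost for v, (cap, cost) in vms.items() if cap >= task_demand}
    if not feasible:
        return None  # nothing can host the task
    return min(feasible, key=feasible.get)

vms = {"vm1": (4.0, 0.40), "vm2": (2.0, 0.10), "vm3": (1.0, 0.05)}
print(pick_cheapest_feasible(1.5, vms))  # vm3 is too small, vm2 is the cheapest fit
```

In the actual framework this trade-off is learned through the reward signal rather than hard-coded, but the decision it converges toward has this shape: meet the requirement first, then minimize cost.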

For load balancing, the motivation is to ensure that the utilization rates of various resource nodes (clusters and virtual machines) in the cloud environment are relatively balanced and avoid situations where some nodes are overloaded while others are idle. This not only helps to improve the overall performance and stability of the system but also extends the service life of hardware devices. In the decision-making process, our scheduler takes the load situations of clusters and virtual machines as important state information and inputs it into the deep reinforcement learning model. By designing a reasonable return function, positive rewards are given to scheduling decisions that can achieve load balancing, and vice versa. For example, when the standard deviation of the virtual machine loads within a cluster is small, indicating a relatively balanced load, the scheduler tends to continue assigning tasks to this cluster. When the load of a certain virtual machine is too high, the scheduler will consider assigning subsequent tasks to other virtual machines or clusters with lighter loads, thereby dynamically adjusting the task allocation strategy to achieve load balancing of resources in the cloud environment and reducing performance bottlenecks and resource waste caused by uneven loads.
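A reward of the shape described above, rewarding a small standard deviation of VM loads and penalizing overload, might look like the following sketch (the threshold and penalty coefficients are hypothetical, not values from the manuscript):

```python
import statistics

def load_balance_reward(vm_loads, overload_threshold=0.9, penalty=1.0):
    """Hypothetical reward sketch: higher when VM loads are balanced
    (small standard deviation), with an extra penalty for each VM whose
    load exceeds the overload threshold."""
    spread = statistics.pstdev(vm_loads)  # population std of the loads
    overloaded = sum(1 for x in vm_loads if x > overload_threshold)
    return -spread - penalty * overloaded

balanced = load_balance_reward([0.5, 0.5, 0.5])   # std = 0, no overload
skewed = load_balance_reward([0.95, 0.2, 0.2])    # large std plus one overloaded VM
print(balanced > skewed)  # the balanced cluster earns the higher reward
```

Feeding a signal like this back after each allocation is what drives the policy toward the dynamic rebalancing behaviour described in the paragraph above.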

Attachments
Attachment
Submitted filename: Response to Reviewers.docx
Decision Letter - Sameena Naaz, Editor

A novel cloud task scheduling framework using hierarchical deep reinforcement learning for cloud computing

PONE-D-24-45416R1

Dear Dr. Cui,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the 'Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Sameena Naaz

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

The manuscript can be accepted for publication

Formally Accepted
Acceptance Letter - Sameena Naaz, Editor

PONE-D-24-45416R1

PLOS ONE

Dear Dr. Cui,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Sameena Naaz

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.