Peer Review History

Original Submission: October 18, 2021
Decision Letter - Zhen Hua Hu, Editor

PONE-D-21-33293

Achieving clinically optimal balance between accuracy and simplicity of a formula for manual use: development of a simple formula for estimating liver graft weight with donor anthropometrics

PLOS ONE

Dear Dr. Gotoh,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 01 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Zhen Hua Hu, MD, PhD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please include your tables as part of your main manuscript and remove the individual files. Please note that supplementary tables should remain uploaded as separate "supporting information" files.

3. Please amend your current ethics statement to address the following concerns:

a) Did participants provide their written or verbal informed consent to participate in this study?

b) If consent was verbal, please explain i) why written consent was not obtained, ii) how you documented participant consent, and iii) whether the ethics committees/IRB approved this consent procedure.

4. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections does not match. 

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

5. Thank you for stating the following financial disclosure: "This work was supported with grants from the Japan Agency for Medical Research and Development and from Japan Society for the Promotion of Science (19K09111); MG received the grant."

Please state what role the funders took in the study. If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed. 

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

6. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

7. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please ensure that your ethics statement is included in your manuscript, as the ethics statement entered into the online submission form will not be published alongside your manuscript. 

8. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Major points:

The authors have generated a formula for liver weight estimates that balanced simplicity with accuracy. More complex formulas (Yu, Chouker, Lin, and Chan) may be affected by overfitting and super fitting. It is critical to justify why a new formula is needed over already existing simple, linear formulas (DeLand and North, Yoshizumi, Urata, Vauthey, and Hashimoto).

The homogeneity of the study population may decrease the generalizability. This also may have contributed to the decrease in “in-sample” vs “current cohort” correlation coefficients for previous studies as seen in Table 1. This should be addressed in the discussion section.

Minor points:

Consider using different shapes rather than or in addition to color to differentiate groups in your figures to aid readability for colorblind readers or when printed

Page 4, last sentence of abstract: you use the terms “simple linear model” and “univariable model.” These are considered interchangeable by most lay literature. If both models are truly simple linear, it would be less confusing to the reader to use consistent terminology. If there are differences in the formulas, then please add the univariable model using BSA to table 3 for comparison.

Table 1: Study by Yu et al, 2004 includes pediatric data

Figure 1: In the text, please note that you are focused on deceased donors and justify the rationale for the exclusion of domino transplants, as these are still whole livers

Figure 3: Remove discussion of fitting the final model from your legends

Page 13: concept of votes is unclear and confusing. Consider “models with multiple independent variables were excluded if RMSE did not decrease by more than 5% from univariate models,” or please justify why this is not an accurate statement

Page 17-18: section “R2 and RMSE as measure of model fit” is more appropriate for the introduction

It might be valuable to provide some detail on the RMSE compared to R2

Reviewer #2: The manuscript by Ichihara N et al develops a simple formula to calculate the Whole-liver mass. This parameter is important to establish the size of reduced liver grafts, safety in living liver donors and, increasingly important, the limits of extreme liver resections. The authors arrive at a simple formula after an analysis of 129 donor-recipient pairs in a Japanese transplant database. A simple univariate linear regression formula is achieved, based on a linear relationship with body weight (BW). The authors perform a validation after training and a comparison with other metrics that address the same problem. The formula obtained is simple and easy to apply.

The article is interesting and its methodological design is basically correct. However, the manuscript requires various clarifications:

1. It is not true that the whole-liver mass calculation problem is better done using simple metrics versus deep-learning tools. These artificial intelligence tools have shown superior performance in the field of prediction compared to statistical tools, although they require a greater inclusion of cases for training and subsequent validation. Although some of these classifiers are black-box models (for example, artificial neural networks) in which the predictor variables are not known, there are other classifiers, such as Random Forest, that allow the variables and their relative risk to be known, especially in small data sets like the one used in this article.

2. The sample size necessary for reliability is not specified. JLTS transplants since 2012 are analyzed to define the analysis cohort.

3. The results are only applicable to the database where the formula was created, since they do not include differential factors such as race, which can influence liver size.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

We appreciate the insightful comments from the two reviewers, which allowed us to refine the manuscript. After carefully reading the comments from the associate editor and the reviewers, we have revised the manuscript extensively. Our detailed point-by-point responses to the comments are in the appended letter. Each comment from the reviewers is presented, followed by our response.

【Point-by-point response to the comments from reviewers】

【Comment from Reviewer 1】

The authors have generated a formula for liver weight estimates that balanced simplicity with accuracy. More complex formulas (Yu, Chouker, Lin, and Chan) may be affected by overfitting and super fitting. It is critical to justify why a new formula is needed over already existing simple, linear formulas (DeLand and North, Yoshizumi, Urata, Vauthey, and Hashimoto).

(Our response)

We appreciate the reviewer’s point that we first need to clarify why we need to develop a new estimation formula for whole liver weight while there are multiple published formulas for this purpose.

The critical reasons were that (1) none of the previously published formulas had been cross-validated, either for model selection or for validation, so their external validity (within the same population) was unclear, and (2) there was no comprehensive report on the practical value of these formulas in populations different from the ones in which they were developed. Thus, we (1) developed a formula with internal and external cross-validation so that its external validity was maximized and evaluated, and (2) evaluated the fit of previously published formulas on our dataset to examine how they perform in a different population.

Although we did not explicitly mention it in the manuscript, the advancement of machine learning (ML) and complex statistics (CS) is an important background for this report. We expect that interest will grow in applying ML/CS tools in this and similar clinical scenarios. In light of such anticipated interest in combining “big data” (BD) with ML/CS methods, the authors believe research areas such as this deserve renewed attention, with a focus on both their commonality with and their uniqueness compared to mainstream BD/ML/CS research. The authors believe different application areas deserve mathematical/computational methods tailored to their unique real-world needs, rather than only borrowing from the largely commercially led “mainstream” BD research.

Although existing reports on such estimation formulas provide fair guidance for future work, we believe some methodological updates are required for this field to live up to the promise of modern science:

1. Measures should be taken to avoid overfitting, e.g., (internal) cross-validation for model selection.

2. The envisioned use case of the estimation formula should be clarified, e.g., manual use (or calculation with a simple electronic calculator), calculation with a feature-rich (“scientific”) calculator, a computerized calculator that responds instantly to a user’s inquiry, or computerized estimation in settings where computational load is not a concern.

3. (When the formula is being developed for manual use) measures should be taken to avoid selecting an excessively complex formula merely for a clinically negligible gain in accuracy, i.e., superfitting. A penalty on model complexity should be enforced as part of the model selection criteria, i.e., the cost function.

4. (External) cross-validation should be performed to estimate the external validity of the final fitted formula (in the same population), and its results should be presented.

5. Ideally, the performance of the formula in multiple, completely different populations should be assessed and presented. As this is difficult in most cases, we believe it is good practice to assess the performance of previously reported formulas on the dataset available to the authors and to present those results. Although this could form a separate manuscript, we believe it makes sense to present such results in the manuscript where a new formula is developed, because this is usually the best available way to gauge how the newly developed formula would perform in a completely different population.

6. The process taken to select the final model should be made clear. At least, the list of candidate models and how they compared against the one finally selected should be presented.

The authors believe these principles also apply to many other areas of applied science, including medicine.
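As a minimal sketch of principle 3 above, a complexity penalty can be implemented as a relative improvement threshold on CV RMSE, echoing Reviewer #1's suggested 5% criterion. The function name and the numeric values below are illustrative assumptions, not the study's actual code or results:

```python
def select_with_penalty(rmse_simple, rmse_complex, threshold=0.05):
    """Keep the simpler model unless the complex one reduces CV RMSE
    by more than the relative threshold (5% by default)."""
    if rmse_complex < rmse_simple * (1 - threshold):
        return "complex"
    return "simple"

# Illustrative CV RMSE values in grams (hypothetical).
print(select_with_penalty(120.0, 118.0))  # 1.7% gain -> "simple"
print(select_with_penalty(120.0, 110.0))  # 8.3% gain -> "complex"
```

Under this rule, a marginal accuracy gain never justifies the added complexity of extra predictors, which is the essence of guarding against superfitting.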

2. The homogeneity of the study population may decrease the generalizability. This also may have contributed to the decrease in “in-sample” vs “current cohort” correlation coefficients for previous studies as seen in Table 1. This should be addressed in the discussion section.

(Our response)

We appreciate the reviewer’s suggestion.

To clarify, before this study there was no report on the performance of existing estimation formulas in populations different from those in which they were developed, so nothing was directly known about how such formulas perform in different populations. However, the results here, summarized in Table 1, Fig 5, Fig 6, and S1 Fig, suggest that what the reviewer states is indeed the case. We added the following to the Limitations section.

Regarding the applicability of the formula developed here to different populations, miscalibration similar to that observed with previously reported univariable linear formulas in the current dataset, as summarized in Table 1, Fig 5, Fig 6, and S1 Fig, is anticipated. Thus, developing an estimation formula optimized for a specific population may still be justified in the future, unless a “universal” formula is developed and proven, which would require far more diverse samples. In the absence of such a “universal” formula, however, the authors believe the current study provides guidance on model selection for developing a formula in a specific population using a similar volume of data.

3. Consider using different shapes rather than or in addition to color to differentiate groups in your figures to aid readability for colorblind readers or when printed

(Our response)

We appreciate this comment and have modified all color-containing figures so they can be better comprehended by readers with red-green or total color blindness. Colors were selected to provide a consistent experience for readers with and without color blindness. (No figures now contain red or green.) Specifically, the following changes were made:

• Figure 2 and S2: Modified coloring with varied saturation, varied marker shape, and legends containing color samples.

• Figure 5: Modified coloring with varied saturation.

• Figure S1: Modified coloring with varied saturation, and legends containing color samples.

Although varied marker shapes were not used in Figure 5 for technical reasons, we believe it is now comprehensible even by readers with color blindness.

4. Page 4, last sentence of abstract: you use the terms “simple linear model” and “univariable model.” These are considered interchangeable by most lay literature. If both models are truly simple linear, it would be less confusing to the reader to use consistent terminology. If there are differences in the formulas, then please add the univariable model using BSA to table 3 for comparison.

(Our response)

We appreciate this comment and agree with the reviewer regarding this. We made the following change to improve clarity and flow. (The underlines are not present in the manuscript.)

Before change: A simple linear model using BW achieves a clinically optimal balance between simplicity and accuracy, while a univariable model using body surface area performed similarly.

After change: A univariable linear model using BW achieved a clinically optimal balance between simplicity and accuracy, while one using body surface area performed similarly.

We hope the reviewer finds this acceptable.

5. Table 1: Study by Yu et al, 2004 includes pediatric data

(Our response)

Thank you for pointing out our mistake. We carefully reread the article by Yu et al. and confirmed that the study included pediatric data. We changed Table 1 accordingly (please see page 7 of the revised manuscript).

6. Figure 1: In the text, please note that you are focused on deceased donors and justify the rationale for the exclusion of domino transplants, as these are still whole livers

(Our response)

Thank you for the comment. We added an explanation of the analysis cohort in the text to clarify this issue (please see page 11). The added sentence is shown below.

“We used only data of whole liver grafts from deceased donors for the current study.”

7. Figure 3: Remove discussion of fitting the final model from your legends

(Our response)

We appreciate this suggestion and agree that the legend for S2 Fig needed streamlining. We removed the paragraph describing how the final model is fitted.

8. Page 13: concept of votes is unclear and confusing. Consider “models with multiple independent variables were excluded if RMSE did not decrease by more than 5% from univariate models,” or please justify why this is not an accurate statement

(Our response)

We appreciate this comment, which points to insufficient description of “voting” in this manuscript. We realized that the relationship between the “inner” CV, “voting,” and “outer” CV was not adequately described.

To clarify: whereas a typical nested CV involves two layers of iterations, one nested within the other, the current algorithm involved three layers. The innermost iteration was the “inner” CV, the middle one was the cycle for “voting,” and the outermost iteration was the “outer” CV.

Thus, to cast a single “vote,” one cycle of “inner” CV, encompassing 10 x 10 = 100 iterations of fitting a model and measuring its CV RMSE, was conducted. The “vote” was decided by comparing the mean CV RMSE of each candidate model across this cycle.

In total, 100 such “votes” were collected, and the candidate model with the largest number of “votes” was selected as the final model.

To assess the external validity of the final model, this process was repeated 10 x 10 = 100 times through the “outer” CV cycle.

To describe this relationship clearly yet concisely, we added the following to the “Selecting a model through ‘inner’ CV” subsection of the Materials and methods section. (The underlined part, not present in the manuscript itself, represents the addition for clarification.)

A total of 100 “votes” were collected, each of which represented the result of an “inner” CV with a different subset. The candidate model with the largest number of “votes” was selected as the final model. The final model was then fitted to the entire dataset to obtain the intercept and coefficient of the formula.
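For interested readers, the inner-CV-plus-voting procedure described above can be sketched in Python. This is a minimal illustration with synthetic data, a simplified (single 10-fold) inner CV per vote, and 20 rather than 100 votes; all variable names and values are hypothetical, not the study's actual code:

```python
import random

random.seed(0)

# Hypothetical donor records: (body_weight_kg, height_cm, liver_weight_g).
# Synthetic data for illustration only: liver weight tracks body weight,
# and height is a noisier correlate of body weight.
body_weights = [random.uniform(45, 90) for _ in range(60)]
donors = [(bw,
           150 + bw / 2 + random.gauss(0, 8),
           20 * bw + random.gauss(0, 60))
          for bw in body_weights]

def fit_linear(pairs):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    b = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x, _ in pairs))
    return my - b * mx, b

def cv_rmse(pairs, k=10):
    """Mean RMSE across k folds: one (simplified) 'inner' CV cycle."""
    pairs = pairs[:]
    random.shuffle(pairs)
    folds = [pairs[i::k] for i in range(k)]
    fold_rmses = []
    for i in range(k):
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        a, b = fit_linear(train)
        sq_errors = [(y - (a + b * x)) ** 2 for x, y in folds[i]]
        fold_rmses.append((sum(sq_errors) / len(sq_errors)) ** 0.5)
    return sum(fold_rmses) / len(fold_rmses)

# Candidate univariable models, identified by the predictor column they use.
candidates = {"BW": 0, "height": 1}

# Middle layer: each repetition runs one inner CV per candidate on a fresh
# random partition and casts a "vote" for the lowest mean CV RMSE.
votes = {name: 0 for name in candidates}
for _ in range(20):  # the manuscript collects 100 votes
    scores = {name: cv_rmse([(row[col], row[2]) for row in donors])
              for name, col in candidates.items()}
    votes[min(scores, key=scores.get)] += 1

final = max(votes, key=votes.get)
print(votes, "-> final model uses", final)
```

The outer CV layer, omitted here for brevity, would wrap this entire voting procedure in a further 10 x 10 cycle, rerunning it on held-out subsets to estimate the external validity of the selected model.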

9. Page 17-18: section “R2 and RMSE as measure of model fit” is more appropriate for the introduction. It might be valuable to provide some detail on the RMSE compared to R2

(Our response)

We thank the reviewer for these suggestions. In the previous version, the use of RMSE first appeared in the Methods section without any preceding explanation of its nature or the purpose of its use, which were described only in the Discussion section. The definition of RMSE was also not given anywhere.

We moved this section from the Discussion to the Introduction, renamed it “Model accuracy measure for avoiding superfitting: RMSE,” added a description of the definition of RMSE, and adjusted the wording of the subsequent part of the Introduction to incorporate the difference between R2 and RMSE.

We hope this makes the manuscript convey the logical flow more effectively.
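To illustrate the difference between the two measures discussed above: RMSE reports the typical error in the outcome's own unit, while R2 reports the proportion of variance explained. The following is a minimal sketch with hypothetical liver weights, not values from the manuscript:

```python
import math

def rmse(actual, predicted):
    """Root mean squared error: typical error magnitude, in the
    same unit as the outcome (grams here)."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def r_squared(actual, predicted):
    """Coefficient of determination: proportion of outcome variance
    explained by the predictions (unitless)."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# Hypothetical whole-liver weights (g) and a model's estimates.
actual = [1200.0, 1350.0, 1100.0, 1500.0]
predicted = [1250.0, 1300.0, 1150.0, 1450.0]
print(rmse(actual, predicted))                 # 50.0
print(round(r_squared(actual, predicted), 3))  # 0.891
```

Because RMSE is expressed in grams, a clinician can judge directly whether the typical estimation error is clinically acceptable, which is harder to do from a unitless R2 alone.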

【Response to the comments from Reviewer 2】

1. Comments to the Author

1. It is not true that the whole-liver mass calculation problem is better done using simple metrics versus deep-learning tools. These artificial intelligence tools have shown superior performance in the field of prediction compared to statistical tools, although they require a greater inclusion of cases for training and subsequent validation. Although some of these classifiers are black-box models (for example, artificial neural networks) in which the predictor variables are not known, there are other classifiers, such as Random Forest, that allow the variables and their relative risk to be known, especially in small data sets like the one used in this article.

(Our response)

We appreciate this thoughtful comment, which points to interesting recent advancement in Machine Learning (ML) research.

The primary author of this manuscript has first-hand experience using ML models in clinical contexts, as listed below:

1. Inohara T, Ichihara N, et al. The effect of body weight in infants undergoing ventricular septal defect closure: A report from the Nationwide Japanese Congenital Surgical Database. J Thorac Cardiovasc Surg. 2019;157(3):1132-41.e7.

2. Nishioka N, Ichihara N, et al. Body mass index as a tool for optimizing surgical care in coronary artery bypass grafting through understanding risks of specific complications. J Thorac Cardiovasc Surg. 2019.

3. Matsuoka T, Ichihara N, et al. Antithrombotic drugs have a minimal effect on intraoperative blood loss during emergency surgery for generalized peritonitis: a nationwide retrospective cohort study in Japan. World J Emerg Surg. 2021;16(1):27.

4. Ikawa F, Ichihara N, et al. Visualisation of the non-linear correlation between age and poor outcome in patients with aneurysmal subarachnoid haemorrhage. J Neurol Neurosurg Psychiatry. 2021;92:1173-80.

We agree that some ML models (we assume “deep-learning tools” here can be interpreted as ML models), e.g., Random Forest and XGBoost, may allow more accurate estimation in this context if a sufficient volume of observations is available. We also agree that their “black-box” nature can be practically overcome with interpretability tools, e.g., variable importance measures, SHAP, partial dependence plots, and LIME.

As described in the first paragraph of the Introduction, copied below, the small volume of available data precluded the use of ML models in this study. We also assumed that simple linear formulas have their own use case, distinct from ML models, in this context.

Importance of simplicity in developing formulas for manual use

Despite recent advances in machine learning, there are still some clinical areas in which complex algorithms fail to provide clinically meaningful gain of accuracy compared to relatively simple formulas because of the limited availability of data and high variability of the subject, leaving traditional simple estimation/prediction formulas yet to be replaced. When developing a formula in such areas, in addition to predictive accuracy, simplicity and ease of use are major concerns because they are often used manually, and the complexity of the model might limit its use or lead to errors.

Thus, we focused on applying methodological features established in modern statistics/ML other than the ML models themselves, e.g., cross-validation, to this area. We hope the reviewer finds this agreeable.

2. The sample size necessary for reliability is not specified. JLTS transplants since 2012 are analyzed to define the analysis cohort.

(Our response)

We appreciate this comment on the nature of the dataset we used for this study. Admittedly, the volume of data was determined by availability, not by the requirements of the modeling approach or the desired precision. While we did not explicitly describe this because the same is true of most observational studies based on registry data, we agree that preparing the “necessary” volume of data would allow greater flexibility in selecting modeling approaches, e.g., application of ML models, with higher precision.

3. The results are only applicable to the database where the formula was created, since they do not include differential factors such as race, which can influence liver size.

(Our response)

We appreciate the reviewer’s comment. Please see our response to Reviewer #1’s comment #2.

Attachments
Attachment
Submitted filename: response to reviewers_final.docx
Decision Letter - Sathishkumar V E, Editor

Achieving clinically optimal balance between accuracy and simplicity of a formula for manual use: development of a simple formula for estimating liver graft weight with donor anthropometrics

PONE-D-21-33293R1

Dear Dr. Gotoh,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Sathishkumar V E

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Well written and the strategy to achieve the balance between accuracy and simplicity is innovative, well explained and well executed

Reviewer #2: The work submitted by Ichihara et al. is a manuscript that has resolved all the comments made by this reviewer. The limitations pointed out by the authors in their answers are convincing and typical of an analysis of these characteristics.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Javier Briceño

**********


Formally Accepted
Acceptance Letter - Sathishkumar V E, Editor

PONE-D-21-33293R1

Achieving clinically optimal balance between accuracy and simplicity of a formula for manual use: development of a simple formula for estimating liver graft weight with donor anthropometrics

Dear Dr. Gotoh:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Sathishkumar V E

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.