Peer Review History
Original Submission: July 20, 2025
Dear Dr. Mizuhara,

When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Lutz Bornmann
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2.
PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager.

3. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

4. Thank you for stating the following in your Competing Interests section: "No". Please complete your Competing Interests on the online submission form to state any Competing Interests. If you have no competing interests, please state "The authors have declared that no competing interests exist.", as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now. This information should be included in your cover letter; we will change the online submission form on your behalf.

5. Thank you for uploading your study's underlying data set. Unfortunately, the repository you have noted in your Data Availability statement does not qualify as an acceptable data repository according to PLOS's standards. At this time, please upload the minimal data set necessary to replicate your study's findings to a stable, public repository (such as figshare or Dryad) and provide us with the relevant URLs, DOIs, or accession numbers that may be used to access these data.
For a list of recommended repositories and additional information on PLOS standards for data deposition, please see https://journals.plos.org/plosone/s/recommended-repositories.

6. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Partly
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A
Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: Yes

**********

Reviewer #1: PONE-D-25-39531

This study investigates the connection between Japanese deep-tech startup success and the research performance of the university researchers involved. The paper gives a very good outline of the problem and the justification for the specific approach that was chosen. Specifically, the growth of Japanese university-originated startups is predicted with machine learning models trained on data on scientific publication metrics and grant acquisition of the researchers participating in the startups. The authors focused on tree-based ML over deep learning methods to facilitate the interpretability of results. The results indicate that specific research grant funding and citation indicators are predictive of startup success.

The article currently has these shortcomings, which the authors are requested to respond to.

A limitation that should be addressed more clearly is that startup success is operationalized across four cases by crossing different thresholds of received venture capital. This need not be closely related to the commercial viability of the company. In particular, in three of the four cases, the threshold is 1 JPY. This seems a strangely low value for speaking of company success.

Another methodological choice is remarkable: "Next, a normalized sum was calculated for each researcher. For Positive startups, the researcher with the highest normalized sum was selected, while for Negative startups, the researcher with the lowest sum was chosen."
By this data cleaning decision rule, the data that enters the ML training may already be biased and not representative, because in a realistic scenario you could not tell beforehand which will be a Positive and which a Negative startup, so you could not make this decision. I highly recommend that either none of the researchers are removed as an input information source, or that the rule for whom to include and exclude is the same across all startups.

Reviewer #2: Review of the paper "Identifying Researcher Characteristics Driving Growth in Japanese University-Originated Deep-Tech Startups: A Machine Learning Approach", submitted to PLOS ONE.

The paper deals with the characteristics of researchers who are among the founders of spin-offs from Japanese universities, especially in deep-tech areas, and their contribution to growth.

The paper is well structured and well written. The methods are mostly clearly described and are suitable to address the research question. However, some of the descriptions are a little short and scarce for readers who are not familiar with the data or the Japanese situation as such. The data sources and data selection are not sufficiently described. There might be selection effects. At least, the authors do not mention any countermeasures (against selection effects) and also do not sufficiently justify and explain their decisions on the selection. From 967 companies, 357 are linked to their DB and only 194 are finally analyzed. This is a huge filter that was applied here. Even within these companies, further selections of the researchers to be analyzed were made. What does this imply?

Furthermore, no counterfactual or control-group approach was implemented, which leaves the reader unsure about the solidity of the findings. In addition, the work is only scarcely embedded in the conceptual literature. For example, work by Lerner (2002) or Mazzucato (2013) on the role of public research for R&D and innovation, or by Lockett et al.
(2005), Meyer (2003), or Visintin and Pittino (2014) on academic spin-offs is not cited. Hence, I suggest major revisions before accepting the paper for publication in PLOS ONE.

In more detail, I have the following comments and questions:

Line 50: “Therefore, in this study we utilize Japan-specific data to analyze the relationship between deep-tech startup growth and the characteristics of involved researchers,…”. In the Japanese system, is a difference to be expected between university spin-offs and other deep-tech spin-offs?

Line 131: “The “University-Originated Venture Database” contained 967 listings, of which 357 startups linked to “STARTUP DB” by corporate number or other identifier had complete researcher data.” Did you conduct an analysis of the missing cases? Are these systematically biased in any of the relevant dimensions that can still be controlled without the STARTUP DB link?

Line 134: “… startups founded from 2015 onward …“. Why another reduction of the number of startups, rather than using the pre-2015 startups as a control group?

Line 137: “Although some university-originated startups have multiple involved researchers, each researcher was linked to only one startup.” This sentence is unclear. Were all researchers assigned, so that when more than one was involved all of them were included in the analysis?

Line 141: “For Positive startups, the researcher with the highest normalized sum was selected, while for Negative startups, the researcher with the lowest sum was chosen. This approach was taken due to the reliance on self-reported data in the “University-Originated Venture Database,” which makes it challenging to objectively identify the most relevant researcher when multiple individuals are involved.” Doesn’t this lead to a tautology?

Line 162: “To mitigate class imbalance, we downsampled the larger group to match the smaller group’s size via random sampling.” Which groups exactly do you refer to?
Line 249: “In summary, key characteristics of researchers contributing to the growth of university-originated startups include securing a cumulative KAKENHI grant amount of at least 10 million yen as a principal investigator, and/or being the author of papers with JIF ≥20 in the biomedical field or JIF ≥10 in the non-biomedical field.” However, what should not be implied by the model analysis is that it also holds the other way around: a high cumulative budget and high JIFs do not result in startups.

Line 273: “... a representative sample …” How can this be representative if it is very selective?

Line 281: “This study aimed to develop an evaluation model…”. I do not think this is an evaluation; it is rather an identification model. Causality is not analyzed at all and is unclear. There could be selection effects in the sense that only highly visible researchers are (self-)selected to contribute to spin-offs. Whether they perform well in the founding activity is not analyzed, so it is not a performance evaluation.

Line 292: “The evaluation model and insights obtained through this study are expected to contribute to the development of a more robust decision-making foundation for deep-tech investment.” I think the conclusions are a little short and too general. At least, putting them in perspective with current practices or program goals would have been appreciated.

**********

Do you want your identity to be public for this peer review? If you choose “no”, your identity will remain anonymous but your review may still be made public. If published, this will include your full peer review and any attached files.
For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
Dear Dr. Mizuhara,

When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Lutz Bornmann
Academic Editor
PLOS ONE

Journal Requirements:

If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Additional Editor Comments:

One reviewer still has several critical points that should be considered in another revision.
Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes
Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5.
Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: Yes

**********

Reviewer #1: (No Response)

Reviewer #2: Review of the revised paper "Identifying Researcher Characteristics Driving Growth in Japanese University-Originated Deep-Tech Startups: A Machine Learning Approach", submitted to PLOS ONE (Revision 1).

Thank you for giving me the opportunity to review the revised version of the paper. The authors have addressed the comments I made on the original submission. I think that the embedding in the conceptual literature is now sufficient. I am also satisfied with the extended version of the conclusion section, where the authors put the findings into perspective and took the specific Japanese context into account.

However, I am still concerned about the data treatment and the massive filtering of the data. The authors have addressed my comments, but not to my full satisfaction. In their answers to my comment on the original paper they state: “Therefore, our conclusions remain valid under the bias-free sampling framework.” I strongly disagree that what they present in their paper is a bias-free sampling framework. This statement reflects the perspective of the authors and summarizes my still existing concerns. The authors cannot know, according to their analytical framework, whether the observations they have are unbiased, as they do not control for any biases. In addition, this is not a sample but a selection of observations. Even if the selection is well grounded in the data, it is still a selection. As the selection criteria (e.g. data reliability) might be correlated with the outcome (dependent variable), it might be biased. Further treatment of the data on high- and low-performing researchers even increased my concerns.

I also think that the analytical framework could be enhanced. The authors should consider a logistic regression analysis at the end of section 4 of their paper.
This might clarify many open issues. The classification work can then be described as it is, but as the pre-analysis for the regression analysis. The regression would have startup growth (Negatives/Positives; 0/1) as the dependent variable, and the explanatory variables would be company characteristics (e.g. startup from universities A, B, C, …; sectors: biomedical/else) as well as characteristics of the researchers (excellence; experience/scientific age, …). This would then allow for the reporting of significance levels and maybe even causal effects.

In more detail, I have the following comments.

Line 66-69: “We focus on university-originated startups not because we assume systematic differences from other Japanese deep-tech startups, but because reliable, government-maintained datasets exist only for the university-originated subset, making it the sole nationally standardized source for analysis in Japan”. This statement is ok for here (the intro).

Line 94: “This database contains detailed information on startups”. If this is really the case, then this information can be used for a “non-response analysis” of the 610 companies that were excluded from the analysis. What are their core characteristics?

Line 115/116: “… we filtered startups by the main technology field of their core product or service as listed in the Venture Database.” Please be more precise here about how you filtered. I assume this is targeting the deep-tech companies?

Line 113, Figure 1: The number of companies you report in the text is 194, which does not correspond to any of the four cases. The maximum is 190, so what happened to the 4 missing cases? In addition, given that some of the cases have really low numbers (102 and 66), did you conduct any outlier analyses? Outliers might have a high impact on the overall outcome.
Line 153-155: “This reduction occurred because only those startups that explicitly disclosed their researchers in the original database and could be reliably matched to financial records were retained to ensure data consistency and to avoid missing values in explanatory variables.” While the reason for the selection might be reasonable and convincing, the question still is what the effect on the outcome of the study is. At least some information is available on these 610 companies, so why not analyze them?

Line 161/162: “Under this program, four major national universities—the University of Tokyo, Kyoto University, Osaka University, and Tohoku University—each established university-affiliated venture capital funds…” Does this mean that the 194 startups selected for the analysis originated only from these four universities? It seems not, as the top six universities are listed in Figure 6. However, why specifically mention these four universities here? In section 5 of the paper, it is mentioned that only the top 6 universities were analyzed. This was never mentioned in the text before section 5 (except in Figure 6). Readers are really left in the dark concerning the selection and the observations used, which I think is not appropriate.

Line 165/166: “Although some university-originated startups have multiple involved researchers, each researcher was linked to only one startup to establish a consistent one-to-one relationship between startups and researchers.” Please mention here how you assigned them. Furthermore, this sentence is still unclear. Is there more than one researcher per startup? This must be the case, as 252 researchers are found in 194 companies. The other way around, one researcher could have been involved in more than one startup. Did you assign only one researcher to each company, or did you assign only one company to each researcher? How did you aggregate the researcher data (citations, JIF, …)?
Did you calculate the simple average of multiple researchers involved in one company, or did you calculate a weighted average? In section 5 (see comments below) you mention that you assigned only one researcher to each of the startups by selecting the top performer for Positives and the low performer for Negatives.

Line 191-194: “To construct a consistent one-to-one dataset for modeling, one researcher was randomly selected per startup. This random sampling was repeated five times, and the average model performance across these iterations was used to evaluate the model’s stability. This approach ensured that model evaluation did not depend on any particular random draw of researchers from startups with multiple participants.” How many cases of multiple researchers did occur? If you draw a random sample of 236 researchers in 194 companies, the variation is small. Deriving model stability and a quality check from this is very ambitious.

Line 199: “We used 5-fold cross-validation…” The metrics used here are not validation measures but evaluation/test measures. Given the low number of cases, the split between learning and testing samples is already questionable, but calling these metrics cross-validation is not appropriate. In addition, these metrics are related: e.g. recall and precision constitute the F1-score (F1 = 2 · Precision · Recall / (Precision + Recall)).

Line 214-219: “As shown in Table 4, the model’s performance improved as the distinction between the compared groups became clearer. Accuracy, Precision, and F1 scores increased when the funding threshold was raised from Case 1 to Case 2, indicating that a larger gap in funding scale enhanced the model’s ability to discriminate between growing and non-growing startups.
Furthermore, when the data were stratified by technology field (Cases 3 and 4), both Accuracy and AUC values were higher than in the unstratified cases, suggesting that separating the dataset into biomedical and non-biomedical domains contributed to more precise classification performance.” It seems obvious that differences between branches are to be found. In economics, the sector of economic activity is a standard control variable. Given the low number of observations here, a binary differentiation seems the only option, but it is far from being a statistically adequate control for sector differences.

Line 245: “…their relative contribution…”. What does relative contribution mean? Do you mean correlation? If so, then name it that way.

Line 267: “… Funding Amount, Maximum Funding Amount, Average Funding Amount,…”. As these are just variations of the same variable, their correlations cannot be interpreted as an indication of quality.

Line 269-271: “… JCI-related variables, and h-index were more strongly correlated with each other than with quantity-based variables such as the number of papers or number of co-authors.” Likewise, the correlations between citation-based indicators, citation rate, or JIF cannot be interpreted as a sign of the quality of the analysis. They are indicators of the same conceptual category (namely research excellence) and hence multicollinear.

Line 281/282: “… Positive data, and the one with smaller values for Negative data, resulting in a one-to-one correspondence between each startup and researcher.” This comment is related to my comment above (Line 165). The other reviewer had already raised this issue, but I want to re-emphasize it. I think this is a questionable approach, as the good performers were taken for the well-performing startups and the bad performers for the badly performing startups.
The selection criteria should be the same for both, as this might otherwise self-reinforce the intended outcome of the study, namely that bad performers perform badly and good performers well.

Line 313: “These two variables played a central role in explaining startup growth within the model framework.” The term “explaining” implies a causal relationship, while the interpretation should be in terms of correlations.

Line 344/345: “In Cases 1 and 2, where no field distinction was made, publication-quality variables exhibited less explanatory power.” This might call for a field-specific modelling of the citation-based indicators (e.g. field-normalized citation rate).

Line 367: “… performance of top six universities…” This is the first time I became aware of the fact that the analyses are restricted to the top 6 universities. They were listed in Figure 6, but this was never mentioned explicitly in the text. No justification is given for why only these six universities have been analyzed.
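The logistic regression the reviewer proposes, with startup growth (0 = Negative, 1 = Positive) as the dependent variable and researcher/company characteristics as explanatory variables, could be sketched as below. This is a minimal sketch on synthetic data: the covariates (log cumulative KAKENHI funding, maximum JIF, a biomedical-field dummy) and all numbers are hypothetical assumptions for illustration, not the authors' dataset or method.

```python
# Hypothetical sketch of the suggested logistic regression: a 0/1 growth
# outcome regressed on researcher/company characteristics, with Wald
# p-values so that significance levels can be reported.
import math
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical covariates; none of these values come from the study.
log_kakenhi = rng.normal(16.0, 1.0, n)        # log cumulative KAKENHI funding
max_jif = rng.gamma(2.0, 5.0, n)              # maximum JIF among papers
biomedical = rng.integers(0, 2, n).astype(float)  # biomedical-field dummy

# Synthetic 0/1 growth outcome drawn from a known logistic model.
lin = -14.0 + 0.8 * log_kakenhi + 0.05 * max_jif + 0.3 * biomedical
growth = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)

# Design matrix with intercept; fit by Newton-Raphson (IRLS).
X = np.column_stack([np.ones(n), log_kakenhi, max_jif, biomedical])
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))       # predicted probabilities
    W = p * (1.0 - p)                         # IRLS weights
    hessian = X.T @ (X * W[:, None])          # observed information matrix
    beta += np.linalg.solve(hessian, X.T @ (growth - p))

# Wald z-statistics and two-sided p-values from the inverse information matrix.
se = np.sqrt(np.diag(np.linalg.inv(hessian)))
z = beta / se
pvals = [math.erfc(abs(v) / math.sqrt(2.0)) for v in z]

for name, b, pv in zip(["intercept", "log_kakenhi", "max_jif", "biomedical"],
                       beta, pvals):
    print(f"{name:12s} coef={b:+.3f}  p={pv:.4f}")
```

With real data, the same table of coefficients and p-values would let the authors report significance levels alongside the classification results; a packaged implementation such as statsmodels' Logit would serve equally well.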
For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures You may also use PLOS’s free figure tool, NAAS, to help you prepare publication quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation. NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications. |
| Revision 2 |
|
Identifying researcher characteristics driving growth in Japanese university-originated deep-tech startups: A machine learning approach PONE-D-25-39531R2 Dear Dr. Mizuhara, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. 
Kind regards, Lutz Bornmann Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: Reviewer's Responses to Questions Comments to the Author Reviewer #2: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? Reviewer #2: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #2: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #2: Yes ********** 5. 
Is the manuscript presented in an intelligible fashion and written in standard English? Reviewer #2: Yes ********** Reviewer #2: I want to thank the authors for their conscientious consideration of the comments I made on the previous version of the paper. They have addressed all my issues and concerns to my satisfaction, and I suggest the paper be accepted for publication in PLOS ONE. I very much appreciate the efforts made by the authors and I hope they feel positive about the comments made and the issues raised. From my point of view the paper gained in clarity and substance. ********** Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #2: No ********** |
| Formally Accepted |
|
PONE-D-25-39531R2 PLOS ONE Dear Dr. Mizuhara, I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team. At this stage, our production department will prepare your paper for publication. This includes ensuring the following: * All references, tables, and figures are properly cited * All relevant supporting information is included in the manuscript submission * There are no issues that prevent the paper from being properly typeset You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps. Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing. If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Lutz Bornmann Academic Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.