Peer Review History
Original Submission: October 24, 2024
Dear Dr. Bui,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Jul 05 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Bojana Bukurov, M.D., Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. Please note that funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript.

3. Thank you for stating the following in the Competing Interests section: "I have read the journal's policy and the authors of this manuscript have the following competing interests: Prof Grant Russell, A/Prof Jan Radford, and Prof Danielle Mazza have received honoraria from the RACGP for expert committee roles. The other authors have no conflicts to declare." Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials." (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

4. We note that you have indicated that there are restrictions to data sharing for this study. For studies involving human research participant data or other sensitive data, we encourage authors to share de-identified or anonymized data. However, when data cannot be publicly shared for ethical reasons, we allow authors to make their data sets available upon request. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Before we proceed with your manuscript, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information, data are owned by a third-party organization, etc.)
and who has imposed them (e.g., a Research Ethics Committee or Institutional Review Board, etc.). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories. You also have the option of uploading the data as Supporting Information files, but we would recommend depositing data directly to a data repository if possible. Please update your Data Availability statement in the submission form accordingly.

5. In this instance it seems there may be acceptable restrictions in place that prevent the public sharing of your minimal data. However, in line with our goal of ensuring long-term data availability to all interested researchers, PLOS' Data Policy states that authors cannot be the sole named individuals responsible for ensuring data access (http://journals.plos.org/plosone/s/data-availability#loc-acceptable-data-sharing-methods). Directing data requests to a non-author institutional point of contact, such as a data access or ethics committee, helps guarantee long-term stability and availability of data. Providing interested researchers with a durable point of contact ensures data will be accessible even if an author changes email addresses, institutions, or becomes unavailable to answer requests. Before we proceed with your manuscript, please also provide non-author contact information (phone/email/hyperlink) for a data access committee, ethics committee, or other institutional body to which data requests may be sent. If no institutional body is available to respond to requests for your minimal data, please consider whether there are any institutional representatives who did not collaborate in the study, and are not listed as authors on the manuscript, who would be able to hold the data and respond to external requests for data access. If so, please provide their contact information (i.e., email address). Please also provide details on how you will ensure persistent or long-term data storage and availability.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Partly
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #1: No
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: Yes

**********

Reviewer #1: This paper describes the psychometric properties of an instrument that measures patients' experience with primary care. While the paper was interesting to read, I do have some questions and suggestions for improvement.

In the introduction, several psychometric properties from other studies are reported after the aim of the study.
The last sentence gives a rationale for the study. I would recommend moving these reports from other studies to the method section and allowing the introduction to end with the aim. In the introduction I would rather see more information on the PCAT in general: when, how, and why it was developed, for instance. It is described as one of the most widely used tools, which sounds great, but again, as a reader I need more information to understand why and how this tool is useful. From this paper it is unclear where I can find information on the development and validation of the original PCAT scale and its short form. This should be clearly stated with a reference when the scale is first introduced, and again in the method section, making it possible for reviewers and later also readers to consult the original study, in order to understand the theory behind the instrument, how items were selected into the final version, whether it has been analyzed for convergent and discriminant validity, and so on.

Data analysis: when describing the test-retest reliability, the retesting interval is much wider than the usually recommended two weeks. If this method is intended to indicate temporal stability, this needs to be more clearly stated. Why should assessments one year apart show such stability? Either remove these analyses or give a justification, if possible. In the discussion it gets even more confusing when the authors try to explain why temporal stability was low. The idea of test-retest is that the latent variable should be stable over the time frame for reassessment, to enable conclusions on variability or stability of results.

I think the paper would benefit from adding an analysis of item-scale correlations, to discern both convergent and discriminant validity. Perhaps this would make the inconclusive results from the factor analyses clearer and add important information on the subscales.

In the results section, page 24, did this CFA show acceptable model fit indices? How do the results from these CFAs compare to other studies? One subscale has an inadequate number of items for CFA; how have other studies on this scale described this limitation? How might this have impacted the results? Should the analyses be adjusted in some way? Moreover, when reading the reference to the Vietnamese version, the letters describing each subscale differ; how come? For instance, "culturally competent" has the letter "J" in this paper but "K" in the Vietnamese version, making comparisons confusing.

I found the conclusion of this paper confusing. As the title suggests, this paper describes the psychometric properties of the PCAT-S in a specified sample. The conclusion is that findings were mixed and more research is needed, yet the scale could be used in its current version in Australia. With reliability levels below recommended values, how would that impact interpretation of results? To the best of the authors' knowledge from reading validation studies from other countries, combined with the results of the current study, what are the next steps? From these results, what adaptations do you suggest, and how should these be addressed in future research?

Reviewer #2:

Summary of research

The manuscript reports a psychometric analysis of the Primary Care Assessment Tool, Short Form (PCAT-S), an existing instrument measuring patients' experience with their primary care provider (PCP), in the general practice setting in Australia.
The purpose of the study was to examine the factor structure and reliability of the PCAT-S. The sample of 715 adults comprised a subgroup of participants in a larger RCT and was drawn from 34 practices across Australia. The sample included adults who were aged 64 or younger with chronic illnesses or who were aged 65 or above. Analyses consisted of confirmatory factor analyses (CFA) to determine how well the data fit the factor structure proposed by the instrument's authors, and subsequent exploratory factor analyses (EFA). Analyses also included internal consistency reliability (Cronbach's alpha) and test-retest reliability (intraclass correlation coefficient; ICC). Two different models for imputation were used, in addition to a model with no imputation of missing values. Helpfully, the authors include item wording and response options for the full instrument and clearly label the items from each subscale. The authors conducted all analyses correctly and transparently. Relevant decision steps in conducting the analyses were reported clearly, as were the results. Results are broken down by subscale in the discussion section, and possible interpretations and future steps are provided for each.

Recommendation

The manuscript provides clear relevance to practice and clearly reports the data collection, data analysis, results, and interpretation. All analyses conducted appear reasonable and correct. The authors provide justification for each analytical decision. The work represents an important contribution to instrument validation which, as the authors point out, is essential for conducting research on health systems and health policies in cross-national contexts. The discussion is helpful for putting the results in context. The authors' suggested interpretations and directions for future research are reasonable. I recommend publication, but I hope the authors will consider my comments, as I believe the manuscript can be strengthened if these are addressed.

Minor concerns

I appreciate the authors' providing a rationale for using the neutral-value imputation scheme, bolstered by citing multiple prior publications that used this method (lines 188-191). Presenting three sets of results adds complexity that may not be of interest to all readers. Perhaps some information could be transferred to supplemental tables.

The rationale for conducting EFA could be strengthened. The authors have already established that the factor structure varies across different national health care contexts, based on the citations provided and the discussion on lines 94-104. If we already know the factor structure is not invariant across nations, do we need further evidence of this? This rationale could be clarified.

Would it be possible to report what proportion of patients met inclusion criteria by virtue of their age alone (65 years or older) versus being in the 18-64 age group and having a chronic illness? This might be helpful when interpreting the mean (SD) age in Table 1; 66.9 years seems somewhat low and could indicate that a substantial portion of the sample was younger than 65.

It would be useful to examine the data more closely to identify causes of poor model fit (especially in the CFA). The sample size is inadequate for the model with no imputation but adequate for both imputation models, so sample size isn't a likely culprit. However, multicollinearity statistics would be helpful for the reader (and could be included in the supplemental materials). Reducing the number of factors for the EFA could also improve model fit.
The list of exceptions seems rather lengthy when stating the factor structure of the EFA (e.g., lines 263-267). Beginning by pointing out that five items (of 28) did not load as expected would frame things differently. The authors' existing language is correct, but it caused me to mark the margin with, "that's an awful lot of exceptions!"

Including only participants in the control arm in the test-retest analysis is entirely appropriate. Is it possible to state the mean (median, range) length of time elapsed between the two measurement points? This may have been reported elsewhere, but it's key to interpreting the test-retest results.

The authors state that reducing the number of factors is beyond the scope of this study; fair enough. However, the rationale was that changing the number of factors would be more appropriate for "face and content validity assessments" (line 468). Contrast this with the Discussion section, which seems to discuss the face validity of numerous individual items. (The term "face validity" is not used, but that seems to be what's happening.) Some of this discussion seems to go beyond the data. Rewording or reframing could resolve this apparent discrepancy.

The authors reference "poor temporal stability (α = 0.32)." This value is the ICC reported in Table 7. Would it be clearer to use rho in place of alpha here?

Is it possible to rule out having changed one's GP as an explanation for the low test-retest reliability of the "First contact – Utilization" subscale? The authors posit changing providers as a potential explanation (lines 357-361). It's possible the larger RCT has data on whether a participant switched practices. Did participants in the control arm have a shorter duration of relationship with the GP (or practice) than those in the treatment arm at baseline? Given that more than three-quarters of participants had been with their GP (or practice) for over 5 years, changing providers doesn't seem an especially compelling explanation for the low ICC on this subscale.

**********

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Susan L. Schoppelrey

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
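The reviews above turn repeatedly on two reliability indices: Cronbach's alpha for internal consistency and the intraclass correlation coefficient (ICC) for test-retest (temporal) stability. Purely as a reading aid, the Python sketch below shows one common way these indices are computed. It is a generic illustration, not the authors' analysis code: the data and item names are hypothetical, and the ICC form shown (Shrout-Fleiss ICC(2,1), two-way random effects, absolute agreement, single measure) is an assumption for the example; the study may well have used a different variant.

```python
# Illustrative sketch only -- not the authors' analysis code. Shows how
# Cronbach's alpha and a Shrout-Fleiss ICC(2,1) are commonly computed.
# All data and column names below are hypothetical.
import numpy as np
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a (respondents x items) frame of numeric item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure,
    for an (n subjects x k occasions) matrix, e.g. baseline vs. follow-up scores."""
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subjects
    ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-occasions
    resid = scores - scores.mean(axis=1, keepdims=True) - scores.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))                    # residual
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 4-point Likert responses to a five-item subscale.
    # (Responses here are random, so alpha will be near zero; real items correlate.)
    items = pd.DataFrame(rng.integers(1, 5, size=(200, 5)),
                         columns=[f"item_{i}" for i in range(1, 6)])
    print("Cronbach's alpha:", round(cronbach_alpha(items), 3))

    # Hypothetical subscale scores at two time points for the same respondents.
    t1 = items.mean(axis=1).to_numpy()
    t2 = t1 + rng.normal(0, 0.4, size=t1.shape)
    print("ICC(2,1):", round(icc_2_1(np.column_stack([t1, t2])), 3))
```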
Revision 1
Dear Dr. Bui,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Nov 23 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Bojana Bukurov, M.D., Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

1. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #1: (No Response)
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #1: No
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: Yes

**********

Reviewer #1: The authors have really made an effort in addressing and responding to all comments. In the introduction, it is clear which references to look at for scale development and validation studies. Nice summary of results: if I understand your statement correctly, the scale could be used in Australian general practice, although "results should be interpreted with nuance, and consideration of item relevance and interpretation in the Australian context". It is clearly stated what further refinements and analyses are needed. However, I do have some additional questions.

I do agree with the authors that temporal stability is important for longitudinal research. However, I am still not convinced that these analyses address test-retest reliability. The idea with test-retest analysis is that the latent variable is assumed to be stable between measurement occasions; thus, differences found between measurements indicate errors in the scale. High test-retest reliability indicates that these errors are negligible. In the present study, why is it assumed that the latent variable should be stable?
In longitudinal research, sensitivity to change is perhaps even more important than temporal stability. Could these results imply that the instrument is sensitive enough to detect changes?

When looking at Table 2, it seems there are high proportions of responses at the highest possible score. The chosen ceiling-effect threshold of above 80% is quite high; how did the authors arrive at this level for indicating a ceiling effect? The authors have made it clear how this could have impacted the results, and the selected estimator, DWLS, is appropriate for cases where ceiling effects are high; however, I am unfamiliar with the above-80% rule.

Please review the use of periods and commas throughout the text, and typos, for example:

* Page 3, line 54: "… and health care providers. and is …"
* Page 6, line 139: "… effects were defined as items wwhere > 80% of ..."
* Page 20, line 316: "… and to provide a comparison across the three imputation samples.Results …"

Reviewer #2: The comments from both reviewers were addressed within the revised manuscript. Three concerns remain, although I recommend publication without requiring a third round of review. (Note: the line numbers refer to the markup version.)

1. The authors should consider citing a source on statistical methods to support the rationale for "conducting an EFA following CFA" (lines 259-260).

2. I echo Reviewer 1's concerns regarding test-retest reliability. I appreciate that few studies have evaluated this aspect of the instrument (lines 301-303). The authors note a single test-retest study including only 15 respondents, which could indicate the need for further validation (test-retest reliability) of the scale. Still, even that small analysis was conducted over a two-week timeframe rather than a year. Given the low test-retest reliability for the First Contact-Utilization subscale, how can this establish temporal stability in a way that's useful for future longitudinal research? No one study can "do it all." If the design of the larger study was such that test-retest reliability could not be assessed, I suggest dropping the test-retest analyses from this manuscript.

3. A careful proofreading (and perhaps professional copyediting?) is strongly recommended. Some redundant (or nearly redundant) language was introduced by the revisions. One example of this near-repetition occurs in lines 655-656: "...reinforcing the PACT's value as a tool for this purpose. This highlights the value of the PACT as a potential tool for this purpose."

**********

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Maria Fogelkvist
Reviewer #2: Yes: Susan L. Schoppelrey

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free.
Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org.
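Reviewer #1 above asks about the rule that flags a ceiling effect when more than 80% of responses fall in the highest category. As a minimal illustration of what such a check involves (this is not taken from the manuscript; the items, 4-point response scale, and data below are hypothetical, and the 0.80 threshold is simply the cut-off quoted in the review), a Python sketch might look like this:

```python
# Illustrative sketch only: flag items where the share of responses in the
# highest response category exceeds a threshold (>80% in the rule discussed above).
# Item names, the 4-point response scale, and the data are hypothetical.
import numpy as np
import pandas as pd


def ceiling_flags(items: pd.DataFrame, top_category: int, threshold: float = 0.80) -> pd.Series:
    """Per item: True if the proportion of responses at the top category exceeds the threshold."""
    top_share = (items == top_category).mean(axis=0)
    return top_share > threshold


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    items = pd.DataFrame({
        # item_1 is dominated by the top option; item_2 is spread across categories.
        "item_1": rng.choice([3, 4], size=300, p=[0.1, 0.9]),
        "item_2": rng.choice([1, 2, 3, 4], size=300),
    })
    print((items == 4).mean(axis=0).round(2))      # proportion at the highest category per item
    print(ceiling_flags(items, top_category=4))    # item_1 True, item_2 False (typically)
```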
Revision 2
Psychometric properties of the Adult Primary Care Assessment Tool Short form (PCAT-S) among high-risk patients in Australian general practice

PONE-D-24-43966R2

Dear Dr. Chau Minh Bui,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the 'Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Bojana Bukurov, M.D., Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #1: No
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: Yes

**********

Reviewer #1: Thank you for addressing all comments; I have no further questions. I recommend publication of the manuscript.

Reviewer #2: (No Response)

**********

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Maria Fogelkvist
Reviewer #2: Yes: Susan L. Schoppelrey

**********
Formally Accepted
PONE-D-24-43966R2

Dear Dr. Bui,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Ass. Prof. Bojana Bukurov
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.