Peer Review History
Original Submission: December 1, 2024
Dear Dr. Sethi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Apr 21 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Fatemeh Farshad
Guest Editor
PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that this data set consists of interview transcripts. Can you please confirm that all participants gave consent for the interview transcripts to be published?

If they DID provide consent for these transcripts to be published, please also confirm that the transcripts do not contain any potentially identifying information (or let us know if the participants consented to having their personal details published and made publicly available). We consider the following details to be identifying information:
- Names, nicknames, and initials
- Age more specific than round numbers
- GPS coordinates, physical addresses, IP addresses, email addresses
- Information in small sample sizes (e.g. 40 students from X class in X year at X university)
- Specific dates (e.g. visit dates, interview dates)
- ID numbers

Or, if the participants DID NOT provide consent for these transcripts to be published:
- Provide a de-identified version of the data or excerpts of interview responses
- Provide information regarding how these transcripts can be accessed by researchers who meet the criteria for access to confidential data, including:
a) the grounds for restriction
b) the name of the ethics committee, Institutional Review Board, or third-party organization that is imposing sharing restrictions on the data
c) a non-author, institutional point of contact that is able to field data access queries, in the interest of maintaining long-term data accessibility
d) any relevant data set names, URLs, DOIs, etc. that an independent researcher would need in order to request your minimal data set

For further information on sharing data that contains sensitive participant information, please see: https://journals.plos.org/plosone/s/data-availability#loc-human-research-participant-data-and-other-sensitive-data

If there are ethical, legal, or third-party restrictions upon your dataset, you must provide all of the following details (https://journals.plos.org/plosone/s/data-availability#loc-acceptable-data-access-restrictions):
a) A complete description of the dataset
b) The nature of the restrictions upon the data (ethical, legal, or owned by a third party) and the reasoning behind them
c) The full name of the body imposing the restrictions upon your dataset (ethics committee, institution, data access committee, etc.)
d) If the data are owned by a third party, confirmation of whether the authors received any special privileges in accessing the data that other researchers would not have
e) Direct, non-author contact information (preferably email) for the body imposing the restrictions upon the data, to which data access requests can be sent

3. Please include your tables as part of your main manuscript and remove the individual files. Please note that supplementary tables should be uploaded as separate "supporting information" files.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes
Reviewer #2: Yes

**********

2.
Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? (The PLOS Data policy)

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: Yes

**********

Reviewer #1: This manuscript presents a validation study of the Healthcare Education Micro Learning Environment Measure (HEMLEM 2.0) in Pakistani medical and dental education contexts. While the study makes a potentially valuable contribution as a cross-cultural validation effort, several important methodological concerns need to be addressed.

Firstly, I could not find any information about language handling. The manuscript does not specify whether the instrument was translated from English or administered in English. If translation was used, the standard procedures of forward translation, back translation, expert committee review, and cultural adaptation should be detailed. If the English version was used, justification and consideration of potential language barriers should be discussed.

Next, I noticed that the sampling approach is inadequately described. While the authors mention the use of a "convenience, non-probability sampling technique" through WhatsApp groups and student IDs, important details are missing, such as: response rate calculations; justification for the final sample size of 628 participants; and a description of the total accessible population and the selection process for participating institutions.

From my perspective, the statistical analysis of this manuscript reveals both strengths and areas for improvement. The authors made appropriate use of confirmatory factor analysis and demonstrated strong internal consistency through reliability coefficients, with an impressive omega coefficient of 0.927 and Cronbach's alpha of 0.895.
Factor loadings are clearly presented, providing good transparency into the structure of the instrument. However, the model specification lacks sufficient detail to allow replication, and the authors have not explored alternative factor structures that might better explain the data. In addition, the lack of measurement invariance tests limits our understanding of how the instrument performs across different subgroups (e.g. dental vs. medical, female vs. male).

For me, the limitations section is particularly concerning in its brevity and omission of important considerations. The use of convenience sampling introduces potential biases that should be acknowledged and discussed. The self-reported nature of the data and the online collection method using Google Forms present additional limitations that warrant discussion. The lack of test-retest reliability assessment leaves questions about the temporal stability of the instrument unanswered. Concerns about geographical generalisability and the lack of discipline-specific analyses should also be addressed.

Finally, please focus on the discussion section. It needs to be significantly expanded, particularly in terms of addressing limitations, practical implications, and future research directions.

In conclusion, I believe that this research demonstrates good methodological rigour and provides valuable insights for medical educators seeking to evaluate clinical microlearning environments, but in light of the concerns presented above, I suggest a major revision.

Reviewer #2: The article is well-written, the rationale behind the selection and use of the statistical analyses is well explained, and the results are discussed very well. The following are some points I would like to discuss with the authorship team.

- Please add the HEMLEM abbreviation in the abstract; that will make it clearer than having to find it on the third page.
- It appears that more descriptive analysis was conducted than is reported in Lines 175-186. Recommendation: Consider presenting these analyses either in table format or as an appendix to provide a clearer and more comprehensive view.

- Question: To what extent do you believe the online tool and Google Form can ensure equal opportunity for participation? Given concerns about internet access for students, how can we ensure there is no duplication (i.e., the same participant filling out the form multiple times) and that students outside the criteria do not complete the form? Recommendation: It would be beneficial to discuss these limitations in the article to address potential concerns regarding the validity of the data collected.

- Correction: The document labeled as "APPENDIX 5" is referred to as "APPENDIX 4" within the document. Please correct this to maintain consistency.

Thank you very much for providing a well-written paper. I wish all the authors all the best.

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Mohamed A. Abdelbaqy

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements.
To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
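For context on the reliability coefficients discussed by Reviewer #1 above (Cronbach's alpha of 0.895 and omega of 0.927), the following is a minimal sketch of how Cronbach's alpha is computed from item-level scores. The data here are hypothetical Likert responses for illustration only, not values from the manuscript under review.

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha from per-item score lists:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # one total per respondent
    item_variance = sum(statistics.variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_variance / statistics.variance(totals))

# Hypothetical 5-point Likert responses: three items, four respondents.
items = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
alpha = cronbach_alpha(items)  # ~0.82 for this toy data
```

Alpha rises with the share of total-score variance that respondents' summed scores carry relative to the individual items; values above roughly 0.8, as reported for HEMLEM 2.0, are conventionally read as good internal consistency.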
Revision 1
Dear Dr. Sethi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 19 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Fatemeh Farshad
Guest Editor
PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #3: (No Response)
Reviewer #4: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #3: Yes
Reviewer #4: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: I Don't Know
Reviewer #4: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? (The PLOS Data policy)

Reviewer #3: Yes
Reviewer #4: Yes

**********

5.
Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #3: Yes
Reviewer #4: Yes

**********

Reviewer #3: I recommend the authors explain the methodology more clearly, especially the phases of the study. Please also extend the conclusions of the study to be more functional and applied.

Reviewer #4: Manuscript Title: Contextual validation of HEMLEM tool used for measuring clinical micro-learning environments

Comments to the Author: This manuscript presents a well-executed, contextually grounded validation study of the HEMLEM 2.0 instrument for assessing clinical micro-learning environments in Pakistani medical and dental education. The study fills an important gap in the literature on culturally appropriate psychometric tools, particularly in low- and middle-income countries (LMICs). The authors have made substantial revisions that greatly improve the clarity, statistical transparency, and overall rigor of the study.

Major Strengths:
• Strong methodological rigor using Confirmatory Factor Analysis (CFA), bifactor modeling, and measurement invariance testing across gender and discipline.
• Inclusion of both Cronbach’s alpha and McDonald’s omega enhances the psychometric robustness of the findings.
• Cultural adaptation (e.g., changing “placement” to “department”) is thoughtful and contextually appropriate.
• Sampling from all four provinces and inclusion of both MBBS and BDS students increases generalisability within the Pakistani context.
• Measurement invariance testing supports the tool’s cross-group validity.

Areas for Improvement:

1. Test-Retest Reliability: While the internal consistency is well established, the omission of test-retest reliability is a limitation. This should be more prominently acknowledged in both the discussion and limitations sections, along with recommendations for future research to assess the temporal stability of the tool.

2.
Subscale Definitions: The conceptual distinction between subscales, particularly “Autonomy” and “Teaching Quality”, is not sufficiently clear. Further explanation or reconsideration of the naming conventions would improve interpretability.

3. Linguistic and Cultural Sensitivity: Although English is the medium of instruction, the paper would benefit from deeper reflection on potential language comprehension barriers. Consider elaborating on whether any support (e.g., bilingual review or pilot testing in native languages) was explored.

4. Practical Implications: The discussion of how the validated HEMLEM 2.0 can be used for curriculum reform, rotation evaluation, and faculty development is promising but somewhat general. Including specific use-case examples (e.g., use in short rural clinical rotations or quality assurance dashboards) would enhance the applied relevance of the work.

5. Data Transparency: While summary data are provided, uploading anonymized raw data and analysis code (e.g., R syntax) to a public repository such as OSF or Figshare would further strengthen transparency and reproducibility.

Recommendation: Minor Revision

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: Yes: Mohammad H Yarmohammadian
Reviewer #4: Yes: Mohammad Aminul Islam

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org.
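Reviewer #4 notes the value of reporting McDonald's omega alongside Cronbach's alpha. For a unidimensional scale, omega can be computed directly from standardized factor loadings; the sketch below uses hypothetical loadings for illustration only (the manuscript's actual loadings are not reproduced here).

```python
def mcdonald_omega(loadings):
    """McDonald's omega from standardized factor loadings:
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses),
    where uniqueness_i = 1 - loading_i^2 for unit-variance items."""
    s = sum(loadings)
    uniqueness = sum(1 - l * l for l in loadings)
    return s * s / (s * s + uniqueness)

# Hypothetical standardized loadings for a six-item subscale
# (illustrative values, not taken from the manuscript under review).
loadings = [0.72, 0.68, 0.75, 0.70, 0.66, 0.74]
omega = mcdonald_omega(loadings)  # ~0.86
```

Unlike alpha, which assumes every item relates to the construct equally strongly (tau-equivalence), omega weights each item by its estimated loading, which is why reporting both is generally considered more robust.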
Revision 2
Dear Dr. Sethi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 20 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Fatemeh Farshad
Guest Editor
PLOS ONE

Journal Requirements:

1. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #4: All comments have been addressed
Reviewer #5: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #4: Yes
Reviewer #5: (No Response)

**********

3.
Has the statistical analysis been performed appropriately and rigorously?

Reviewer #4: Yes
Reviewer #5: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? (The PLOS Data policy)

Reviewer #4: Yes
Reviewer #5: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #4: Yes
Reviewer #5: (No Response)

**********

Reviewer #4: Dear Authors,

Thank you for your diligent work in revising the manuscript, "Contextual validation of HEMLEM tool used for measuring clinical micro-learning environments." Your comprehensive responses to the reviewers' comments have significantly enhanced the quality, clarity, and transparency of your work. Here is a summary of how effectively you have addressed the feedback:

Clarity of Methodology and Applied Conclusions (Reviewer 3, Comment 1): You have successfully clarified the distinct phases of your study in the "Materials & Methods" section, making the methodological approach much easier to follow. Furthermore, the expanded "Conclusion" section now provides more practical and functional implications of the HEMLEM 2.0 tool, which is a valuable addition.

Addressing Test-Retest Reliability (Reviewer 4, Comment 1): You have appropriately acknowledged the omission of test-retest reliability as a limitation in the Discussion and have clearly recommended future longitudinal studies to assess the tool's temporal stability. This demonstrates a thorough understanding of the psychometric properties required for instrument validation.

Clarifying Subscale Definitions (Reviewer 4, Comment 2): Your detailed explanation differentiating "Autonomy" and "Teaching Quality" in the "Preliminary Psychometric Testing and Exploratory Analysis" subsection is very helpful. This clarification significantly improves the interpretability of these conceptually distinct subscales.
Enhancing Linguistic and Cultural Sensitivity (Reviewer 4, Comment 3): The added description of cognitive interviews, where you explicitly explored potential language comprehension barriers with students, addresses this point effectively. Your commitment to ensuring linguistic accessibility and cultural relevance through multiple rounds of student feedback is commendable.

Providing Specific Practical Implications (Reviewer 4, Comment 4): You have successfully incorporated more concrete use-case examples for the HEMLEM 2.0 tool in the "Discussion" section. The suggestions for integrating it into digital dashboards, using it in rural clinical placements, and informing faculty development initiatives make the applied relevance of your work much clearer.

Improving Data Transparency (Reviewer 4, Comment 5): Your decision to make the anonymized dataset publicly available on Zenodo, along with the direct link in the "Data Availability Statement," is an excellent step. This significantly boosts the transparency and reproducibility of your research, aligning with best practices in scientific publishing.

Overall, your revisions have substantially strengthened the manuscript. The added details, clarifications, and increased transparency reflect a thorough and thoughtful engagement with the reviewers' feedback.

Reviewer #5: The manuscript presents a well-conducted study on the cultural adaptation and validation of the HEMLEM tool for assessing clinical micro-learning environments in Pakistani medical and dental schools. The study addresses an important gap in the literature by providing a psychometrically robust tool tailored to a specific cultural context. The methodology is rigorous, and the results are clearly presented. However, there are several methodological and statistical concerns that need to be addressed to strengthen the validity and reliability of the findings.
Major Comments

Factor Analysis Methodology
The manuscript currently uses Principal Component Analysis (PCA) with Varimax rotation. However, given the presence of cross-loadings in Table 3, Promax rotation (an oblique rotation method) would be more appropriate. Oblique rotations are better suited when factors are expected to correlate, which is likely the case here (e.g., "Supervision," "Autonomy," and "Atmosphere" may not be entirely independent). Recommendation: Re-run the factor analysis using Exploratory Factor Analysis (EFA) with Promax rotation and report the updated eigenvalues, percentages of variance explained, and factor loadings. Cite and follow the approach outlined in the referenced article (https://npt.tums.ac.ir/index.php/npt/article/view/2920) to ensure methodological rigor.

Construct Validity
The manuscript lacks a clear justification for the choice of PCA over EFA. PCA is a data reduction technique, while EFA is designed to uncover latent constructs. For scale validation, EFA is more appropriate. Recommendation: Replace PCA with EFA and explicitly state the rationale for this choice. Discuss the implications of using EFA for construct validity, referencing the limitations and advantages as outlined in the suggested article.

Measurement Invariance
The multi-group CFA results are well-presented, but the manuscript could benefit from a more detailed discussion of the implications of full measurement invariance across gender and discipline. How does this invariance support the tool's applicability in diverse settings? Recommendation: Expand the discussion to address the practical significance of measurement invariance, particularly for future cross-cultural applications.

Test-Retest Reliability
The absence of test-retest reliability is a notable limitation. While this is acknowledged in the discussion, the manuscript would benefit from a more detailed plan for future studies to address this gap.
Recommendation: Propose a specific timeline and methodology for assessing test-retest reliability in future research (e.g., administering the tool to the same cohort at two time points during their clinical rotations).

Terminology and Clarity
The distinction between "Autonomy" and "Teaching Quality" is somewhat unclear. The manuscript provides a post-hoc explanation, but this could be strengthened by clearer definitions during the tool's development phase. Recommendation: Revise the definitions of these subscales in the Methods section to ensure clarity and avoid conceptual overlap.

Minor Comments

Sample Size Justification
While the sample size is adequate, the manuscript could better justify the choice of 628 participants by referencing contemporary guidelines for factor analysis (e.g., minimum sample size relative to the number of items and expected factor structure). Recommendation: Cite recent literature on sample size requirements for EFA/CFA (e.g., Gunawan et al., 2021) to strengthen the methodology section.

Data Transparency
The data availability statement is commendable, but the manuscript could provide more details about the anonymization process and any restrictions on data access. Recommendation: Briefly describe the steps taken to anonymize the data and ensure participant confidentiality.

Language and Grammar
The manuscript is well-written, but there are minor spelling inconsistencies (e.g., "generalizability" vs. "generalisability"). Ensure consistency in spelling (American vs. British English). Recommendation: Perform a thorough proofread to correct minor errors and ensure consistency.

Statistical and Methodological Strengths
The use of both Cronbach’s alpha and McDonald’s omega to assess reliability is a strength, as it addresses limitations of alpha alone. The inclusion of measurement invariance testing across gender and discipline adds robustness to the findings.
The cultural adaptation process (e.g., replacing "placement" with "department") is well-executed and contextually appropriate.

Suggested Decision: Minor Revisions
The manuscript is methodologically sound and makes a valuable contribution to the field. The requested revisions are primarily technical (e.g., re-running factor analyses with EFA/Promax rotation) and clarificatory (e.g., expanding definitions and justifications). Once these issues are addressed, the manuscript will be suitable for publication.

Additional Recommendations for Future Work
- Longitudinal Studies: Assess test-retest reliability and the tool's sensitivity to changes in the learning environment over time.
- Cross-Cultural Validation: Extend validation to other LMICs to further establish the tool's global applicability.
- Integration with Performance Metrics: Explore correlations between HEMLEM scores and objective clinical performance measures (e.g., OSCE results).

The authors are commended for their thorough work, and I look forward to seeing the revised manuscript.

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #4: Yes: Mohammad Aminul Islam
Reviewer #5: Yes: Hamid Sharif-Nia

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/.
PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org.
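Reviewer #5's distinction between PCA and EFA hinges on what each method extracts: PCA finds directions of maximal total variance in the observed scores (pure data reduction, mathematically an eigen-decomposition of the sample covariance matrix), while EFA models only the variance items share through latent factors. A minimal, self-contained power-iteration sketch of PCA's first component on hypothetical two-item data (illustrative only, unrelated to the HEMLEM dataset):

```python
import random
import statistics

def first_principal_component(rows, iters=200):
    """Return the first principal component (unit eigenvector of the
    sample covariance matrix) via power iteration. PCA extracts the
    direction of maximal total variance, with no separation of shared
    versus unique variance as in EFA."""
    n, d = len(rows), len(rows[0])
    means = [statistics.fmean(col) for col in zip(*rows)]
    centered = [[x - m for x, m in zip(row, means)] for row in rows]
    # Sample covariance matrix of the centered data.
    cov = [[sum(row[i] * row[j] for row in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Hypothetical scores for two strongly correlated items: the first
# component lands near the diagonal, weighting both items about equally.
random.seed(0)
data = []
for _ in range(500):
    x = random.gauss(3, 1)
    data.append([x, x + random.gauss(0, 0.1)])
pc1 = first_principal_component(data)
```

Because PCA folds each item's unique variance into its components, its loadings tend to overstate item-factor relationships relative to EFA, which is one reason reviewers recommend EFA (with an oblique rotation such as Promax when factors are expected to correlate) for scale validation.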
Revision 3
Dear Dr. Sethi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Oct 18 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript. Kind regards, Fatemeh Farshad Guest Editor PLOS ONE

Journal Requirements:
If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.
Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments (if provided):

Reviewer #5: This manuscript describes the cultural adaptation and psychometric validation of the HEMLEM tool for use in Pakistani medical and dental schools. The study addresses an important gap in the literature concerning the assessment of clinical micro-learning environments in a new cultural context.
The work is generally well-structured, and the authors have undertaken a rigorous process for content validation. The use of advanced statistical techniques like Multi-Group Confirmatory Factor Analysis (MGCFA) for measurement invariance is a significant strength. However, several critical methodological and statistical concerns, particularly regarding the conflation of Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA), undermine the validity of the reported findings. These issues require substantial revision.

Recommendation: Major Revision

Detailed Comments for the Authors

1. Major Conceptual and Statistical Flaw: Conflation of EFA and CFA
This is the most critical issue in the manuscript. The authors state that their primary analysis was CFA to test an a priori three-factor model. However, the methodology and results sections are heavily, and confusingly, reliant on EFA techniques.
Page 21, Lines 293-303; Page 22, Table 3: The presentation of a "Rotated Component Matrix" using Principal Component Analysis (PCA) with Varimax rotation is a purely exploratory technique. PCA is a data reduction method, not a latent variable modeling technique like EFA or CFA. Presenting this after claiming to have run a CFA creates a contradictory narrative.
Justification in Response to Reviewers (Pages 2-3): The authors' response to Reviewer 5, while defending the use of CFA, acknowledges that Table 3 displays "preliminary loadings for reader transparency." These exploratory values should not be the basis for assigning items to domains (as done on Lines 298-300: "based on the factor loading values, items 1 till 6 were assigned to domain 1..."). In a true CFA, the factor structure (which items load on which factor) is pre-specified based on theory or prior EFA, not determined from the current dataset.
Recommendation: The authors must clarify their analytical strategy. If an EFA/PCA was indeed conducted on the Pakistani dataset to explore the factor structure, this must be explicitly stated in the methods before the CFA is introduced. The results of the EFA should be presented first, followed by the CFA, which tests the structure suggested by the EFA (or the original theoretical structure). The current presentation suggests a circular analysis in which the same data are used to explore and then confirm the structure, which is methodologically unsound. If the CFA was truly theory-driven and confirmatory, the EFA/PCA results (Table 3) should be removed entirely, as they are redundant and misleading.

2. Sample Size Justification
Page 17, Lines 201-202: The justification ("10 participants per item") is outdated and not a robust methodological standard. While the final sample size of N=628 is excellent for a 12-item scale and provides high statistical power, the justification should be updated with contemporary references that consider desired factor loading magnitudes, number of factors, and anticipated communalities (e.g., Kyriazos, 2018; Wolf et al., 2013).
Page 25, Lines 390-399: The sample size discussion in the manuscript is better, citing Comrey & Lee and others. This stronger justification should be moved to the Methods section.

3. Measurement Invariance Reporting
Page 24, Lines 334-344: The MGCFA is a strength. However, the results are not sufficiently detailed. The authors must report the fit indices (χ², df, CFI, TLI, RMSEA, SRMR) for the configural, metric, and scalar models for both gender and discipline in the main text or a table, not just refer to supplemental materials. The critical values for evaluating invariance (ΔCFI < 0.01, ΔRMSEA < 0.015) should be stated in the Methods section.

4. Reliability Reporting
Page 24, Lines 346-347: Reporting both Cronbach's alpha and McDonald's omega is a best practice. However, the value of "composite reliability" (0.766) is mentioned without a clear definition. It should be specified that this is likely the maximal reliability for the general factor (H) if referring to the bifactor model, or the construct reliability for the specific factors. The calculation formula (e.g., Raykov's rho) should be referenced.

5. Terminology and Clarity
Page 22, Lines 304-312: The distinction between "Autonomy" and "Teaching Quality" is helpful. However, the manuscript later reverts to the original three-factor model labels ("Supervision, Autonomy, Atmosphere"). There is an inconsistency between the two-domain structure mentioned in the EFA section (Staff Attitudes/Teaching Quality) and the three-domain CFA structure. This discrepancy must be explicitly addressed and resolved. The labels used in the final model must be consistent throughout the paper.

6. Data Analysis Description
Page 18, Lines 233-242; Page 24, Lines 334-344: The description of MGCFA is good. However, the software mention on Page 24, Line 245 (R with lavaan) contradicts the earlier statement on Page 18, Line 213 (IBM SPSS AMOS). This must be corrected for consistency.
Page 18, Line 214: The assessment of normality via skewness and kurtosis is appropriate. It would be beneficial to state the specific cut-off values used (e.g., |skewness| < 2, |kurtosis| < 7).

7. Limitations
Pages 27-28, Lines 480-497: The limitations section is comprehensive and well-written. It appropriately acknowledges the sampling method, lack of test-retest reliability, and potential for bias.

Summary of Required Revisions
Clarify the Factor Analysis Sequence: Decide and clearly report whether an EFA was conducted. If yes, present the EFA results first, then use them to inform the CFA model. If the study was purely confirmatory, remove the EFA/PCA results (Table 3) and all references to using the current data to assign items to factors.
Resolve Terminology Inconsistencies: Ensure the names of the factors and subscales are consistent between the EFA (if kept) and CFA sections and throughout the manuscript.
Provide Detailed MGCFA Results: Report full fit indices for all invariance models in the main text.
Improve Sample Size Justification: Replace the "10 per item" rule with a more contemporary justification in the Methods section.
Correct Software Inconsistencies: Ensure the software used for each analysis is correctly and consistently reported.
Minor Proofreading: Address minor grammatical errors and ensure consistency in spelling (e.g., "generalizability" vs. "generalisability").

Conclusion
The authors have undertaken a valuable study with considerable methodological strengths, including a strong cultural adaptation process, a large sample, and the application of advanced statistical techniques like MGCFA. However, the fundamental confusion between exploratory and confirmatory analysis paradigms is a major flaw that affects the interpretation and validity of the core findings. Addressing this central issue is paramount. Once these major revisions are complete, this manuscript has the potential to be a significant contribution to the field of health professions education.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments: Reviewer's Responses to Questions

Comments to the Author
Reviewer #5: (No Response)
**********
2. Is the manuscript technically sound, and do the data support the conclusions?
Reviewer #5: (No Response)
**********
3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #5: (No Response)
**********
4. Have the authors made all data underlying the findings in their manuscript fully available? (See the PLOS Data policy.)
Reviewer #5: (No Response)
**********
5. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #5: (No Response)
**********

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #5: No
**********
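The reviewer asks for the invariance cut-offs (ΔCFI < 0.01, ΔRMSEA < 0.015) to be stated explicitly in the Methods and for reliability coefficients to be clearly defined. Purely as an illustration — this is not the authors' analysis, and all fit values below are hypothetical placeholders — the comparison logic the reviewer wants reported, together with a basic Cronbach's alpha calculation, can be sketched in Python:

```python
import numpy as np

# Illustrative sketch only; numeric inputs are hypothetical, not manuscript data.

def invariance_step_holds(fit_less, fit_more,
                          max_cfi_drop=0.01, max_rmsea_rise=0.015):
    """Conventional MGCFA decision rule: the more constrained model
    (e.g., metric vs. configural) should not drop CFI by 0.01 or more,
    nor raise RMSEA by 0.015 or more."""
    return (fit_less["CFI"] - fit_more["CFI"] < max_cfi_drop
            and fit_more["RMSEA"] - fit_less["RMSEA"] < max_rmsea_rise)

def cronbach_alpha(items):
    """Cronbach's alpha; items is a 2-D array, rows = respondents,
    columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance per item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical fit indices for a gender MGCFA (configural -> metric -> scalar).
configural = {"CFI": 0.962, "RMSEA": 0.048}
metric = {"CFI": 0.958, "RMSEA": 0.050}
scalar = {"CFI": 0.949, "RMSEA": 0.055}
print(invariance_step_holds(configural, metric))  # True: metric invariance holds
print(invariance_step_holds(metric, scalar))      # True: scalar invariance holds
```

In practice the fit indices would come from the fitted configural, metric, and scalar models (e.g., via fitMeasures() in R's lavaan); the sketch only shows the decision rule applied to them.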
|
| Revision 4 |
|
Dear Dr. Sethi, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Dec 11 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org . When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript. Kind regards, Fatemeh Farshad Guest Editor PLOS ONE

Journal Requirements:
1. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.
2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments: In line with the reviewer's comment, it is recommended that the manuscript be revised by a professional English editor.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments: Reviewer's Responses to Questions

Comments to the Author
Reviewer #6: All comments have been addressed
**********
2.
Is the manuscript technically sound, and do the data support the conclusions?
Reviewer #6: Yes
**********
3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #6: Yes
**********
4. Have the authors made all data underlying the findings in their manuscript fully available? (See the PLOS Data policy.)
Reviewer #6: Yes
**********
5. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #6: No
**********

Reviewer #6: It is a good topic. After four rounds of review, the manuscript has improved. It is recommended that it be revised by a professional English editor. (Minor revision.)

**********

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #6: No
**********

To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures You may also use PLOS’s free figure tool, NAAS, to help you prepare publication quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation . NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications. |
| Revision 5 |
|
Contextual validation of HEMLEM tool used for measuring clinical micro-learning environments

PONE-D-24-54864R5

Dear Dr. Sethi, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards, Fatemeh Farshad Guest Editor PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments: |
| Formally Accepted |
|
PONE-D-24-54864R5 PLOS ONE

Dear Dr. Sethi, I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:
* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing. If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Fatemeh Farshad Guest Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.