Peer Review History

Original Submission: March 24, 2025
Decision Letter - Tien-Dung Cao, Editor

PONE-D-25-15824
Achieving consistency in FedSAM using local adaptive distillation on sports image classification
PLOS ONE

Dear Dr. Zhen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 01 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Tien-Dung Cao, PhD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections does not match.

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

3. Thank you for stating the following financial disclosure:

[Sichuan Science and Technology Program, 2025ZNSFSC1498, Not applicable].

Please state what role the funders took in the study.  If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed.

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

4. Thank you for stating the following in the Acknowledgments Section of your manuscript:

[This work is supported by Sichuan Science and Technology Program with Grant ID 2025ZNSFSC1498.]

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

[The author(s) received no specific funding for this work.]

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

5. We note that your Data Availability Statement is currently as follows: [All relevant data are within the manuscript and its Supporting Information files.]

Please confirm at this time whether or not your submission contains all raw data required to replicate the results of your study. Authors must share the “minimal data set” for their submission. PLOS defines the minimal data set to consist of the data required to replicate all study findings reported in the article, as well as related metadata and methods (https://journals.plos.org/plosone/s/data-availability#loc-minimal-data-set-definition).

For example, authors should submit the following data:

- The values behind the means, standard deviations and other measures reported;

- The values used to build graphs;

- The points extracted from images for analysis.

Authors do not need to submit their entire data set if only a portion of the data was used in the reported study.

If your submission does not contain these data, please either upload them as Supporting Information files or deposit them to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories.

If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. If data are owned by a third party, please indicate how others may request data access.

6. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager.

7. We note that Figures 1 and 2 in your submission contain copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

1. You may seek permission from the original copyright holder of Figures 1 and 2 to publish the content specifically under the CC BY 4.0 license.

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

2. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise. 


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors introduce a federated learning (FL) paradigm called A-FedSAM, which employs adaptive local distillation to ensure consistent smoothing between local and global models. Using sports image classification datasets, they demonstrate that A-FedSAM reaches the target accuracy while reducing communication overhead.

While the paper presents promising experimental results, it lacks a solid theoretical foundation, as no convergence theorem is stated or proven. I suggest including a formal convergence analysis to enhance the theoretical rigor. Moreover, the experimental evaluation is limited to sports image classification, which limits the generalizability of the results. To strengthen the empirical validation, I recommend incorporating comparisons on widely used benchmark datasets such as CIFAR-10, CIFAR-100, and TinyImageNet, as demonstrated in prior work like FEDSPEED [22].

Reviewer #2: This paper introduces A-FedSAM, a novel federated learning (FL) paradigm tailored for sports image classification under non-IID data conditions. The method builds upon Sharpness-Aware Minimization (SAM) by incorporating an adaptive local knowledge distillation mechanism. By treating the global model as a teacher, the approach aligns local gradients with the global objective, addressing the "smoothness inconsistency" challenge in FedSAM. The authors provide rigorous experiments using SPORT1 and SPORT2 datasets and demonstrate performance improvements in accuracy, communication efficiency, and convergence speed. The methodology is sound, well-motivated, and clearly presented, and the ablation study strengthens the validity of the contributions. However, a few technical, writing, and contextual aspects need refinement, which are outlined below.

1- The definition and theoretical framing of “smoothness inconsistency” could benefit from mathematical formalization or clearer empirical demonstration beyond intuitive explanation and illustrations. How is this quantitatively defined or detected?

2- The adaptive distillation term is added directly to the SAM objective. However, this merges two different types of losses (KL divergence and cross-entropy/smoothness-based optimization). A short justification for their direct additive combination—especially how gradient magnitudes are balanced—should be provided.

3- The use of an exponentially weighted moving average (EWMA) to deal with the unreliability of early-stage global models is introduced, but the mechanism’s parameters (e.g., λ in α(t)) are only briefly described. More empirical or theoretical guidance for choosing λ would be helpful.

4- While the communication overhead is discussed in depth, the computational overhead introduced by SAM and distillation (e.g., forward pass for teacher, KL loss, gradient ascent for perturbation) is not compared across baselines. Including this would present a more complete view of efficiency.

5- While sports image classification is the focus, the methodology is general. The paper would benefit from a brief discussion or experiment that confirms the generalizability of A-FedSAM to other FL domains or modalities (e.g., medical imaging, text).

6- The Results/Experimental Analysis section is overly repetitive when describing performance gains across various splits. For instance, similar phrasing is repeated across SPORT1 and SPORT2. This could be condensed into comparative bullet points or a synthesized performance table with observations grouped by dataset.

7- The paper overlooks recent work on Federated Proximal Optimization methods (e.g., FedProx) and gradient clipping or noise-based regularization approaches in handling non-IID data. For example:

a) Breaking Interprovincial Data Silos: How Federated Learning Can Unlock Canada's Public Health Potential

b) A Robust Privacy-Preserving Federated Learning Model Against Model Poisoning Attacks

c) Hybrid privacy preserving federated learning against irregular users in next-generation Internet of Things

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Abbas Yazdinejad

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

Dear Editors and Reviewers,

We would like to thank you for your efforts in providing us with constructive suggestions. We have carefully revised the manuscript according to your comments. Below we detail the changes made to the manuscript in response to each comment (the order of responses corresponds to the order of comments).

Response to Academic Editor: We thank the Academic Editor for carefully reviewing our manuscript and providing clear guidance to ensure full compliance with PLOS ONE's publication standards. Below, we provide a point-by-point response addressing each of the listed requirements:

1. Manuscript Style and Formatting

We confirm that the revised manuscript has been carefully checked and revised to fully comply with the PLOS ONE formatting requirements, as outlined in the provided templates for the main body and title/author sections.

2. Funding Information Consistency

We have removed all previously stated funding information from the manuscript, including from the Acknowledgments section, in order to ensure full compliance with PLOS ONE’s formatting policies.

3. Role of Funder Statement

As no external funding was received for this study, the funder role statement is no longer applicable and has been removed accordingly.

4. Acknowledgment and Funding Section Separation

We have removed all funding-related content from the Acknowledgments section. The funding source (Sichuan Science and Technology Program, Grant ID: 2025ZNSFSC1498) now appears exclusively in the Funding Statement section in accordance with PLOS ONE policy.

5. Data Availability Statement and Raw Data Confirmation

We confirm that the manuscript includes all data required to replicate the results. This includes:

Values underlying accuracy metrics, standard deviations, and variance values;

Data points used in figures and tables;

Experimental settings, model parameters, and dataset splits.

The datasets used in our experiments are publicly available and have been properly cited with URLs or identifiers in the corresponding references. Therefore, readers can access all raw data required for reproduction through the cited benchmark datasets. All other relevant materials are included in the manuscript or as Supporting Information files, in full compliance with PLOS ONE's minimal data set policy.

6. ORCID iD Validation

The ORCID iD for the corresponding author Jie Wu has been validated in the Editorial Manager system as requested. The ORCID iD is: https://orcid.org/0009-0003-1418-2247.

7. Copyrighted Figures (Figures 1 and 2)

Figures 1 and 2 have been retained in the manuscript; however, we have carefully revised the content to remove any elements that may pose potential copyright concerns. All visual components in the updated figures now comply fully with PLOS ONE's CC BY 4.0 licensing policy.

Please let us know if any further clarification or modification is needed.

Sincerely,

The Authors

Reviewer 1: The authors introduce a federated learning (FL) paradigm called A-FedSAM, which employs adaptive local distillation to ensure consistent smoothing between local and global models. Using sports image classification datasets, they demonstrate that A-FedSAM reaches the target accuracy while reducing communication overhead.

While the paper presents promising experimental results, it lacks a solid theoretical foundation, as no convergence theorem is stated or proven. I suggest including a formal convergence analysis to enhance the theoretical rigor. Moreover, the experimental evaluation is limited to sports image classification, which limits the generalizability of the results. To strengthen the empirical validation, I recommend incorporating comparisons on widely used benchmark datasets such as CIFAR-10, CIFAR-100, and TinyImageNet, as demonstrated in prior work like FEDSPEED [22].

Response: We sincerely thank the reviewer for highlighting these critical concerns. We have carefully addressed both the theoretical and experimental limitations noted, and made substantial improvements in the revised version. Specifically, we have added a formal convergence analysis under mild assumptions, offering a theoretical guarantee of A-FedSAM’s training stability. Additionally, we have significantly expanded the empirical evaluation by incorporating diverse and widely-used benchmark datasets, including CIFAR-10, CIFAR-100, Tiny-ImageNet, and AG-News. These enhancements establish both the theoretical rigor and the general applicability of our method. We elaborate on each point below:

1. Lack of theoretical foundation: convergence analysis is missing: While the paper presents promising experimental results, it lacks a solid theoretical foundation, as no convergence theorem is stated or proven. I suggest including a formal convergence analysis to enhance the theoretical rigor.

Response: To address the first point, we have incorporated a formal convergence analysis into the main body of the revised manuscript (see Section “Theoretical Analysis”). Specifically, we establish a convergence theorem (Theorem 1) under standard assumptions, including L-smoothness, bounded stochastic gradients, and client dissimilarity. Our analysis shows that A-FedSAM converges in expectation to a stationary point with a convergence rate of O(1/T), accompanied by a bounded residual term \Phi that reflects variance and perturbation effects. The adaptive distillation term is handled through a proximal approximation, and our proof is constructed upon the theoretical framework used in prior works such as FedGKD and FedSpeed, enabling the analysis to capture the joint behavior of SAM perturbations and knowledge distillation. This new section provides a solid theoretical foundation for A-FedSAM and enhances the methodological rigor of the paper.
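As a compact restatement of the guarantee described above (the symbols here are illustrative; the exact constants and the definition of \Phi follow the manuscript's Theorem 1):

```latex
\min_{0 \le t < T} \; \mathbb{E}\left[ \big\| \nabla F(w^{t}) \big\|^{2} \right]
  \;\le\; \mathcal{O}\!\left(\tfrac{1}{T}\right) + \Phi
```

where F denotes the global objective, w^t the global model after round t, and \Phi a bounded residual collecting the variance and SAM-perturbation effects.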

2. Limited evaluation scope: only on sports image classification: The experimental evaluation is limited to sports image classification, which limits the generalizability of the results. To strengthen the empirical validation, I recommend incorporating comparisons on widely used benchmark datasets such as CIFAR-10, CIFAR-100, and TinyImageNet, as demonstrated in prior work like FEDSPEED [22].

Response: In response to the second suggestion, we have significantly expanded the experimental section to demonstrate the generalizability of A-FedSAM across diverse tasks. In addition to SPORT1 and SPORT2, we now evaluate our method on four widely-used benchmark datasets: CIFAR-10, CIFAR-100, TinyImageNet, and AG-News. These benchmarks cover a range of modalities, including coarse- and fine-grained natural images and text classification, and enable us to assess the robustness of A-FedSAM across different data distributions and domains. Experimental results, summarized in Table 3, show that A-FedSAM consistently outperforms state-of-the-art baselines under various non-IID partitions (Dirichlet and pathological) and low client participation settings. The performance gains are especially pronounced in challenging settings such as CIFAR-100 and TinyImageNet, which confirms that our method is not limited to sports-related image classification.

We have also updated the Datasets and Data Partitioning subsections and the Model section, which now describe the new benchmarks and their network settings. This extension strengthens the empirical credibility of A-FedSAM and affirms its applicability to general federated learning tasks beyond the initial motivation in the sports domain.

Once again, we appreciate the reviewer’s insightful suggestions, which helped us improve the completeness and impact of our work.

We would like to express our sincere gratitude for your insightful comments and suggestions. Your emphasis on both theoretical rigor and empirical generalizability has significantly improved the quality and scope of our work. We hope that the added convergence analysis and expanded benchmark evaluations satisfactorily address your concerns and demonstrate the robustness and broad applicability of A-FedSAM.

Reviewer 2

1. Clarifying the theoretical framing of “smoothness inconsistency.”: The definition and theoretical framing of "smoothness inconsistency" could benefit from mathematical formalization or clearer empirical demonstration beyond intuitive explanation and illustrations. How is this quantitatively defined or detected?

Response: Thank you for pointing this out. In the revised version, we provide a more precise definition of "smoothness inconsistency" as the discrepancy between sharpness-aware gradients computed locally (via perturbation) and the direction of global model optimization. This inconsistency arises from the misalignment between local loss landscapes and the global objective under non-IID data.

To support this, we introduce a convergence analysis in the "Theoretical Analysis" section, which shows that our adaptive local distillation term helps regularize local updates towards the global direction. Mathematically, we model this effect via a proximal quadratic regularizer (following FedGKD) that penalizes divergence from the global model. Empirically, we demonstrate reduced variance in gradient norms and faster convergence curves in Figures 7–11, indicating improved alignment and optimization stability. Together, these offer both theoretical and empirical grounding for the notion of smoothness inconsistency.
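As one simple illustration of how such a discrepancy can be measured (this is a generic cosine-based diagnostic, not the paper's exact metric; the function names are our own), the misalignment between a client's sharpness-aware gradient and the global update direction can be scored as follows:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def gradient_misalignment(local_grad, global_direction):
    """Smoothness-inconsistency score: 0 when the local SAM gradient
    is perfectly aligned with the global direction, up to 2 when it
    points the opposite way."""
    return 1.0 - cosine_similarity(local_grad, global_direction)
```

A score near 0 indicates that local sharpness-aware updates already agree with the global objective; larger scores flag the drift that the adaptive distillation term is designed to suppress.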

2. Justifying the additive combination of KL divergence and SAM loss: The adaptive distillation term is added directly to the SAM objective. However, this merges two different types of losses (KL divergence and cross-entropy/smoothness-based optimization). A short justification for their direct additive combination—especially how gradient magnitudes are balanced—should be provided.

Response: This is an excellent point. In the updated manuscript (see the "Adaptive Local Distillation" section), we provide a clearer rationale for combining the two objectives: the SAM term promotes flat minima and generalization by perturbation-based sharpness minimization, while the KL divergence term aligns the output distributions between the global and local models to reduce client drift.

Although the two components originate from different motivations, they both operate in the function space and are compatible with gradient-based optimization. To balance their relative influence, we apply a dynamic distillation coefficient \alpha(t) = 1 - \exp(-\lambda t), which scales the KL term gradually over time. Moreover, we perform a detailed sensitivity analysis on \lambda, the perturbation radius \rho, and the distillation temperature T (see the "Parameter Sensitivity" section, Tables 3-5), showing empirically that moderate settings (e.g., \lambda = 1.0) result in stable convergence and improved accuracy. This justifies the additive combination and demonstrates that A-FedSAM maintains proper gradient scaling between the two terms.
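For illustration, a minimal pure-Python sketch of this additive combination (the function names and the scalar `sam_loss` input are our own simplifications; the actual implementation operates on model tensors with autograd):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distillation_weight(t, lam=1.0):
    """Adaptive coefficient alpha(t) = 1 - exp(-lambda * t):
    near 0 in early rounds, approaching 1 as training matures."""
    return 1.0 - math.exp(-lam * t)

def combined_loss(sam_loss, teacher_logits, student_logits,
                  t, lam=1.0, temp=2.0):
    """SAM objective plus the adaptively weighted KL distillation term."""
    p_teacher = softmax(teacher_logits, temp)
    p_student = softmax(student_logits, temp)
    kl = kl_divergence(p_teacher, p_student)
    return sam_loss + distillation_weight(t, lam) * kl
```

Because alpha(0) = 0, an unreliable early-round teacher contributes nothing, and the KL term ramps up smoothly as the global model stabilizes.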

3. More guidance on choosing the \lambda parameter in \alpha(t): The use of an exponentially weighted moving average (EWMA) to deal with the unreliability of early-stage global models is introduced, but the mechanism's parameters (e.g., \lambda in \alpha(t)) are only briefly described. More empirical or theoretical guidance for choosing \lambda would be helpful.

Response: Thank you for the insightful comment. In the revised paper, we strengthen the justification for the choice of \lambda in the distillation coefficient \alpha(t) = 1 - \exp(-\lambda t). Theoretically, a smaller \lambda leads to slower growth and thus a weaker influence of distillation in early rounds, which is desirable while the global model is still unstable; a larger \lambda enforces faster regularization but may overfit to a noisy teacher.

To complement this, we now include an empirical sensitivity analysis of \lambda in Table 7 (the "Parameter Sensitivity" section). Results indicate that \lambda = 1.0 consistently offers the best performance across SPORT1 and SPORT2, while both smaller and larger values result in lower accuracy due to under- or over-constraining the local model. This empirical evidence offers clear guidance for tuning \lambda.

4. Quantifying the computational overhead of SAM and distillation: While the communication overhead is discussed in depth, the computational overhead introduced by SAM and distillation (e.g., forward pass for teacher, KL loss, gradient ascent for perturbation) is not compared across baselines. Including this would present a more complete view of efficiency.

Response: We agree that this was an important omission. In the revised "Computation Overhead" section, we discuss in detail the two sources of computation introduced by A-FedSAM: the extra forward/backward pass due to the SAM perturbation, and the forward pass plus KL computation from knowledge distillation (Table 4).

We analyze the per-step overhead and contrast it with standard FedAvg and FedSpeed. Moreover, we emphasize that these additions occur locally and do not impact communication costs. Empirically, we show that despite the added computation, A-FedSAM converges significantly faster (see convergence plots in Figures 7-11), reducing total runtime and energy in real-world training. Thus, the overall training efficiency improves, even accounting for the extra computations.

5. Confirming generalizability beyond sports image classification: While sports image classification is the focus, the methodology is general. The paper would benefit from a brief discussion or experiment that confirms the generalizability of A-FedSAM to other FL domains or modalities (e.g., medical imaging, text).

Response: Thank you for this valuable suggestion. We have expanded our experimental section (see Table 3) to include additional benchmarks spanning multiple modalities: CIFAR-10 (natural images), CIFAR-100 (fine-grained vision), TinyImageNet (complex multi-class vision), and AG-News (text classification). These additions demonstrate that A-FedSAM generalizes well beyond the sports domain.

Across all datasets, A-FedSAM consistently outperforms strong baselines under various non-IID and participation settings. This confirms that our method is not limited to sports classification and is broadly applicable to federated scenarios in both vision and NLP tasks.

6. Condensing repetitive analysis in the experimental section: The Results/Experimental Analysis section is overly repetitive when describing performance gains across various splits. For instance, similar phrasing is repeated across SPORT1 and SPORT2. This could be condensed into comparative bullet points or a synthesized performance table with observations grouped by dataset.

Response: We appreciate this observation and have revised the Results/Analysis section accordingly. Specifically, we split the evaluation into two concise sub-sections: one focused on SPORT datasets and the other on benchmark datasets. Each section synthesizes results across participation rates and data splits, removing redundant phrasing.

We replaced the paragraph-by-paragraph comparison with dataset-centered summaries, highlighting key trends and comparative insights. This improves readability and better emphasizes the advantages of A-FedSAM across tasks and settings.

7. Related work: FedProx, gradient clipping, and regularization methods: The paper overlooks recent work on Federated Proximal Optimization methods (e.g., FedProx) and gradient clipping or noise-based regularization approaches in handling non-IID data.

Response: Thank you for pointing this out. In the revised Related Work section, we have added discussions on:

- FedProx [Li et al., MLSys 2020]: constrains local updates with a proximal term to mitigate client drift. We note that our method instead introduces an adaptive distillation-based constraint that operates in the output space rather than directly on the parameters.

- Gradient clipping and noise regularization: We cite representative works such as "Breaking Interprovincial Data Silos: How Federated Learning Can Unlock Canada's Public Health Potential", "A Robust Privacy-Preserving Federated Learning Model Against Model Poisoning Attacks" and "Hybrid privacy preserving federated learning against irregular users in next-generation
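The contrast drawn above between FedProx's parameter-space penalty and an output-space distillation constraint can be sketched numerically. This is an illustrative comparison under our own naming, not code from either paper:

```python
import numpy as np

def fedprox_penalty(w_local, w_global, mu=0.01):
    """FedProx: parameter-space proximal term (mu/2) * ||w - w_global||^2,
    added to the local objective to keep weights near the global model."""
    d = w_local - w_global
    return 0.5 * mu * float(d @ d)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def distill_penalty(logits_local, logits_global):
    """Output-space constraint: KL(global softmax || local softmax),
    which only ties the two models' predictions, not their parameters."""
    p, q = softmax(logits_global), softmax(logits_local)
    return float(np.sum(p * np.log(p / q)))
```

Both penalties vanish when the local model matches the global one; the difference is that the distillation term leaves the parameters free as long as the predicted distributions agree.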

Attachments
Submitted filename: Response to Reviewers.pdf
Decision Letter - Tien-Dung Cao, Editor

Achieving consistency in FedSAM using local adaptive distillation on sports image classification

PONE-D-25-15824R1

Dear Dr. Wu,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the 'Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Tien-Dung Cao, PhD

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Thank you for submitting the revised manuscript. After carefully reviewing it myself and considering the reviewers' comments, I am pleased to inform you that it now meets the requirements for publication.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes

**********

6. Review Comments to the Author

Reviewer #1: Thank you for the thorough revision of the manuscript. I am satisfied with the changes, and I support its acceptance.

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

Reviewer #1: No

**********

Formally Accepted
Acceptance Letter - Tien-Dung Cao, Editor

PONE-D-25-15824R1

PLOS ONE

Dear Dr. Wu,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr Tien-Dung Cao

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.