Peer Review History
Original Submission (March 23, 2020)
PONE-D-20-04898

It’s how you say it: systematic A/B testing of digital messaging cut hospital no-show rates

PLOS ONE

Dear Mrs. Berliner Senderey,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by Jun 21 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:
Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Sreeram V. Ramagopalan
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Thank you for stating the following in the Competing Interests section: "The authors have declared that no competing interests exist." We note that one or more of the authors are employed by a commercial company: Kayma labs

a. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form.

Please also include the following statement within your amended Funding Statement. "The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the 'author contributions' section."

If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

b. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc.

Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials." (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these.

Please note that we cannot proceed with consideration of your article until this information has been declared.

c. Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency.
PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

3. Your ethics statement must appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please also ensure that your ethics statement is included in your manuscript, as the ethics section of your online submission will not be published alongside your manuscript.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The submitted manuscript has several strong features making it worthy of publication. The topic chosen has substantial impact on both outcomes and financing in Israel’s healthcare system, and likely in all national healthcare systems. The study population and dataset have high ecological validity, the intervention design seems thorough and well-controlled, and the analyses are appropriate for the general study objectives.
Most of the line-specific comments below have minor or moderate impacts on the overall quality of the study design and analysis. The major challenge with the manuscript, as written, is that the conclusions extend well beyond what can be supported by a conservative interpretation of the data and analyses. The analyses show that varying the message content of appointment reminders can influence no-show and cancellation rates. The specific message alternatives drafted by the study authors are inspired by well-regarded and influential behavioral theories. However, the authors have offered no analyses that demonstrate whether the specific stimuli they drafted for this study are actually perceived by study participants (or people like the study participants) to have the intended affective and motivational impacts. The existing data support that different messages are more effective, but do not currently support hypotheses about _why_ they are effective.

I hate to be the reviewer that requests more data and/or analysis, especially for an open access journal submission. However, there is a straightforward fix if you want to go beyond the conclusion that “different messages can influence no-show and cancellation rates”. You’d need to get independent ratings of the message content, separately from their effect on no-show and cancellation rates. Even post hoc, even with a convenience sample of Israelis, consistent Likert-scale ratings of the messages against statements like “This message makes me feel guilt” or “This message makes me feel peer pressure” could support the argument that the stimuli you drafted are good representations of the behavioral economic principles they’re supposed to represent. I can’t tell whether any of the authors are behavioral or social psychologists, but a relatively quick consultation with a research psychologist could yield a good set of rating questions to support the various source theories.

If the authors do not wish to conduct additional analysis to show that the stimuli are rated as having the effect intended by these various theories, then a resubmitted manuscript should substantially scale back the interpretation of results. It would be appropriate, consistent with the often theory-free discipline of A/B testing, to note simply that different messages influence appointment behaviors, and imply that continued A/B testing could optimize appointment behaviors even if no underlying theory is applied. It would even be appropriate to speculate that the varied appointment effects are consistent with the source behavior theories, with a heavy “more research is needed” caveat as a limitation. But the current data do not offer adequate support that any of these behavioral theories is responsible for study effects, nor that one theory predicts stronger effects than another.

Line-specific comments:

• Abstract line 11 - appears to contain a typo “Clalit’sto”. Additional typos and grammar notes appear in subsequent lines, but my review was not exhaustive. Recommend that the resubmitted manuscript first get a thorough proofread by a native English speaker.

• Text line 53 - “randomized” not “randomize”

• Text line 61 - “suggests” not “suggest”

• Text line 61 - “Behavioral economic theory” does correlate message contents and their “different motivational narratives” with varying impact, but “_testing_ different motivational narratives” is just a good practice encouraged in behavioral economics. If I’m understanding your sentence correctly, I think it is stronger with the word “testing” removed.

• Text line 65 - Be careful making a value assertion here about “underuse” - how much behavioral economics use would be enough? How would we determine this?

• Text line 72 - “affect” not “effect”

• Text line 73 - “the learning health system” not “health learning system”

• Text line 79 - “registerd” is misspelled

• Text line 155 - The methods are not clearly described, but much of the source data for CCI appear to be diagnoses available in the system across each subject’s entire medical history with Clalit. Access to diagnoses is likely unequal across the cohort, based on patient age and duration of using this particular clinic system. This is a source of noise that may have weakened the likelihood of showing positive correlations with CCI in study results. Randomization would have balanced out the degree of “availability” bias that this would have introduced in CCI calculations between treatment arms, but the chance of detecting a CCI main effect would still be weakened throughout the population.

• Text line 191 - It appears that this is multinomial testing, not serial A/B testing as discussed in marketing disciplines. The medical/health services research audience for your paper will not necessarily know what A/B testing is, so it should be defined more explicitly, or the market research term should be removed from this article.

• Text line 209 - Text lines 301-302, later in the paper, clarify that the 64.6% statistic _only_ refers to click-through behavior. This line implies that it refers both to reading and to click-through. The number of messages that were read is likely larger than 64.6%, but is not measured or reported separately. Please clarify the language accordingly.

• Text line 215 - Table 2 contains no statistical inference tests to support that “no significant differences were found”. It is not necessary to modify Table 2 to include tests if they were done and indeed not significant. If tests were done, a parenthetical phrase in this sentence would suffice (e.g., all p values > X). If tests were not done, please remove the “no significant differences were found” phrase.

• Text line 272 - Your analysis models do not support your ability to conclude that specific messages led to the “lowest” no-show rates or the “highest” advanced cancellation rates. This would require pairwise comparisons among the messages that reduced no-shows and increased cancellations. You can observe that specific messages had the greatest “nominal” differences from the control group, which acknowledges that “lowest” and “highest” are not supported by statistical inference testing. In reality, the pairwise ORs for no-shows and cancellations are quite comparable in magnitude among the five of your messages with significant differences from the control group.

• Text line 279 - “substancial” is misspelled

• Text lines 278-281 - Is your effect due to a “mere change in narrative” or specific “behavioral economics principles”? Your analyses don’t allow you to distinguish. The messages included in Supplement 1 would appear to a prudent layperson to match the principles listed in each row. But you’ve reported no independent message testing to confirm that the messages are perceived that way. This does not allow you to defend against any number of alternative explanations that have nothing to do with the specific behavioral economics principles. For example, is the difference because the effective messages have the longest word/character counts? Is this a Hawthorne effect, given that the control message had been in use for a while and most of the experimental messages were noticeable changes from the prior one? Is it because the effective messages provide a reason, any reason, to show up or cancel in advance (see Ellen Langer’s 1978 study on “placebic” information in persuasion)?

• Text lines 285-286 - You’ve shown no data to confirm whether patients receiving these messages felt any ‘affect’ or any specific emotional response, so the current version of your study cannot be evaluated against specific frameworks such as MINDSPACE.

• Text lines 306-308 - This is a minor point given the larger context of the study, but the multiple and varying number of appointments per patient in the sample leads to correlated error variance in your study dataset, which may have produced biased error estimates. If a study dataset allows some patients to count once, but others to count multiple times, a statistical analysis model that accounts for this correlation in error variances is often used. In your resubmission, it’s probably better to mention this as a limitation than to go back and apply a more sophisticated analysis model (e.g., generalized estimating equations) that may not improve your precision.

• Text lines 327-328 - It might make sense to acknowledge that a change like this might have additional unintended consequences besides improving visit attendance. For example, does using a guilt-based message encourage patient resentment that might reduce clinic or physician satisfaction ratings?

• Text line 332 - “customize” instead of “costumize”?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Vernon F. Schabert, Ph.D.

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.
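As a concrete illustration of the reviewer’s comment on text lines 306-308, the sketch below shows one way a repeated-appointments analysis could be set up with generalized estimating equations using Python’s statsmodels. This is a minimal sketch only, not the authors’ analysis code: the column names (patient_id, message_arm, no_show), the 'control' reference level, and the exchangeable working correlation are assumptions made purely for illustration.

```python
# Minimal sketch: GEE logistic regression that accounts for patients
# contributing multiple appointments, per the reviewer's suggestion.
# All column names and the reference level are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("appointments.csv")  # hypothetical appointment-level file

model = smf.gee(
    "no_show ~ C(message_arm, Treatment(reference='control'))",
    groups="patient_id",                      # clusters: repeated appointments per patient
    data=df,
    family=sm.families.Binomial(),            # binary outcome: no-show yes/no
    cov_struct=sm.cov_struct.Exchangeable(),  # assumed within-patient correlation structure
)
result = model.fit()
print(result.summary())

# Odds ratios and 95% CIs for each message arm relative to the control message
odds_ratios = np.exp(result.params)
conf_int = np.exp(result.conf_int())
print(pd.concat([odds_ratios, conf_int], axis=1))
```

An exchangeable working correlation is only one reasonable choice here; an independence working correlation with robust (sandwich) standard errors would be an equally defensible alternative for handling the clustering the reviewer describes.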
Revision 1
It’s how you say it: Systematic A/B testing of digital messaging cut hospital no-show rates

PONE-D-20-04898R1

Dear Dr. Berliner Senderey,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Sreeram V. Ramagopalan
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
Formally Accepted
PONE-D-20-04898R1

It’s how you say it: Systematic A/B testing of digital messaging cut hospital no-show rates

Dear Dr. Berliner Senderey:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff
on behalf of
Dr. Sreeram V. Ramagopalan
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.