Peer Review History
| Original Submission July 25, 2020 |
PONE-D-20-21281
The reciprocal relationships between social media self-control failure, mindfulness and wellbeing: A longitudinal study
PLOS ONE

Dear Dr. Du,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I have now received two reviews of your manuscript from experts in the field. Both reviewers felt the research was valuable and addressed an important gap in the current literature on social media, self-control and well-being, especially with respect to its design and the Open Science approach that was taken in conducting the research. Although both reviewers noted that they enjoyed reading your paper, they also noted several areas that needed further attention and invested considerable time in providing extremely detailed and constructive comments, along with suggested references, aimed at improving the paper. For example, both reviewers suggest that the paper would benefit from greater elaboration of the theoretical rationale and mechanisms that underpin the proposed relationships among the key variables. Similarly, Reviewer 1 raises some important issues with respect to how your constructs were operationalised that have important implications for the assumptions underlying the statistical modelling that was used. These are just some of the important issues that deserve close attention, and I will not repeat all of the reviewers’ recommendations for improvement here. I do believe that their concerns are substantial enough that the paper cannot be accepted in its present form. However, it may be possible that with careful attention to their comments, a substantially revised paper could be considered for publication. Given this, I would like to invite you to make substantial revisions in line with the reviewers’ comments and resubmit a revised version of the manuscript. Please include a cover letter detailing how you have dealt with each of the comments. Thank you for considering PLOS ONE as an outlet for your work and I look forward to receiving your revised manuscript.

Please submit your revised manuscript by Dec 06 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Fuschia M. Sirois, PhD
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: No
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I applaud the authors for a great paper, which I enjoyed reading, and a methodologically sound study. I have only a few recommendations to improve your work.
The most important one is that you should delve deeper into the theoretical connection of the constructs of interest (beyond only quoting empirical findings), thereby enriching your theoretical background and discussion/outlook.

(1) Theoretical rationale

a) Social media self-control failure (SMSCF) is an interesting construct. Nevertheless, we still know very little about it. First, I think you should make a stronger case for its relevance. Why is it important to investigate SMSCF? Why is it superior to other constructs (e.g., trait self-control, procrastination)? etc.

b) Second, the dynamic and reciprocal relationships are at the core of your assumptions and modeling. Theoretically, this part definitely needs more elaboration. Your assumptions comprise important deviations from Slater’s (2007, 2015) model. For instance, in Slater’s (2007) model, the central variables are media use as well as cognitive and behavioral variables (e.g., attitudes, behavior, identity). Your model focuses on SMSCF (which already is a cognitive effect of media use), mindfulness, and well-being. As a minimum, I would have expected some words on how Slater’s reinforcing spirals model transfers to your model.

c) From a theoretical perspective, I’d also like to read more about the theoretical connections between SMSCF, mindfulness, and well-being. Although you provide empirical evidence, little is said in terms of why they should be related (e.g., how can SMSCF theoretically affect trait mindfulness).

d) From my perspective, psychological well-being is misconceived. What you refer to is often understood as subjective or hedonic well-being, whereas psychological or eudaimonic well-being often relates to other theoretical concepts and operationalizations (Keyes et al., 2002; Martela & Sheldon, 2019; Ryff & Singer, 1998). Thus, the corresponding parts need clarification.

e) I’m not fully convinced of your elaboration on stability and change. I think you need to flesh out the trait and state components of the constructs you’ve theorized and also operationalized. For instance, all the measures you’ve applied can be considered as capturing state and trait components. Thus, the random intercept cross-lagged panel model (RI-CLPM) is an appropriate way to account for this. But I’m not sure if the operationalizations are those of variables that are really susceptible to change. All of them seem to rather tap into trait-like or generalizable statements than assess a specific state. There is no particular time reference in your measures—neither in the item stems nor in the response options. An item such as “How often do you give in to a desire to use social media even though your social media use at that particular moment makes you delay other things you want or need to do?” with response options ranging from “never” to “always” doesn’t make clear if this measures a situational behavior or a more stable personality trait. As you’ve shown (Du et al., 2018), the scale has high test–retest reliability and thus seems to rather measure a trait variable. Similarly, this also holds true for the mindfulness and the life satisfaction measure. Subjective vitality could be more prone to change but this may totally depend on your instruction (e.g., rate how you feel today, the last week, the time since the last questionnaire etc.). Still, there could be a large portion of trait variance influencing the responses.
In short, the role of trait and state components is appropriately accounted for in the RI-CLPM but neither discussed theoretically nor operationalized in a suitable way.

(2) Methodological soundness

Besides 1e), there’s little to moan about with regard to methods, although it would have been great to inspect the analyses’ code and check for reproducibility (see also 4b).

a) A somewhat more methodologically sophisticated approach would be to check the construct validity of the constructs using a confirmatory factor analysis and also test for measurement invariance across measurement occasions (Brown, 2015; Geiser et al., 2015). Perhaps not essential, but this could be used to show distinctness of the constructs within and across waves and comparable meaning of the measures across the three waves.

b) I’m not sure why Facebook use was measured. Moreover, in the study using the t1 data (Du et al., 2019), you reported the same items including the term “social media,” not Facebook.

c) In general, I wondered why no control variables were included, at least in the unpreregistered exploratory analyses. Your own studies showed correlations with social media use, age, and gender. Wouldn’t it be useful to check their relation to the constructs of interest as well?

(3) Discussion

a) I’d like to see a more thorough discussion against the backdrop of recent findings (e.g., Liu et al., 2019; Orben et al., 2019; Stavrova & Denissen, 2020). Orben et al. (2019) is already briefly mentioned in some parts of the discussion. You could flesh out these sections.

b) Moreover, “social media” is a very broad term. Thus, self-control failures could refer to many different aspects of using social media features. On the one hand, given the results of Du et al. (2019), it could be interesting to discuss which features are theoretically positively related to well-being (e.g., social gratifications to satisfy belongingness or information gratifications to reduce uncertainty) but less to self-control failure, and vice versa. On the other hand, a more nuanced discussion of the potentially causal relationship between well-being and self-control failure would also help to gain insight into the relevant facets of well-being. The very broad and general Satisfaction with Life Scale (SWLS) is not necessarily directly linked to SMSCF. However, as the SWLS is a cognitive indicator of subjective well-being, it could be that self-concept-relevant aspects are mediating this relationship. This is just a fictitious example. As the theoretical background could be improved, so could the discussion. More speculation about the mechanisms and conditions of the connections between SMSCF, mindfulness, life satisfaction, and vitality is necessary.

c) Your limitation section starts by fast-forwarding to the model constraints. Why are experimental longitudinal designs badly needed? Why is it important to pay attention to actual and not only self-reported media use? Why is it useful to combine media use and experienced loss of self-control in a single measure? etc. These questions deserve more attention.

(4) Open Science

Besides preregistering the present study, the present paper does not address all of the proposed Open Science practices (e.g., Dienlin et al., 2020).

a. Preregistering: You’ve preregistered your study at aspredicted.org. This is great.
While reading the preregistration, however, I thought that it could have been beneficial to split it into two preregistrations, as it has clearly been your plan from the beginning to test and publish the findings concerning your first research goal (“predictors of social media self-control failure”) in a separate paper (Du et al., 2019). Please cite this paper also on p. 8, where you refer to this publication. Moreover, I wonder whether some of those constructs you’ve examined (and found to be associated with SMSCF!) in that paper could be valuable control variables at t1 (e.g., age, online vigilance) for your present analyses. Finally, the wording of the hypotheses is not exactly the same. In the present paper, please explain why you deviate from the preregistration.

b. Open data & material: I encourage you to upload your data and material as online supplementary material, so that both readers and reviewers can rerun analyses or check the operationalizations and implementation of the study in more detail. If it should not be possible to upload the data (e.g., due to data protection laws), the material (e.g., due to copyright infringements), and the analysis scripts, please state the explicit reasons why it is not possible. Until now, the OSF repository only includes the preregistration, the figure, and the tables—all of them are already included in your full paper submission.

c. Please list all the items that were used (either in an appendix or as online supplementary material; for example, on osf.io). In the OSF project of Du et al. (2019), you provide a well-organized codebook of all the relevant variables. This is clearly missing for this submission.

Minor issues:

– On p. 4, lines 56–58, you provide an example for a reinforced downward spiral (Orben et al., 2019). I’m not sure if this example suits your rationale because they found rather small effects and results were contingent on gender (a variable you do not include in your considerations). Moreover, the following sentence (lines 58–60) does not tell the reader why this would “provide a more complete picture.”

– p. 6: The study mentioned in lines 113–115 (Bauer et al., 2017) is a longitudinal diary study, not a cross-sectional one.
References

Bauer, A. A., Loy, L. S., Masur, P. K., & Schneider, F. M. (2017). Mindful instant messaging: Mindfulness and autonomous motivation as predictors of well-being and stress in smartphone communication. Journal of Media Psychology, 29(3), 159–165. https://doi.org/10.1027/1864-1105/a000225

Brown, T. A. (2015). Confirmatory factor analysis for applied research (2nd ed.). Methodology in the social sciences. Guilford Press.

Dienlin, T., Johannes, N., Bowman, N. D., Masur, P. K., Engesser, S., Kümpel, A. S., Lukito, J., Bier, L. M., Zhang, R., Johnson, B. K., Huskey, R., Schneider, F. M., Breuer, J., Parry, D. A., Vermeulen, I., Fisher, J. T., Banks, J., Weber, R., Ellis, D. A., . . . de Vreese, C. (2020). An agenda for open science in communication. Journal of Communication, Article jqz052, 1–26. https://doi.org/10.1093/joc/jqz052

Du, J., Kerkhof, P., & van Koningsbruggen, G. M. (2019). Predictors of social media self-control failure: Immediate gratifications, habitual checking, ubiquity, and notifications. Cyberpsychology, Behavior and Social Networking, 22(7), 477–485. https://doi.org/10.1089/cyber.2018.0730

Du, J., van Koningsbruggen, G. M., & Kerkhof, P. (2018). A brief measure of social media self-control failure. Computers in Human Behavior, 84, 68–75. https://doi.org/10.1016/j.chb.2018.02.002

Geiser, C., Keller, B., Lockhart, G., Eid, M., Cole, D., & Koch, T. (2015). Distinguishing state variability from trait change in longitudinal data: The role of measurement (non)invariance in latent state-trait analyses. Behavior Research Methods, 47(1), 172–203. https://doi.org/10.3758/s13428-014-0457-z

Liu, D., Baumeister, R. F., Yang, C., & Hu, B. (2019). Digital communication media use and psychological well-being: A meta-analysis. Journal of Computer-Mediated Communication, 24(5), 259–273. https://doi.org/10.1093/jcmc/zmz013

Orben, A., Dienlin, T., & Przybylski, A. K. (2019). Social media's enduring effect on adolescent life satisfaction. Proceedings of the National Academy of Sciences of the United States of America, 116(21), 10226–10228. https://doi.org/10.1073/pnas.1902058116

Slater, M. D. (2007). Reinforcing spirals: The mutual influence of media selectivity and media effects and their impact on individual behavior and social identity. Communication Theory, 17(3), 281–303.

Slater, M. D. (2015). Reinforcing spirals model: Conceptualizing the relationship between media content exposure and the development and maintenance of attitudes. Media Psychology, 18(3), 370–395. https://doi.org/10.1080/15213269.2014.897236

Stavrova, O., & Denissen, J. (2020). Does using social media jeopardize well-being? The importance of separating within- from between-person effects. Social Psychological and Personality Science. Advance online publication. https://doi.org/10.1177/1948550620944304

Reviewer #2: Dear authors, I read your manuscript with great joy. As you know, in the literature on social media, self-control, and well-being there is a profound lack of longitudinal studies that differentiate between- and within-person effects. Also, much of the literature is opaque on data, not up-to-date with regard to Open Science practices, and often suffers from claims that simply aren’t backed by data. You make an important contribution to that literature by addressing all of these gaps. You a) preregistered your research, b) separate within- and between-person effects, c) intend to share your data and materials, and d) are modest in your conclusions.
Also, your manuscript was clearly written and a pleasure to read. My compliments on your work.

During my review, I carefully examined:
• All parts of the paper
• The preregistration on the OSF

I did not:
• Review the supplementary materials carefully (lack of time)
• Review the analysis (because you didn’t provide data or code)

I will list my thoughts, concerns, and suggestions for improvement chronologically.

Theory Section

First, the theoretical rationale wasn’t always clear to me. You define social media self-control failure and mindfulness, but it’s not clear what psychological mechanism links those two. Self-control in psychology is more and more conceived of as a motivational conflict: people constantly survey the value of options and contrast them with the value of their current activity/goal (Berkman et al., 2017; Kurzban et al., 2013). It’s not clear how social media self-control failure fits here. By definition, it’s a self-control process, but the mechanism behind it doesn’t become clear (yet) from the paper. You say that social media present a temptation and also speak of goal-conflict. But it’s not clear a) where that value comes from, b) in what circumstances value conflicts with another goal, c) under what circumstances people give in to that temptation. You state that this presents a “potential impairment of a psychological function that underlies intended and sustained attention”. But before you can turn to attention, it’s important to inform the reader what role attention plays in the self-control process. For example, we know that high-value options attract attention (e.g., Anderson et al., 2011). As far as I know, there’s no consensus on the role of attention within self-regulation (Hofmann & Van Dillen, 2012; Inzlicht & Berkman, 2015). It appears you use attention as an outcome of the self-regulation process: there’s a high-value option available that conflicts with my goal, so I turn my attention there. But I’m missing the explicit theoretical link here.

Also, you use attention as the link between self-control and mindfulness. In fact, you state: “Thus, mindfulness is a crucial psychological function for self-control in everyday life”. For one, this is a strong statement and I’d like to hear how mindfulness is such a central part of self-control. Conceptually, mindfulness isn’t just about self-control. As you’re aware, mindfulness describes more than just the awareness/attention component, but also evaluation etc. I’d invite the authors to elaborate here on which models of self-regulation and mindfulness they refer to. It’s unclear to me how mindfulness fits a self-regulation process other than potentially sharing a common, latent trait of attentional focus. If that’s not clear to readers, it’s not clear why you measured all facets of mindfulness and not just the attentional component.

Second, most of your discussion of empirical evidence refers to concepts, not the theoretical mechanism. For example, page 6, line 102: social media use predicted burnout for those low in mindfulness. This study shares variables with yours, but I’m not sure how it supports the mechanism you propose. Mindfulness in the study is a moderator, but you use that study as support for your claim that social media self-control failure affects mindfulness directly. Another example, same page, line 108: you cite evidence that multitasking and attention were reciprocally related.
I don’t see how this informs your study, unless you explain how multitasking stands in relation to social media self-control failure and how attention stands in relation to mindfulness (as a whole concept, not just the attention part). I kindly ask the authors to elaborate on their theoretical model and show how previous evidence relates to that model more clearly. Right now, most of the studies you cite just support that the concepts you study are related, but not how or why.

Third, I ask you to consider removing all mentions of addiction from the manuscript or at least acknowledge that the concept is highly problematic and we should see it with skepticism. The literature is pretty clear that social media/internet/smartphone addiction simply is a mess of a concept/field (Aagaard, 2020; Abendroth et al., 2020; Panova & Carbonell, 2018; Satchell et al., 2020). It isn’t a clinically accepted diagnosis, so I strongly believe we should stop using it altogether until we have better evidence. I know that this is a highly personal suggestion, so feel free to argue your stance in the revision letter. But I’d at least like to see an acknowledgment in the manuscript that studies use addiction, which is not a thing, but you consider it as self-control failure for reasons X and Y.

Fourth, your introduction is overall rather negative. I think it would make the paper more balanced and present a better overview of the literature for readers if you also outline the positive sides of social media (Antheunis et al., 2015; Bayer et al., 2015; Domahidi, 2018; Meier et al., in press). I think that would fit a discussion of the high value of social media because benefits and enjoyment from social media can potentially explain the high value that’s responsible for self-control failure.

Method & Results

First, I was disappointed to see that you didn’t share your code and data. This is sort of a big deal to me: I only review papers if they share data and materials. I accepted the invitation to review because I was under the impression that this would be the case. I don’t understand why you wouldn’t share. The journal reveals your names anyway and you can share data and code anonymously, like you did with the SI on the OSF. I was therefore not able to review the analysis.

Second, it would’ve been good to see a power analysis. In your preregistration, you say you will collect 600 participants for the first wave. Why this number? The references you provide there don’t tell me how you arrived at that number. What was the smallest effect size of interest you aimed to detect (Lakens et al., 2018)? Without that, it’s hard to tell whether you’re overpowered, underpowered, or properly powered. Alternatively, I’m also happy with simply acknowledging that it’s hard to calculate power/sensitivity and you merely collected as much data as possible (Albers & Lakens, 2018; http://daniellakens.blogspot.com/2020/08/feasibility-sample-size-justification.html).

Third, I was surprised to see how quickly participants filled in the survey. Did they really take less than four minutes for the final survey? That strikes me as low given that you presented them with several scales. Did you inspect the distributions of response time and check for rushing/straightlining of survey responses? In my experience, 5% of a sample is potentially poor-quality data. (If that’s in the SI, I must’ve overlooked it and apologize. Then I’d ask the authors to include this in a footnote.)
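[For concreteness, the screening the reviewer describes takes only a few lines of R. The following is a minimal sketch, not the authors’ code: the data frame "survey", the duration variable "duration_t3" (in seconds), and the item prefix "smscf" are hypothetical stand-ins.]

# Minimal data-quality screening sketch; all variable names are hypothetical.
library(dplyr)

summary(survey$duration_t3)            # completion times for the final wave
hist(survey$duration_t3, breaks = 50)  # inspect the distribution for rushing

items <- select(survey, starts_with("smscf"))    # items of one scale
survey$straightline <- apply(items, 1, sd) == 0  # zero variance across items
survey$too_fast <- survey$duration_t3 < 120      # e.g., faster than two minutes

mean(survey$too_fast | survey$straightline)      # share of potentially poor-quality rows

[Flagged cases could then be reported in a footnote or excluded in a sensitivity analysis, as the reviewer suggests.]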
Fourth, the exclusion criterion 3 resulted in three exclusions, but also wasn’t preregistered. Please make that clear in the paper.

Fifth, the model fitting procedure isn’t completely clear to me. You rely on SEM, yet you aggregate all scales into mean indices. I’d argue for explicitly modelling measurement error (so a fully latent SEM) rather than pretending it’s not there (aggregating variables). If you have good reasons why you didn’t go fully latent, I’m open to hearing them in your response letter. (Full disclosure: I’ve also aggregated for path models before, but have come to believe that that’s problematic.) [A minimal lavaan sketch of the model class at issue follows this review round.]

Sixth, your preregistration is rather vague on the analyses you’ll run. I’d acknowledge that in the manuscript. Overall, you’re extremely transparent (again: my compliments), and you ran several models to make sure you’re not cherry-picking your results (and again: that’s very impressive). As you discuss constraints vs. no constraints, I’d add that you do this because you didn’t restrict your researcher degrees of freedom enough in the preregistration. I think that’d be fully transparent and the current gold standard. Well, more like future gold standard, but why not start now.

Seventh, to continue the trend of transparency: you also collected other variables. I think it deserves a paragraph (or footnote) explaining what those variables are, with a link to the other preregistration, to show readers that you indeed were working on separate research questions.

Eighth, I think your variables are conceptually related enough to adjust your alpha to control for the family-wise error rate. You run three models times four for different constraints. I’m fine with not adjusting your alpha within the comparison of the four different constraint models. But between the three models, I’d argue that the outcomes are related enough that you want to guard yourself against possible false positives. Some paths will be nonsignificant after such a correction, but results shouldn’t matter for publication; the method does, and your method is strong.

Ninth, you interpret nonsignificant effects as null effects, against which I advise caution. Absence of evidence isn’t evidence of absence, meaning that just because a path isn’t significant, you cannot conclude that it doesn’t matter/that the effect isn’t there (Greenland et al., 2016). So I’d recommend either changing statements of the absence of effects to “nonsignificant” or relying on Bayesian approaches, which can provide evidence for the null (Dienes, 2019).

Tenth, I’d like to see at least a discussion of why you chose a four-month lag. You talk about the lag in the discussion, but I’d like to hear your reasoning from when you planned the study.

Discussion

First, the discussion often doesn’t go beyond mere description and summary. This point relates back to my thoughts on the introduction: without a clear theoretical rationale in the introduction, it’s not clear what the results mean or don’t mean for self-control/mindfulness. In the discussion, you refer to Slater’s dynamic model, but without details on the model, readers won’t know how to interpret your findings in support or lack of support of that model.

Second, your falsification criteria aren’t clear. You hypothesized reciprocal effects. So under what circumstances would you say the data fully support your hypothesis, partially support it, or don’t support it? Would a reciprocal effect across two waves be enough for full support?
At what point would you consider the hypothesis falsified; do all cross-lagged paths need to be nonsignificant, or the majority? You’re extremely careful in the discussion, so this isn’t a lot of work to fix, merely a sentence or two more on what you would’ve considered convincing evidence.

Third, you only talk about possible positive effects of social media use in the discussion, after your predictions didn’t pan out. At that point, they come a bit out of the blue, so I’d kindly ask you to discuss possible positive effects in the intro (see comment on intro).

Fourth, there are some inconsistencies in your recommendations. On line 396 you say that mindfulness training might be beneficial, but the relationship was partly reciprocal, so strengthening self-control could be just as beneficial. You acknowledge that yourself in the last sentence of the discussion.

Fifth, I disagree with you on the role of mindfulness. On line 438 you imply that mindfulness might be a mediator, but it’s not clear why this should be the case. Just because social media self-control failure predicts mindfulness and mindfulness predicts well-being at some point during the eight-month window doesn’t mean mindfulness is a mediator. You didn’t formally test mediation, and there could be any number of third variables or reverse causality at work. Maybe I misunderstood the argument, so I’d ask the authors to clarify – especially because the rest of the manuscript is so careful in its conclusions.

Other

Please provide more information on the OSF when you upload the data. I kindly ask for codebooks to make it possible for readers to reproduce your analysis. In my view, computational reproducibility is a minimum requirement (Goodman et al., 2016; Hardwicke et al., 2020). In addition, R packages will certainly change in the future, which might break your code. Therefore, please at least include the output of sessionInfo() in the R script (Peikert & Brandmaier, 2019).

Random thoughts

• Page 6: The dynamic model of media effects is quite prominent, but you don’t explain it. Not all readers will be familiar with it.
• Page 7, line 138: The idea that a lot of time online will decrease offline circles has a lot of evidence against it (Antheunis et al., 2015; Dienlin et al., 2017; Przybylski & Weinstein, 2017).
• You might want to consider using omega instead of alpha (Hayes & Coutts, 2020).
• I don’t understand what “reciprocal effects might be prospectively observed” (line 390) means.

Conclusion

Again, I want to express to the authors that I enjoyed reading your manuscript. My compliments on your work. I wrote a lot, but that’s because I think you present strong work and as a reviewer I’d like to help make your manuscript even stronger. I think your paper could go to print as is and would be a valuable contribution to the field, but a revision can make it a central piece of work in the discourse on self-control and social media. I look forward to seeing it in print after a revision. If you believe some (or all) of my concerns are a result of a misunderstanding from my side, if you have any questions regarding my review, or you disagree with any of the points I raised, please feel free to contact me. I did not make confidential remarks to the editor. I hope my suggestions were of help and I look forward to reading more of your work in the future. Until then I wish you all the best.

I always sign my reviews,
Niklas Johannes
References

Aagaard, J. (2020). Beyond the rhetoric of tech addiction: Why we should be discussing tech habits instead (and how). Phenomenology and the Cognitive Sciences. https://doi.org/10.1007/s11097-020-09669-z

Abendroth, A., Parry, D. A., le Roux, D. B., & Gundlach, J. (2020). An analysis of problematic media use and technology use addiction scales – What are they actually assessing? In M. Hattingh, M. Matthee, H. Smuts, I. Pappas, Y. K. Dwivedi, & M. Mäntymäki (Eds.), Responsible design, implementation and use of information and communication technology (pp. 211–222). Springer International Publishing. https://doi.org/10.1007/978-3-030-45002-1_18

Albers, C. J., & Lakens, D. (2018). When power analyses based on pilot data are biased: Inaccurate effect size estimators and follow-up bias. Journal of Experimental Social Psychology, 74, 187–195. https://doi.org/10.17605/OSF.IO/B7Z4Q

Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108(25), 10367–10371. https://doi.org/10.1073/pnas.1104047108

Antheunis, M. L., Vanden Abeele, M. M. P., & Kanters, S. (2015). The impact of Facebook use on micro-level social capital: A synthesis. Societies, 5(2), 399–419. https://doi.org/10.3390/soc5020399

Bayer, J. B., Campbell, S. W., & Ling, R. (2015). Connection cues: Activating the norms and habits of social connectedness. Communication Theory, 26(2), 128–149. https://doi.org/10.1111/comt.12090

Berkman, E. T., Hutcherson, C. A., Livingston, J. L., Kahn, L. E., & Inzlicht, M. (2017). Self-control as value-based choice. Current Directions in Psychological Science, 26(5), 422–428.

Dienes, Z. (2019). How do I know what my theory predicts? Advances in Methods and Practices in Psychological Science, 2515245919876960. https://doi.org/10.1177/2515245919876960

Dienlin, T., Masur, P. K., & Trepte, S. (2017). Reinforcement or displacement? The reciprocity of FTF, IM, and SNS communication and their effects on loneliness and life satisfaction. Journal of Computer-Mediated Communication, 22(2), 71–87. https://doi.org/10.1111/jcc4.12183

Domahidi, E. (2018). The associations between online media use and users’ perceived social resources: A meta-analysis. Journal of Computer-Mediated Communication, 23(4), 181–200. https://doi.org/10.1093/jcmc/zmy007

Goodman, S. N., Fanelli, D., & Ioannidis, J. P. A. (2016). What does research reproducibility mean? Science Translational Medicine, 8(341), 341ps12. https://doi.org/10.1126/scitranslmed.aaf5027

Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N., & Altman, D. G. (2016). Statistical tests, P values, confidence intervals, and power: A guide to misinterpretations. European Journal of Epidemiology, 31(4), 337–350. https://doi.org/10.1007/s10654-016-0149-3

Hardwicke, T. E., Bohn, M., MacDonald, K., Hembacher, E., Nuijten, M. B., Peloquin, B., deMayo, B., Long, B., Yoon, E. J., & Frank, M. C. (2020). Analytic reproducibility in articles receiving open data badges at Psychological Science: An observational study. https://doi.org/10.31222/osf.io/h35wt

Hayes, A. F., & Coutts, J. J. (2020). Use omega rather than Cronbach’s alpha for estimating reliability. But… Communication Methods and Measures, 14(1), 1–24. https://doi.org/10.1080/19312458.2020.1718629

Hofmann, W., & Van Dillen, L. (2012). Desire: The new hot spot in self-control research. Current Directions in Psychological Science, 21(5), 317–322. https://doi.org/10.1177/0963721412453587

Inzlicht, M., & Berkman, E. (2015). Six questions for the resource model of control (and some answers). Social and Personality Psychology Compass, 9(10), 511–524. https://doi.org/10.1111/spc3.12200

Kurzban, R., Duckworth, A., Kable, J. W., & Myers, J. (2013). An opportunity cost model of subjective effort and task performance. Behavioral and Brain Sciences, 36(6), 661–679. https://doi.org/10.1017/S0140525X12003196

Lakens, D., Scheel, A. M., & Isager, P. M. (2018). Equivalence testing for psychological research: A tutorial. Advances in Methods and Practices in Psychological Science, 1(2), 259–269.

Meier, A., Domahidi, E., & Günter, E. (in press). Computer-mediated communication and mental health: A computational scoping review of an interdisciplinary field. In S. Yates & R. E. Rice (Eds.), The Oxford handbook of digital technology and society. Oxford University Press.

Panova, T., & Carbonell, X. (2018). Is smartphone addiction really an addiction? Journal of Behavioral Addictions, 7(2), 252–259. https://doi.org/10.1556/2006.7.2018.49

Peikert, A., & Brandmaier, A. M. (2019). A reproducible data analysis workflow with R Markdown, Git, Make, and Docker [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/8xzqy

Przybylski, A. K., & Weinstein, N. (2017). A large-scale test of the Goldilocks hypothesis: Quantifying the relations between digital-screen use and the mental well-being of adolescents. Psychological Science, 1–12. https://doi.org/10.1177/0956797616678438

Satchell, L., Fido, D., Harper, C. A., Shaw, H., Davidson, B. I., Ellis, D. A., Hart, C. M., Jalil, R., Bartoli, A. J., Kaye, L. K., Lancaster, G., & Pavetich, M. (2020). Development of an Offline-Friend Addiction Questionnaire (O-FAQ): Are most people really social addicts? [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/7x85m

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: Yes: Niklas Johannes

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
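[Both reviews revolve around the random intercept cross-lagged panel model (RI-CLPM), so a minimal sketch may help readers follow the discussion. The lavaan syntax below specifies a bivariate, three-wave RI-CLPM in the style popularized by Hamaker and colleagues. It is not the authors’ code: x1–x3 and y1–y3 are placeholder mean scores (e.g., SMSCF and mindfulness) in a hypothetical data frame "dat", and the actual study models three constructs with additional constraints.]

library(lavaan)

# Bivariate three-wave RI-CLPM sketch; x1-x3 and y1-y3 are observed mean scores.
riclpm <- '
  # Between-person part: random intercepts capture stable, trait-like differences
  RIx =~ 1*x1 + 1*x2 + 1*x3
  RIy =~ 1*y1 + 1*y2 + 1*y3

  # Within-person part: latent, within-person centered variables per wave
  wx1 =~ 1*x1
  wx2 =~ 1*x2
  wx3 =~ 1*x3
  wy1 =~ 1*y1
  wy2 =~ 1*y2
  wy3 =~ 1*y3

  # Autoregressive and cross-lagged effects at the within-person level
  wx2 ~ wx1 + wy1
  wy2 ~ wx1 + wy1
  wx3 ~ wx2 + wy2
  wy3 ~ wx2 + wy2

  # Wave-1 covariance and correlated residuals within later waves
  wx1 ~~ wy1
  wx2 ~~ wy2
  wx3 ~~ wy3

  # Variances and covariance of the random intercepts
  RIx ~~ RIx
  RIy ~~ RIy
  RIx ~~ RIy

  # (Residual) variances of the within-person components
  wx1 ~~ wx1
  wy1 ~~ wy1
  wx2 ~~ wx2
  wy2 ~~ wy2
  wx3 ~~ wx3
  wy3 ~~ wy3
'

# lavaan() rather than sem(), so nothing is freed automatically: the observed
# residual variances stay fixed at zero and the wx/wy factors absorb all variance.
fit <- lavaan(riclpm, data = dat, missing = "ML",
              meanstructure = TRUE, int.ov.free = TRUE)
summary(fit, fit.measures = TRUE, standardized = TRUE)

[The multiple-indicator variant Reviewer 1 later mentions would replace each observed score with a latent variable measured by its scale items, which is how measurement error would be modeled explicitly.]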
| Revision 1 |
PONE-D-20-21281R1
The reciprocal relationships between social media self-control failure, mindfulness and wellbeing: A longitudinal study
PLOS ONE

Dear Dr. Du,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Both reviewers are impressed with the way that you have handled their suggestions, and with the work that you have put into the revision and your explanation of the revisions. However, there are some remaining minor comments and suggestions that both the reviewers and I believe are important to address. The reviewers have again provided some thoughtful and detailed comments outlining these areas of improvement. For these reasons I am inviting a revision that addresses their comments.

Please submit your revised manuscript by Jun 07 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Fuschia M. Sirois, PhD
Academic Editor
PLOS ONE

Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.
Reviewer #1: (No Response)
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I already liked the first version of the manuscript and I think you did an outstanding job on revising the manuscript, carrying out additional helpful analyses, and crafting the response letter in a generally convincing and well-organized way. I was convinced by most of your answers. Nevertheless, some minor clarifications would help further improve the paper.

1. Regarding my previous Point 2, thanks for uploading the codebook, the data, and the R code. I ran the code and it generally worked well (but see 1a and 1b for necessary modifications).

a) To foster reproducibility, it would help to include some instructions or lines of code on how to load the data before running the code, for instance, in a readme file or, even better, directly within the code file.

b) Please insert that the tidyverse (or a similar) package needs to be loaded. Otherwise, the %>% cannot be interpreted properly.

c) The sessionInfo() only makes sense if its output is included in a static file (e.g., an HTML output of an RMarkdown document). Only then will it include the necessary info to reproduce the specific environment of your analyses. Otherwise, it will just produce the info of the session the reproducer ran. [A minimal script-header sketch addressing points a–c follows this review round.]

d) To view your analyses and results, a static output file (e.g., an HTML output of an RMarkdown document) may be of great benefit to the readers as well, because they don’t have to carry out the cumbersome bootstrapping procedures for all the models but can just look up what you’ve done and what the output looks like.

2. Ad 2b), it’s great that you’ve conducted additional analyses including control variables and integrated the new findings. Here, three minor issues arise:

a) You should cite your own findings as a reason why you’ve included exactly these control variables (Du et al., 2019).

b) If I’m not mistaken, Mulder and Hamaker (2020) is not yet listed in the references of your paper.

c) Compared to the paper’s tables, the S2 Appendix is a bit awkward to read. I recommend a more straightforward depiction of the relevant findings to make it easier for the readers to compare the paths across models.

3. The only point where I disagree with you concerns 2a and Reviewer 2’s fifth point.

a) I don’t think that referring to “common practice” is a sound argument for not moving into a latent variables framework. On the same basis, you could have just carried out a traditional cross-lagged model. You would have found even more studies doing so, but that does not imply that this is the best or most appropriate way to do it.

b) I also don’t buy that there’s little practice in moving to a latent framework. You’ve already drawn on Mulder and Hamaker (2020) to include the control variables. They also wrote a section on the multiple-indicator RI-CLPM.
Ironically, the “common” RI-CLPM is a special case of the more general multiple-indicator RI-CLPM, and both are closely connected to latent growth curves with structured residuals (Thomas et al., 2021; Usami et al., 2019; Zyphur, Allison, et al., 2019; Zyphur, Voelkle, et al., 2019).

c) Similar to Reviewer 2, please don’t get me wrong: I don’t urge you to move to a latent variables framework if you have good reasons not to do so. From my perspective, however, the reasons you’ve mentioned thus far are not enough to convince me. Moreover, the CFAs reported in S3 rather obscure than illuminate the factorial validity of the constructs. For instance, for three-item single-factor models you have to impose constraints to make them identifiable. I recommend running multi-construct latent state–trait models (Eid et al., 1994; Geiser, 2020; Steyer et al., 2015) or, at least, CFAs including all latent constructs of interest at the same measurement occasion to examine factorial structure and potential cross-loadings.

References

Du, J., Kerkhof, P., & van Koningsbruggen, G. M. (2019). Predictors of social media self-control failure: Immediate gratifications, habitual checking, ubiquity, and notifications. Cyberpsychology, Behavior and Social Networking, 22(7), 477–485. https://doi.org/10.1089/cyber.2018.0730

Eid, M., Notz, P., Steyer, R., & Schwenkmezger, P. (1994). Validating scales for the assessment of mood level and variability by latent state-trait analyses. Personality and Individual Differences, 16(1), 63–76. https://doi.org/10.1016/0191-8869(94)90111-2

Geiser, C. (2020). Longitudinal structural equation modeling with Mplus: A latent state–trait perspective. Guilford Press.

Mulder, J. D., & Hamaker, E. L. (2020). Three extensions of the random intercept cross-lagged panel model. Structural Equation Modeling: A Multidisciplinary Journal. Advance online publication. https://doi.org/10.1080/10705511.2020.1784738

Steyer, R., Mayer, A., Geiser, C., & Cole, D. A. (2015). A theory of states and traits—Revised. Annual Review of Clinical Psychology, 11, 71–98. https://doi.org/10.1146/annurev-clinpsy-032813-153719

Thomas, F., Shehata, A., Otto, L. P., Möller, J., & Prestele, E. (2021). How to capture reciprocal communication dynamics: Comparing longitudinal statistical approaches in order to analyze within- and between-person effects. Journal of Communication. Advance online publication. https://doi.org/10.1093/joc/jqab003

Usami, S., Murayama, K., & Hamaker, E. L. (2019). A unified framework of longitudinal models to examine reciprocal relations. Psychological Methods, 24(5), 637–657. https://doi.org/10.1037/met0000210

Zyphur, M. J., Allison, P. D., Tay, L., Voelkle, M. C., Preacher, K. J., Zhang, Z., Hamaker, E. L., Shamsollahi, A., Pierides, D. C., Koval, P., & Diener, E. (2019). From data to causes I: Building a general cross-lagged panel model (GCLM). Organizational Research Methods, 1094428119847278. https://doi.org/10.1177/1094428119847278

Zyphur, M. J., Voelkle, M. C., Tay, L., Allison, P. D., Preacher, K. J., Zhang, Z., Hamaker, E. L., Shamsollahi, A., Pierides, D. C., Koval, P., & Diener, E. (2019). From data to causes II: Comparing approaches to panel data analysis. Organizational Research Methods, 1094428119847280. https://doi.org/10.1177/1094428119847280

Reviewer #2: Dear authors,

Thank you for the opportunity to read this manuscript again. I liked the original submission and am impressed by the work you invested into the revision.
The paper has become even more coherent and you addressed all concerns that I had (and, from my reading, the other reviewer had). My compliments. I think your work will make a valuable contribution to the literature and I look forward to seeing it in print. Below are just a couple of random thoughts that I DON’T want you to address in a second round, but wanted to share anyway:

• “Empirical evidence confirmed” → I’d prefer “support”.
• One more sentence on the possible mechanism self-control failure → lower mindfulness would’ve been helpful. Something like: “Repeatedly failing to respond to affective cues, thereby experiencing self-control failure, might further decrease people’s sensitivity to those cues over time. In other words, self-control failure → lower mindfulness”. Just a suggestion for future papers, no need to implement here. Actually, you have a similar sentence in the discussion.
• Really strong justification for the distinction of trait vs. state.
• Exemplary transparency on reporting the first study based on these data.
• I’m not convinced bootstrapping solves the family-wise error problem. But I’m not a statistician, so I need to read up on this.
• Again: Exemplary treatment of the falsification criteria.
• I’m personally skeptical of mindfulness interventions (see https://doi.org/10.1177/1745691617709589), but I’m not sure that criticism affects your conclusions about mindfulness treatments in the discussion.

A couple of thoughts (independent of the content of the paper) on reproducibility, which can be addressed within half an hour:

• I tried reproducing your analysis: There’s no “DurationT1” variable in the data set.
• You use dplyr pipes, but don’t load any tidyverse packages.
• It would be ideal if you tried running the entire script from beginning to end on a new machine to check whether results can be reproduced.
• The output of sessionInfo() as a text file should ideally be added to the project for reproducibility.
• I had to request access to the OSF project – can you remove me again as a collaborator?

One more time: My compliments and congratulations on your work!

Niklas Johannes

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: Yes: Frank M. Schneider
Reviewer #2: Yes: Niklas Johannes

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org.
Please note that Supporting Information files do not need this step.
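[The reproducibility points both reviewers raise (load the data and the tidyverse before any %>% pipeline, make the script run end to end, and freeze the sessionInfo() output in a static file) boil down to a short script header. A minimal sketch, with hypothetical file names:]

# Hypothetical file names; the aim is a script that runs top to bottom on a
# fresh machine with no manual steps.
library(tidyverse)  # provides dplyr and the %>% pipe used throughout
library(lavaan)

dat <- read_csv("data/panel_data.csv")  # load the data before any pipeline

# ... analyses ...

# Freeze the package environment in a static file shipped with the code
writeLines(capture.output(sessionInfo()), "session_info.txt")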
| Revision 2 |
The reciprocal relationships between social media self-control failure, mindfulness and wellbeing: A longitudinal study
PONE-D-20-21281R2

Dear Dr. Du,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Fuschia M. Sirois, PhD
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
| Formally Accepted |
PONE-D-20-21281R2
The reciprocal relationships between social media self-control failure, mindfulness and wellbeing: A longitudinal study

Dear Dr. Du:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Fuschia M. Sirois
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.