Peer Review History
| Original Submission (February 2, 2021) |
|---|
|
PONE-D-21-03616
Measuring website interaction: A comparison of two industry standard analytic approaches using 86 websites
PLOS ONE

Dear Dr. Jansen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The reviewers agree that there is clear merit in this work, with some positive statements to this effect. However, some reviewer comments are requests for clarification, and these need to be addressed. In particular, reviewers have commented on the metrics used, their appropriateness, and how they are used and interpreted. There are also numerous comments about the statistical analysis that require a response, clarification, or an update in the article. Reviewers also converge on requesting more reflection and discussion of the results and implications of the work.

Please submit your revised manuscript by Feb 26 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Hussein Suleman, PhD
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. Please note that in order to use the direct billing option the corresponding author must be affiliated with the chosen institute. Please either amend your manuscript to change the affiliation or corresponding author, or email us at plosone@plos.org with a request to remove this option.

3. We note that Figures 1 and 2 in your submission contain copyrighted images.
All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright. We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission: a. You may seek permission from the original copyright holder of Figures 1 and 2 to publish the content specifically under the CC BY 4.0 license. We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text: “I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.” Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission. In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].” b. 
If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file).
The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: General comment: The authors have made a meritorious effort and effectively compare the values produced by two analytics platforms. The literature review is closely related to the research problem, and an important effort has been made to address the practical contribution of the paper to other researchers and practitioners. However, there are some major issues. First, the selection of the bounce rate metric is not thoroughly aligned with the meaning of duration (see further justification in comments 8-9 below). Second, there is some difficulty in understanding the statistical tests that were selected and what was finally presented in the results (see comments 11-15).
Lastly, both theoretical and practical contributions unfold in an organized and logical way, but it would be very useful to add even more practical contributions from a competitive intelligence point of view. What does this manuscript offer compared to prior research on Web Analytics validity and competitive intelligence strategy? And how do up-to-date theoretical approaches benefit from this paper? Once more, well done for your effort, and I hope the forthcoming suggestions/comments will help you optimize the value of this paper.

1. Line 32. We need to be more explicit here. What other uses are available based on citation 2?

2. Table 1, Lines 76-77. In the third column, Ahrefs is more a backlink-checking tool than a behavioural analytics platform. Better not to include it; refer instead to a tool more relevant to web behavioural analytics than to off-site optimization and backlink building.

3. Lines 81-83. The same is true of SEMrush. More specifically, SEMrush provides explicit daily statistics for competitors through graphs, figures, etc., and it adopts the triangulation perspective just like SimilarWeb. So why choose SimilarWeb over the others? In a general sense, the justification of the paper would be reinforced by a clear paragraph or table stating the reasons SimilarWeb was chosen over the alternatives.

4. Line 101. OK, this is good! But for what reason? What does generalizing the results to a wide range of analytics technologies give the practical and research community? For example, a greater competitive intelligence strategy? Better WA platform design and capabilities?

5. Line 102. Hmm, there are other metrics within these platforms, mostly user-centric ones (average video duration, average videos watched in a channel, different types of engagement with a post, follower/subscriber gains, and so on). Hence, the results of this study cannot have an impact on several other domains, only on web analytics platforms that estimate website traffic. Better not to include this assumption.

6. Lines 122-123. This seems to be a rather general sentence about their findings. What do these correlations specifically depict? I suppose the purpose here is to present prior works that compare web traffic platforms and find differences and fluctuations among them, not to compare web traffic statistics with organizational performance. So this needs to be more explicit.

7. Lines 133-134. Please reform this sentence. Personally, I believe it is a little arrogant and does not express academic ethos. It reads like: "Scheitle and colleagues, you don't have money, but we have money, and we can do this research and you cannot." It is probably true, but better to rephrase it.

8. Line 179. Frequency relates to total visits over a determined time range, and Reach relates to unique visitors, but Duration relates mostly to visit duration and pages per visit as metrics. Bounce rate expresses immediate abandonment of a website without any kind of interaction with the content, which means zero duration. We can assume that bounce rate relates mostly to content usability and to how well users' search terms match the website content they retrieved from search engine results: if search term and content are poorly aligned, bounce rate is high, and vice versa; likewise, poor usability increases bounce rate.
So better to replace duration with something more specific that aligns better with bounce rate. In a general sense, the inclusion of the bounce rate metric under the meaning of measuring duration is one of the main issues within the paper. The metric itself is somewhat vexed, as you point out in your argumentation with several related references. In continuation of this comment, I try to help further with one more comment on bounce rate in Table 2.

9. Table 2, Column 3. You mention that “A bounced visit is the act of a person immediately leaving a website before any interaction can reasonably occur.” This conflicts with the point below it, “measure of duration.” Bounce rate is not a measure of duration: if there is no interaction, there is no duration. And as you quote from Google within the table: “Bounce rate is single-page sessions divided by all sessions, or the percentage of all sessions on your site in which users viewed only a single page and triggered only a single request to the Analytics server. These single-page sessions have a session duration of 0 seconds since there are no subsequent hits after the first one that would let Analytics calculate the length of the session.” (See https://support.google.com/analytics/answer/1009409.) Therefore, I am afraid we cannot use bounce rate within the whole paper. And I do not understand why pages per session or time spent are not used as metrics for measuring duration; these also measure the depth of exploration.

10. Line 238. Reading citation 107 and the paper itself from the ACM, it is a little fuzzy how large-scale machine learning on a social medium such as Twitter relates to SimilarWeb's standard methods, as SimilarWeb is mostly a website traffic intelligence tool and not a social media competitive intelligence platform.

11. Lines 244-245. How confident are we that this linking process extracts the analytics from Google Analytics without deviations from the original source, namely the GA platform? So far, we know that the GA data provided within SimilarWeb differ from SimilarWeb's own data; very good. But are we sure that the GA data within SimilarWeb are the same as the original data extracted from the GA platform for the examined websites? In other words, was a preliminary comparison made, over the same time period, between the Google Analytics data extracted from the two platforms, i.e., original GA versus GA data as included within SimilarWeb? Or could the admins of these Google Analytics accounts confirm, even for a small sample of the websites (5 or 10 of the 86), that the Google Analytics data provided by the GA platform are the same as the final Google Analytics data provided by SimilarWeb? This would greatly strengthen the trustworthiness of the research sample and the validity of the methodology.

12. Lines 255-256. We do not agree with this assumption. Who says the rule of thumb is about 30 websites, and not 31 or 29, for descriptives? Better to reinforce it with a citation here, either from a statistical perspective paper (such as citation 109, which you have already used) or from prior web analytics platform comparisons and their samples relative to this paper's. As it stands, this is an opinion rather than a documented justification.

13. Lines 267-274.
1) Based on the literature, a normal distribution is needed to execute a paired t-test, and based on this paragraph, the initial dataset is not normally distributed. Indeed, after downloading the file from the Supporting Information, we find very high skewness values in the items and a low Shapiro-Wilk value from the normality test. Therefore, we first need a test to establish that the data are not normally distributed (this can also be shown via the Wilcoxon signed-rank test, the Mann-Whitney U test, or the Kruskal-Wallis test). 2) After establishing non-normality, the Box-Cox transformation is applied, as you did. You then argue that the distribution is normal even though a bit of skewness remains, but what is the normality value of the variables after the transformation? This is missing. A second normality test must be run to show that, after the transformation, the data have the normality required to conduct a paired t-test. 3) How were the data shaped by the Box-Cox transformation? What values existed before, and what are they after the transformation? It would be useful to provide a small sample (4-5 websites across the three variables) in a table showing the dataset before and after the Box-Cox transformation. 4) Thereafter, the adoption of the paired t-test is further reinforced by the citations you included (111 and 112).

14. Lines 276-277. This might be a little confusing for the reader. So the tests were conducted on the transformed data; that is good. But the non-transformed values are reported. Why this choice? The transformation was presumably conducted to make the dataset normally distributed, yet the non-transformed values are presented, so why conduct the transformation at all? You say the non-transformed values give greater clarity, but sorry for not understanding this choice; it needs to be more explicit for the sake of future readers. Thank you.

15. Lines 323-325. Hold on a second. Here the Spearman coefficient appears out of nowhere, without anything in the Methodology section about its scope and what it will give the readers. Some things about correlations are mentioned in the theoretical part, but reading the theory again and again, I cannot understand what this correlation practically gives us. How do we interpret it? Why do we correlate them? And why use Spearman instead of Pearson? Spearman is deployed mostly on non-normally distributed datasets. Was Spearman conducted on the non-transformed dataset or on the transformed one? If the latter, then Pearson, which is conducted mostly on normal distributions, is needed. In any case, if there is a reason for conducting correlation analysis, then: A) state clearly why it is done and what it proves in support of the scope of the paper; B) state clearly which dataset the correlation analysis was applied to, the non-transformed or the transformed one (if the latter, Pearson is more appropriate); C) include scatter plots for all three correlations of the involved metrics, since high coefficients alone say almost nothing to a demanding reader.

16. Regarding Figures 6-8. They need improvement. What do these numbers mean on the vertical and horizontal axes, especially the horizontal one?
Although the comparison through the line is comprehensible, the rest is not. Also, white space can be minimized (where possible) by limiting the range of the vertical axis.

17. Line 405. You state "that these ranked lists can be used for research and other purposes." OK, but for what other purposes? This is a little general; better to be more explicit here and point out the other purposes.

18. Line 414. This citation (118) relates to the messy situation in scientometrics and has nothing to do with the web analytics of websites. Better to find something else, or remove it.

19. Line 420. There is no "installed correctly or the same on all the websites." The script is one: if it is installed, it produces numbers; if not, it produces none. Of course, there can be misconfigurations in GA's connections with Google Ads, Search Console, or other platforms and in their produced metrics, but the three metrics used here are either measured properly or show zero values if there is a setup problem. In addition, if there are doubts about the proper installation of GA, why not use Google's Tag Assistant Legacy browser extension during data collection? This tool identifies errors in analytics installation (see https://chrome.google.com/webstore/detail/tag-assistant-legacy-by-g/kejbdjndbnbjgmefkgdddjlbokphdefk?hl=en).

20. Lines 433-438. Again, regarding the bounce rate metric: this justification contradicts the aforementioned definition of bounce rate in Table 2. If Duration is to be considered the third central measurement of Web Analytics, why choose bounce rate, which the literature regards as at least contentious with respect to duration validity, rather than Visit Duration (SW) and Avg. Visit Duration (GA) for the comparison? This would eliminate all these doubts about bounce rate validity.

21. Line 525. Regarding the two citations on this line: the first (119) refers to issues with GA setup errors, but none of those administrator errors affect the three metrics involved here. If demographics were involved, then yes, there would be validity problems; but none of Alex Ramadan's statements affect Total Visits, Unique Visitors, or Bounce Rate. The other link (citation 119) is broken (a 404 page).

22. Regarding the reference list. Citations 28, 32, 33, 54, 55, and 84 are broken or not working properly.

End of comments/suggestions. Thank you for this opportunity.

Reviewer #2: This is a very well-written manuscript, very easy to read, and the material is well organized. The manuscript deals with an important problem area: the accuracy of popular website analytics and traffic estimation services (e.g., Google Analytics and SimilarWeb). It identifies and addresses a research gap: a lack of academic research and interest in studying web analytics. To improve, can the authors provide more insight into why there is a lack of attention among academics currently studying this phenomenon? In the abstract, rather than saying the accuracy of metrics provided by Google Analytics and SimilarWeb will be discussed, provide a short sentence or two that describes the accuracy of these metrics. In the paper, provide more insight into what the impact of SimilarWeb providing conservative traffic metrics compared to Google Analytics actually means in practice. Why should we care? Why is it important to know that SimilarWeb and Google Analytics can be used in a complementary fashion when direct website data are not available? How important is this to know? Elaborate more on the implications of this research. There are 143 references included in this paper. This is great, but over the top; I think the references can be reduced to a more significant subset, which would reduce the paper's word count.
What impact do the study's findings have on the user-centric, site-centric, and network-centric approaches to web analytics data collection identified earlier in the paper? The USA represents half of the 86 websites studied, and news and media content represents 42% of them. It would be good to further describe how this skewed sample affects the findings and the interpretation of the results.

Reviewer #3: The authors conducted a comparison between Google Analytics and SimilarWeb based on analytics metrics data. The results provide both theoretical and practical implications. The paper is clearly organized and well written. With some minor improvements, this piece is worth publishing, and I have a few specific suggestions. First, the authors need to justify their selection of total visits, unique visitors, and bounce rate as the three metrics. Why exclude other common metrics such as time on site/page? Second, all three hypotheses are supported, but how does this help evaluate the accuracy of the two analytics services? I think it is impossible to indicate which one is more accurate given the significant differences between them on the three metrics. Finally, I suggest that the authors improve their discussion section by providing more insight into the causes of the differences between Google Analytics and SimilarWeb. A minor problem: I’m confused by the statement “The techniques used by SimilarWeb are similar to the techniques of other traffic services, such as Alexa, comScore, SEMRush, Ahrefs, and Hitwise.” (Page 5, Line 100). While Alexa and comScore are user-centric, SEMRush, Ahrefs, and Hitwise are network-centric. Why “similar”? What are the “techniques”?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.
Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Georgios A. Giannakopoulos and Ioannis C. Drivas
Reviewer #2: No
Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
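Editor's note: the statistical workflow Reviewer #1 requests in comments 13-15 (test normality, apply the Box-Cox transformation, re-test normality on the transformed values, then run the paired t-test, choosing Spearman for raw non-normal data and Pearson for transformed data) can be sketched as below. This is a minimal illustration using synthetic, hypothetical traffic figures, not the paper's actual data or methods; the variable names `ga` and `sw` are stand-ins for the two services' metrics.

```python
# Sketch of the reviewer-requested workflow, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical right-skewed "total visits" figures for 86 websites,
# as if reported by two analytics services.
ga = rng.lognormal(mean=10.0, sigma=1.5, size=86)
sw = ga * rng.lognormal(mean=-0.3, sigma=0.4, size=86)

# (1) Shapiro-Wilk on the raw data; a small p-value indicates non-normality.
p_raw = stats.shapiro(ga).pvalue

# (2) Box-Cox requires strictly positive data; it returns the transformed
#     values together with the fitted lambda.
ga_t, lam_ga = stats.boxcox(ga)
sw_t, lam_sw = stats.boxcox(sw)

# (3) Re-test normality on the transformed values (the step the reviewer
#     says is missing from the manuscript).
p_transformed = stats.shapiro(ga_t).pvalue

# (4) Paired t-test on the transformed data.
t_stat, p_ttest = stats.ttest_rel(ga_t, sw_t)

# Spearman (rank-based, suitable for non-normal data) on the raw values;
# Pearson (assumes roughly normal, linear data) on the transformed values.
rho = stats.spearmanr(ga, sw)[0]
r = stats.pearsonr(ga_t, sw_t)[0]

print(f"raw normality p={p_raw:.3g}, transformed normality p={p_transformed:.3g}")
print(f"paired t={t_stat:.2f} (p={p_ttest:.3g}), Spearman rho={rho:.2f}, Pearson r={r:.2f}")
```

Reporting both the pre- and post-transformation Shapiro-Wilk values, as in this sketch, is what would document that the paired t-test's normality assumption was met.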
| Revision 1 |
|
Measuring user interactions with websites: A comparison of two industry standard analytics approaches using data of 86 websites
PONE-D-21-03616R1

Dear Dr. Jansen,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Hussein Suleman, PhD
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.
Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author Please use the space provided to explain your answers to the questions above.
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: A big "Well Done" to the authors. They have addressed all the suggestions and comments to improve the quality of the paper. Every suggestion has been taken into consideration, and everything that might confuse future readers has been corrected. One of the most crucial aspects (the bounce rate involvement) has also been overhauled with clarity and in a well-organized way. This is a tremendous effort on your part: one step further, you kept the bounce rate and added one more metric. I think the paper now stands sufficiently and constitutes a scientific work that holistically improves the Web Analytics research topic.

Reviewer #2: The authors have adequately addressed all prior concerns that I (Reviewer #2) previously raised. They have also enriched the quality of the manuscript by adequately addressing all the detailed concerns outlined previously by Reviewer #1. The paper is ready for publication.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Prof. Georgios A. Giannakopoulos
Reviewer #2: No |
| Formally Accepted |
|
PONE-D-21-03616R1 Measuring user interactions with websites: A comparison of two industry standard analytics approaches using data of 86 websites Dear Dr. Jansen: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Hussein Suleman Academic Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.