Peer Review History
Original Submission: August 14, 2020
PONE-D-20-25471
The Simpsons did it: exploring the film trope space and its large scale structure
PLOS ONE

Dear Dr. García Sánchez,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Feb 26 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Chi-Hua Chen, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly
Reviewer #4: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: N/A
Reviewer #4: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This paper provides insights into understanding the tropes that compose films. The authors study data from the TV Tropes and IMDb websites using scientometric and complex network methods. The dataset is interesting and well described. The paper has a proper introduction to the research questions. However, there are a number of things regarding the description of the methodology that could be improved. Below are comments that could help in making some sections clearer to a reader:

1. The authors often use "co-words network" and "co-occurrence network" as metaphors for the "co-tropes" or "co-films" network. This could be misleading and confusing, e.g. Algorithm 1, "Fig. 8 shows the spanning tree of the co-word network", ...

2. The authors use the Jaccard index to compare tropes between two genres. The explanation "the Jaccard index is computed for each set of tropes within the genres in respect to the rest of genres tropes." is not precise.

3. The authors should define the difference between a community and a cluster. Sometimes it seems that the authors use these two terms as synonyms. However, in l288, p8 they write: "This means that a film can belong to more than one cluster for the same community."

4. The authors should provide exact definitions of density, centrality and central trope.

5. In Table 2, the authors should state that data for the period (2000,2020] is not complete.

6. In Tables 4 and 5, the authors should increase the font size. Table 4 is not readable. Also, the quality of Figures 5, 6, 8 and 9 is very poor and it is difficult to interpret the results.

7. The language needs to be improved since the manuscript contains a number of grammatical errors throughout (the caption of Fig. 1, etc.).

Reviewer #2: In this interesting manuscript, the authors analyse the distribution and association of tropes in movies, i.e. what they call the troposphere. They do this by relying on network-based and scientometric tools, which makes complete sense, as keywords in papers are like tropes in movies. The manuscript is really interesting and well written. To be honest, I have no major comments on it; for me, it could be published as it is. I'm just adding below a couple of "ideas" (please disregard them if not feasible), and some minor comments.

"Ideas":

Evolution of trope reception. The authors have calculated the relationship between tropes and their reception, i.e. the associated films' popularity. Yet, can it be that this association is temporally dependent?
Imagine a film introducing a completely new trope, something never seen before. That movie could be appraised for being really original. Now, if hundreds of movies start using the same trope… that may get boring, and the associated rating may drop. Is there any way of measuring this? (This last part reminds me of the YouTube channel CinemaSins… if you know it, you will surely understand why I'm suggesting this!)

Trope prediction. I would have enjoyed a final (speculative and personal, of course) prediction in the conclusions. For instance, having seen the data, which tropes are going to gain momentum in the future?

Minor comments:

- Line 138: "quality and popularity of the films". Is quality a good word here? It suggests an objective measure, such as how good a film is from a technical perspective… but it's not clear to me how this can be measured, or whether it is actually taken into account in the manuscript.

- Line 257: "For the 100 most voted or rated tropes in genres." Any rationale for choosing 100?

- Algorithm 1. Is this really needed? I mean, anyone working in networks will know how to create a co-occurrence network… It seems to me that the textual explanation is more than enough.

- Line 280. "Clustering" may not be the best word here, as in complex network theory this reminds me of "assortativity"… why not just use modularity? Or "community detection"? At least, I would clarify that "clusters" and "communities" are used interchangeably in the paper, and that the former does not refer to transitivity.

- Line 517. "overview of which of these tropes may be interesting to develop in the future". Not really, if I understood this correctly. That is, ascending and declining tropes are in the same quadrant… so, how can they be discriminated? With a time-dependent analysis?

- Some typos. Line 219: "an specific methodology" -> "a". Line 294: "To confirm that there is statistical significance between" I think "relationship" is missing… Same at line 427.
Line 550: "gender"? Or genre?

- On a side note, I find the abstract a little long. The third paragraph is perfect, but maybe the first two could be merged and eventually synthesised a little? In any case, this is more a question of style (and taste), not of scientific content.

Reviewer #3: This is an interesting paper studying the plot of stories. While the results are relevant and worth publishing, some issues should be addressed before this paper can be considered adequate for publication:

- The section on related works could be improved by mentioning similarities and differences with stories and texts modeled as co-occurrence networks. See and mention e.g. doi: 10.1209/0295-5075/114/58005, doi: 10.1016/J.INS.2018.02.047.

- Some steps in the methodology should be clarified. For example, "adjustments of the dataset" is not informative. This could mean anything.

- Methodological steps should be presented in a better way. For example, performance analysis might include performance measures and visualization. Alternatively, clustering analysis could include items 6, 7 and 8.

- A descriptive analysis of the dataset could be provided when presenting the dataset. This cannot be considered a main result.

- It is not clear why the Leiden algorithm has been used for this analysis. I suggest using SBM to check the robustness of the results.

- Could your methodology be used in practical contexts, e.g. to make any type of prediction?

- It is not clear why the word analysis mentioned here is related specifically to bibliometric analysis. Co-occurrence analysis has been used in many contexts other than scientometric analysis, including e.g. traditional text analysis: doi: 10.1140/epjb/e2008-00206-x, doi: 10.1016/j.physa.2018.03.013.

- The figures are of low quality. I suggest using a vector format. I cannot read most of the labels in the larger network.

Reviewer #4: This paper investigates the nature of tropes in films using crowd-sourced data as its primary source material.
The paper offers simple summary statistics of the data before applying statistical methods to develop network views in the spaces of tropes, genres and films.

First, a caveat. This reviewer is a physicist with a background in statistics of large-scale cosmic structure and no experience in reviewing Film, Television and Media (FTVM) literature. Like most adult humans, I do have some experience watching movies and the Simpsons.

The authors do not make clear who the primary audience for this work is. Is this an academic work aimed at FTVM faculty? A practical work aimed at studio executives and/or independent filmmakers? An exploratory work aimed primarily at burnishing the reputations of the authors among a burgeoning group of trope analysts? The text is unclear.

In my opinion, the present draft has too many deficiencies to warrant publication as is. A revised version could be acceptable if it addresses these key, enumerated issues.

1) The tone of the work is too authoritative given its exploratory nature. For example, the abstract states that "Developing plots or narrations requires understanding the relationships between tropes in existing works,…" This is an overstatement; there is no such requirement. Line 11 of the main text claims we will "understand" human behaviors from crowdsourced data when it would be better to say that such sources offer a "window toward understanding".

2) The figures and tables are often difficult to read, particularly in printed form. Figures 6 and 8 are particular examples in which important labels are hard to make out. Tables 4 and 6 have very small font sizes.

3) The authors should do more to critically address the quality of their data. Should all of the nearly 26,000 tropes be considered independent? Is popularity on IMDb different from that on other ratings platforms?
The authors did not provide a rationale for why the specific data sets (TVTropes.org; IMDb) were used over, or to the exclusion of, other databases such as allthetropes.com, Metacritic, Google User Reviews or Rotten Tomatoes. While the authors acknowledge bias in the databases used, cross-referencing across multiple review databases would potentially produce different complex network results.

4) The language in the methodology section is awkward and contains grammatical errors that negatively affect readability. I would recommend editing for English clarity.

Overall I was hoping to learn something new from this paper but came away mostly confused. The authors would do well to embrace the "less is more" philosophy here. A study confined to the top 100 most frequent tropes across the top 10 genres would be easier to digest compared to one that tackles everything all at once.

Other issues:

• Figures 1 and 3 would benefit from a log-log display (with special treatment for zeros), making the power-law nature of the frequency distributions more apparent.

• Lines 61,2: Sample biases must be understood to contextualize the understanding derived from network analysis.

• Line 80: Cite needed for the Leiden algorithm.

• Line 106: "solve" -> "address"

• Line 114: "actual one": how is the true trope nature of a film defined here?

• Line 122: who is the "us" being referred to here?

• Line 133: The community labels used here should be defined for the benefit of those unfamiliar with these terms. And/or a cite added.

• Lines 233,4,7: Examples of awkward English (Point 4 above).

• Line 281: A brief explanation of the Jaccard index is warranted here. And/or move up the explanatory equation.

• Lines 439-465: Table 4 is exasperating, nearly impossible to read. The description of the results here is helpful, but the fact that the Shout Out trope is nearly universal suggests that this trope should be considered separately from the others. Again, less could be more here.
• Line 513: grammar

• Line 550: gender -> genre

• Reference 1 is incomplete

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Massimiliano Zanin
Reviewer #3: No
Reviewer #4: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
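Two techniques that recur in the reviews above, the Jaccard index (which reviewers #1 and #4 ask the authors to define precisely) and the trope co-occurrence network behind Algorithm 1, can be sketched in a few lines. The film names and trope sets below are purely hypothetical, for illustration; this is not the authors' data or code:

```python
from collections import Counter
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard index |A ∩ B| / |A ∪ B|: 1.0 for identical sets, 0.0 for disjoint ones."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical film -> trope-set mapping (illustrative only).
films = {
    "Film A": {"Shout Out", "Jump Scare", "Final Girl"},
    "Film B": {"Shout Out", "Running Gag"},
}

# Comparing two trope sets: one shared trope out of four distinct ones.
print(jaccard(films["Film A"], films["Film B"]))  # 0.25

# Co-occurrence network: two tropes are linked if they appear in the same
# film; the edge weight counts how many films feature both.
edges = Counter()
for tropes in films.values():
    for pair in combinations(sorted(tropes), 2):
        edges[pair] += 1

print(edges[("Jump Scare", "Shout Out")])  # 1
```

In the paper's setting, genre-level trope sets would play the role of the two sets passed to `jaccard`, and the weighted edge list would feed the community-detection step.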
Revision 1
The Simpsons did it: exploring the film trope space and its large scale structure
PONE-D-20-25471R1

Dear Dr. García Sánchez,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Chi-Hua Chen, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.
Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

6.
Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have adequately addressed the comments and I can recommend the manuscript for publication. However, I suggest that the authors carefully read the manuscript and correct any remaining grammatical errors.

Reviewer #2: The authors have addressed all my comments, and the initial version of the manuscript was already good, so I'm OK with this version.

Reviewer #3: All issues have been addressed by the authors. The authors have clarified my previous concerns. I recommend this paper to be accepted.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Massimiliano Zanin
Reviewer #3: No
Formally Accepted
PONE-D-20-25471R1
The Simpsons did it: exploring the film trope space and its large scale structure

Dear Dr. García Sánchez:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Professor Chi-Hua Chen
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.