Peer Review History
Original Submission: December 8, 2021
PONE-D-21-37478
The Impact of Visually Simulated Self-Motion on Predicting Object Motion – A Registered Report Protocol
PLOS ONE

Dear Dr. Jörges,

Thank you for submitting your Registered Report Protocol to PLOS ONE. After careful consideration, we feel that it has merit but requires some revision to meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

All of the reviewers share my enthusiasm for the potential of your study, and each made reasonable requests, suggestions, and queries to help improve it so it will be suitable for acceptance upon completion. (Please note some of the auto-populated text provided by PLOS ONE might sound strange, as much of it is more suitable for a normal manuscript. It was clear to me and the reviewers what we were assessing here!)

Please submit your revised manuscript by Mar 30 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Michael J Proulx, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Thank you for stating the following in the Acknowledgments Section of your manuscript: “BJ and LRH are supported by the Canadian Space Agency (CSA) (CSA: 15ILSRA1-York). LRH is supported by a Discovery Grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada (NSERC: RGPIN-2020-06093). The funders did not play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.” Please note that funding information should not appear in the Acknowledgments section or other areas of your manuscript.
We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: “BJ and LRH are supported by the Canadian Space Agency (CSA) (CSA: 15ILSRA1-York). LRH is supported by a Discovery Grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada (NSERC: RGPIN-2020-06093). The funders did not play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.” Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

4. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter.
For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #1: Partly
Reviewer #2: Yes
Reviewer #3: Yes

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.
Reviewer #1: Yes
Reviewer #2: Partly
Reviewer #3: Yes

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics. You may also provide optional suggestions and comments to authors that they might find helpful in planning their study. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The proposed experiment is solid.
It is however not set up clearly in the introduction: why does this need to be done, what gap in the literature does it fill? It is unclear to me what we learn from this effort. For the rest, I have minor comments:

First few sentences of abstract: velocity and speed are both used intermixed; to my understanding they do not have the same meaning. Stick to velocity until you get specific about speed.

P2, lines 13-18: the modelling results by Layton & Niehorster, https://doi.org/10.1371/journal.pcbi.1007397, are of relevance here. Further relevant to this page is https://doi.org/10.1177/2041669517708206.

P2, line 32: is there a “we” missing in this sentence about commitment? More generally, I am wondering why this complicated paragraph is needed at all, or at least it could be built up differently. It has been shown previously that for static observers of simulated self-motion, flow parsing is incomplete, i.e., not all of the self-motion is removed from the retinal motion of the object when judging object motion (see e.g. several works by Warren & Rushton; Niehorster & Li, 2017). You can then choose to speculate on some reasons, but label them as such from the beginning; the start of the story is that flow parsing is incomplete. That makes this easier to read and sets up the story more directly.

P2-3: I think reference should be made to the work of Wertheim, e.g. Wertheim, A. H. (1994). Motion perception during self motion: The direct versus inferential controversy revisited. Behavioral and Brain Sciences, 17(2), 293–311. https://doi.org/10.1017/S0140525X00034646; Wertheim, A. H. (2008). Perceiving motion: Relativity, illusions and the nature of perception. Netherlands Journal of Psychology, 64(3), 119–125. https://doi.org/10.1007/BF03076414. He has theories and experimental work about how visual reference signals indicating the velocity of the eyeballs in space affect perception of motions in that space, both in terms of threshold and noise.
That seems very directly related here, and if I remember and understand his theory correctly, it yields the additional prediction that the opposite motion condition will yield less overestimation than the same motion direction condition yields underestimation, because a subtractive JND comes into play in both cases.

The arrows in figure 1 are unclear to me: why is noise increasing three times along the flow parsing pathway?

I’m struggling a little bit with the word predicted. At least, it is not motion that is being predicted; it is time to contact that is being predicted. The motion is known, albeit presumably misperceived, and supposedly held constant to yield the time to contact estimate. This held-constant assumption is critical for your analysis. This is sloppy around the end of page 3, for instance, but occurs throughout. Perhaps the term extrapolation fits better here? Relatedly, p4, line 4: remove the word motion from motion prediction, then the sentence makes sense to me.

P4, line 6: estimates; judgments may be a more appropriate word? Or percepts?

P4, line 25 and other links: please print the actual link in the article text. This way it is preserved better, e.g. in case the article is printed.

P4, line 35: “you”: rewrite.

P5, lines 6-7: rather critical sentence is incomplete; what happens after occlusion duration?

P5: why a Gaussian velocity profile and not just a constant? This seems to me to complicate the situation. Assuming incomplete flow parsing, the object to be judged will also be seen to accelerate and decelerate (or vice versa) in the two movement conditions. What object speed is then used for the judgment, some kind of (weighted) average?

Do participants receive instructions about head motion? What happens to their view of the virtual world when they move their head? Does it counterrotate so that the virtual environment is perceived as rigid?
Would head movement make your data harder to analyze / add additional unwanted variability to your study? (E.g. it could conceivably differ between conditions, although I am not directly aware of studies suggesting that the lateral simulated self-motion will induce head motion.)

P5, line 23: is your task doable at very short durations (0.1 and 0.2 s especially)?

I see that the ball casts a shadow on the ground. Is that a deliberate choice? Speed of the shadow over the tiles on the ground (relative speed between the two) is a direct cue to ball speed that could be used more straightforwardly than motion of the ball itself.

Speed estimation: describe more clearly that it’s only the balls in the ball cloud that move, not the whole scene.

Is 37 trials sufficient for a JND estimate? Simulation work by Prins on his Bayesian staircase suggested you need more like 100 or so, if I recall correctly.

Are participants able to fixate the cross when it is so close to the rapidly moving ball cloud? Do you experience induced motion in the fixation cross during simulated self-motion? May the opposite direction of this induced motion confound your results?

Do you use the same participants for the two tasks? Results such as Niehorster & Li 2017 suggest there may be wide variation between participants in how complete flow parsing is. That should give you ample variability between participants to do a strong test of correlation for your H3.

P7, lines 18-22: why do you assume that flow parsing is incomplete only in the opposite motion condition, but not in the same motion condition? That needs to be justified. Same for the precision prediction.

P8, assumption atop the page: you also assume the distance is perceived correctly and is not affected by background motion. Can you justify this? If both perceived v and d may vary between conditions, you have a problem.
Reviewer #2: The study aims to examine an interesting topic: how self-motion influences prediction when interacting with moving objects. The abstract mentions a clear prediction: that perceived object speed is likely to be biased and more variable during self-motion, because separating object motion from self-motion might give rise to systematic errors and must presumably give rise to more variability. The authors propose to study this by presenting virtual moving targets during simulated self-motion (or absence thereof). They examine judgments of target speed by having participants compare the speed of the target with that of a moving cloud of dots within a static window. They examine prediction by having participants press a button when a ball reaches a target. The ball is occluded before it reaches the target.

The prediction mentioned in the abstract makes perfect sense, but I feel that it is a very weak prediction: that there will be a correlation between performance in the two tasks. Specifying what one expects to be correlated might change this. I think it is a bit trivial that performance on the two tasks across different speeds of self-motion is correlated, but maybe the authors are referring to correlations across participants within each value of self-motion. Otherwise, maybe it makes more sense to check whether the values are similar, rather than only whether they are correlated (as in de la Malla et al., 2018, Errors in interception can be predicted from errors in perception. Cortex 98, 49-59).

I actually see a theoretical complication in interpreting the data. Since the self-motion presumably shifts the goal (the target rectangle) as much as it does the ball, why would you expect any bias in judging self-motion to influence the timing of the tap? I think this needs to be explained.

Another issue that needs justification is the use of a fixation point. Apart from making the task quite unnatural, it also introduces many complications.
First of all, how will fixation be ensured? It is very difficult to keep fixating while making judgments about moving targets, and small periods of pursuit at critical moments might influence one’s judgments. Secondly, the participants might make several of the judgments with respect to the fixation point. The fixation point does not move with the simulated self-motion, so its motion relative to the surrounding also needs to be interpreted. It also provides a reference in time for the button press: the time it took the ball to reach/cross fixation. At the very least the authors should explain why they have a fixation point, and how this might influence their results. I would consider not requiring fixation.

When the authors write “the process of flow parsing should add noise and lead to object speed judgements being more variable during self-motion” they are actually making some assumptions. Although these assumptions are probably reasonable, I think the authors should be explicit about the details. Assuming that people use some kind of flow parsing mechanism to separate object motion from self-motion, they presumably also have to do so when there is no self-motion. Thus, the assumption is that speed judgments become more noisy when self-motion is faster, just as they become more noisy (at least in absolute terms; it could be a fixed Weber fraction) when the object moves faster. Being very explicit about the assumptions will help the reader follow the reasoning.

It might also be worthwhile more explicitly considering the consequences of the visual self-motion information being in conflict with information from other sources. Following Figure 1, perceived self-motion should be weak because 3 of the 4 ‘senses’ of the multisensory integration indicate that there is no self-motion. There is no evident reason for an asymmetry between motion with or against the ball. I am also not so sure about this interpretation of ‘flow parsing’.
Flow parsing refers to the ability to separate object motion from self-motion from the visual information alone. That is indeed necessary for the proposed processing, but I don’t think that a multisensory value of self-motion is normally considered as an input to flow parsing, so maybe the terminology should be adjusted here.

Actually, many of the claims and assumptions do not appear to be necessary for answering the question as to whether biases in speed estimation give the anticipated errors in prediction, so probably the introduction (and methods) can be simplified. Moreover, the last pair of hypotheses are what the authors really want to test (I think). They need to check that their manipulation (simulated self-motion) influences judged object speed (and its variability), but actually they already know that it will. Hypothesis 2 is therefore a bit superfluous. They plan to examine whether motion prediction is also influenced (Hypothesis 1) and whether it is influenced in the same manner (Hypothesis 3). If it is influenced in the same manner, it must be influenced, so Hypothesis 1 is also superfluous. This gives a much clearer study with one hypothesis (with two components: bias and variability).

There are also a number of things to consider in the methods. Especially if people will be tested at home, the authors might want to consider the extent to which participants are allowed to move their heads, and whether such head movements will be compensated for.

Why are stereoblind participants excluded? Do the authors expect their performance to be different?

Is it a good idea to always center the trajectory in front of the observer, especially when that position is indicated by a fixation point? Maybe the authors should consider adding some jitter to the position. Otherwise the task could be performed by pressing the button after the same time from when the ball reaches fixation as the time between the ball appearing and it reaching fixation.
It appears to me from the video that the target disappears when the participant presses the button. Is that correct? This should be mentioned explicitly. Since the task is to press the button when the ball would hit the target, this task could be interpreted as judging the time of collision of two moving items, rather than in terms of self-motion. If the target’s motion is underestimated due to motion in the surrounding, one might therefore find no effect even though the hypothesis is true. Is there some reason to exclude this possibility?

Why was this velocity profile chosen for the self-motion? Not having a constant speed means that the response could be different for the two tasks simply because the moment that is considered relevant is different: for judging speed, presumably only the average speed is relevant, whereas for prediction the change in speed is presumably also relevant.

I assume that the training on the prediction task was always with the observer static. This should be specified.

Why is there no target in the speed estimation task (in the condition with a single ball)? Might this not influence the comparison?

Nice instruction video! I assume you also have a version with the other order.

The status of the assumptions in the predictions section is not quite clear. Some of the assumptions are predictions based on earlier findings, but if the current results turn out to be slightly different it is not a problem. For instance, if the speed of the ball is overestimated by 30% rather than 20% at this speed (or the Weber fraction is not 10%), the reasoning will still hold in the same manner. In the case of the variability it might even be a problem if the results were identical to the previous ones (no influence of self-motion). The third assumption is very philosophical. How would you know whether they have the same bias other than by comparing performance in the two tasks, which is what the study was planned to examine, so it cannot be an assumption.
The same is true for Equation 3. The Weber fraction of 5% for distance judgments is presumably really an assumption that must be considered when converting speed judgment uncertainty into temporal uncertainty using Equation 3. Maybe explain exactly how this is done and therefore how sensitive the result is to deviations from this value.

In the motion prediction section I think it would be a good idea to clarify that certain predictions are based on earlier research, while others are based on reasoning. This might be important for the interpretation, because not finding the asymmetrical influence of background motion, for instance, need not affect the general conclusion, whereas not finding an increase in the standard deviation with the magnitude would make some of the proposed analysis meaningless.

In Figure 5A I am guessing the y-axis should be in s, not m/s. Why do the authors anticipate precisely this relationship? I think the authors can be a bit more specific about the actual values. Presumably these duration values are obtained by multiplying the difference in PSE by the occlusion time, or something like that. I would be specific, because that is what makes pre-registration a powerful tool. Figure 5B also confused me. If the authors expect such a mess, why bother?

I am not very familiar with the Wilkinson & Rogers notation, so I may be wrong, but it appears from Equation 4 that the authors assume linear, independent effects of observer motion, ball motion and occlusion duration. Why? Would you not for instance expect a larger effect of speed for a longer occlusion duration?

Just under that equation the authors speak of biases in timing error. Do they mean systematic errors? This is not really a bias but a potential finding: that observer motion influences timing errors. What would not finding such an effect mean? Maybe the target position is shifted to the same extent as the ball, so their effects cancel?
I see many potentially interesting issues to explore, but the idea of pre-registration is to precisely specify what you are testing. For this, I think the authors need to better specify which effect they expect and why. For equations 5 and 6 the measure is clear: all that matters is whether including the Motion Profile in the model provides a significant improvement. Finally, it seems that equation 9 is evaluating whether the judged speed influences the judged timing. Is this really what you want to know? Should you not be testing whether differences in judged speed can fully account for the differences in timing? By equations 10 and 11 you lost me completely. If the time difference and the JND difference are not independent, this might give confusing results.

In the power analysis I do not see any measure of the original assumed variability and effect size. Maybe I missed something.

Reviewer #3: Overall, I believe that your study covers an interesting and important topic. It is well designed and the hypotheses are clear and well based on previous literature. I have some suggestions for improvement listed below.

On Page 2, line 21 you mention that “in many virtual reality (VR) applications, vestibular and proprioceptive cues signal that the body is at rest, while the visual optic flow cues simultaneously indicate self-motion.” Could you provide some examples and also explain why this is the case just in some VR applications but not others (e.g. is it due to properties of the hardware or the virtual environment itself?).

On Page 3, lines 11-19 there seem to be references missing for some of the statements you make.

On Page 3, line 24 you say “it would seem logical that the prediction reflects this bias in motion estimation”. I would like to see a more detailed explanation for this assumption since it is critical for your study. I find that entire paragraph contains explanations that are a bit rushed and unclear.
I greatly appreciate the attention given to the participant sample size and counterbalance of gender and order of conditions. I am however not sure about the acceptance of any VR HMDs owned by participants. You mention that in-person testing would be conducted on a VIVE Pro Eye if granted permission, but it is expected that participants may possess different HMDs such as Quest 1 or 2, which have significantly different specifications and, most importantly, interaction methods (e.g. controllers). Especially for the time-sensitive stimuli that you present, it would be important to first conduct a pre-assessment of how your Unity code runs on these HMDs. I understand that due to COVID restrictions currently in place you would have to test remotely, but I believe more should be done to mitigate potential limitations arising from this. Perhaps one option would be to cap the framerate and field of view to certain parameters which are compatible with those HMDs that are lowest in terms of specifications that you would still accept in your study.

For both tasks it is unclear how the speeds and sizes of the stimuli were determined. Was that based on previous literature? If so, it should be mentioned; otherwise it should be based on piloting data.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site.
Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
The Impact of Visually Simulated Self-Motion on Predicting Object Motion – A Registered Report Protocol
PONE-D-21-37478R1

Dear Dr. Jörges,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements (note some of this text might not be well adapted for Registered Reports, so bear that in mind). All three reviewers replied with Accept, and I am also satisfied with the changes. Well done. Note two reviewers have included a few other suggestions you might find useful.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Michael J Proulx, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #1: Yes
Reviewer #2: Yes

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5.
Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics. You may also provide optional suggestions and comments to authors that they might find helpful in planning their study. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: The authors have done a very thorough and good job responding to my comments. I am looking forward to seeing the results of the study. One further reference that may be of interest is https://jov.arvojournals.org/article.aspx?articleid=2770910 Reviewer #2: The purpose and methods are now much clearer. Most of my concerns clearly arose from having misunderstood details of the experiment. Many of the issues have been clarified sufficiently, but there are still a few small things that I believe will be helpful to clarify or motivate at some time. The time course of the trials is now much clearer to me, which indeed solves many of my issues. I think it would be useful to modify figure 2C (and possibly D) to better match the actual experiment. This figure was the origin of much of my confusion, because in it the whole trajectory is more or less centred on fixation, rather than only the visible part. Probably the authors should also add something to illustrate when the observer moved (as now illustrated for the ‘invisible’ time). 
The pattern of events is quite complex, so it is good to have a reference. That might also help understand Table 1 which I still fail to understand, even when trying to work back from what it might mean. Is this the ball’s speed relative to fixation? I do not understand why the difference between ball speeds for the static observer changes in this manner. Or is there a typo somewhere? Should the first value be 20.4? Since the fixation point moves with the participant (but does not actually move) the ball moves in the opposite direction due to self-motion. That explains why opposite directions increases ball speed, but why this strange pattern for same speed? I feel that I am still missing something. It might help to always specify relative to what the motion is measured or described, because it is not intuitive. For instance, simulated self-motion does not correspond with retinal motion, because the participant is fixating a point that is moving along with the participant and is therefore static on the screen. The ball speed ‘should’ (according to the reasoning in the paper) be judged relative to the world, rather than relative to the observer. Are participants aware of this (is it part of the instruction)? In the timing task it is obvious because the self-motion affects the target, but in the speed judgment task this is not self-evident. All this could be problematic for the further interpretation, but not for the parts that are based on the authors’ previous work. I think being even clearer about the task and stimuli will make it easier for the reader to follow the reasoning. Another issue that I had not always interpreted correctly is the role of the simulations and which parts of the methods are about simulations. I think it does make sense, but sometimes it is not clear to me whether the data presented in the figures are the outcome of the simulations, and sometimes it is not clear whether part of the analysis also applies to the simulations. 
I would try to clarify this. For instance, in the figure captions simply replace “Predicted data …” (Figure 3) by something like “Data from simulations based on …”. In the ‘data analysis plan’ indicate which parts (if any) also apply to the simulations. Maybe also change the order of some sections, because I think that the power analysis is based on the same (kind of) simulations as the predictions. Thinking logically I can guess what the authors did, but it is better to be told explicitly. Details: The order of the bars is incorrect in Figure 4: opposite directions in yellow (legend) but rightmost is same directions (caption). The power seems to be 0.75 for precision in speed estimation in Figure 6 (so less than 0.85). I would explicitly mention that the fixation cross is static on the screen when mentioning that it moves with the observer (page 5 line 36). It is obvious when you think of it, but at this stage in the paper the reader does not need to think of this so it is worth pointing it out. In the next sentence I would also add the word ‘lateral’: The target is presented at a lateral distance that depends on the speed of the ball … The study is quite complex so it helps to guide the reader a bit. The phrase “Given the on-going COVID-19 pandemic …” is probably no longer relevant. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No |
| Formally Accepted |
|
PONE-D-21-37478R1
The Impact of Visually Simulated Self-Motion on Predicting Object Motion – A Registered Report Protocol

Dear Dr. Jörges:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff
on behalf of Dr. Michael J Proulx
Academic Editor
PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.