Peer Review History
Original Submission: November 16, 2022
Dear Mr Liao,

Thank you very much for submitting your manuscript "Unsupervised learning reveals interpretable latent representations for translucency perception" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, provided that you modify the manuscript according to the review recommendations.

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note, while forming your response, that if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments. Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Roland W. Fleming, PhD
Academic Editor
PLOS Computational Biology

Thomas Serre
Section Editor
PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments.
If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately.

Reviewer's Responses to Questions

Comments to the Authors: Please note here if the review is uploaded as an attachment.

Reviewer #1: This paper reports unsupervised learning of translucency. Overall, I find the paper quite thought-provoking and the writing to have above-average clarity. The stimuli are rectangular prisms or ellipsoids made from various forms of soap that vary strongly in translucency. My impression is that nearly all of the example surfaces shown in the paper's figures appear to have strong back lighting. I think it is worth commenting in the main text on the distribution of illumination directions in the set of images used to train the network.

The experiments that are particularly interesting manipulate the middle layers of the network, which appear to encode information strongly related to material appearance. Specifically, the results suggest that middle layer 9 encodes information about opacity and translucency. I find that result very interesting, and it is at this point in the paper that I wanted the text to provide more detail. The basic idea is to interrogate what the network has learnt by editing the information in one layer and observing the effect on the synthesised image. Say the network's input image is a bar of opaque soap. All of the information needed to reproduce that image is held fixed except the information in the layer whose "function" is being interrogated. The information in that layer is then replaced with the same layer's state when the input is a different surface, say a bar of translucent soap. The interesting result is that the network now synthesises an image of a surface that has the same 3D shape as the opaque soap but appears translucent. It is at this point in the text that I think it would be useful to comment on the extent of the differences between the two surfaces that are being "mixed together". How different are the overall sizes of the surfaces, their 3D shapes, colors, and illumination contexts?

The only other question I have is whether all the materials and code needed to reproduce this interesting result will be made available and easily accessible.

Reviewer #2:

SUMMARY

A beautifully illustrated paper presenting a very successful technical achievement in training a generative DNN to learn the visual appearances of simple objects varying in shape, colour, and translucency. The network is trained using a custom dataset of real photographs (rather than the easier-to-collect rendered images more often used in similar work) and can generate images at an exceptionally high resolution (1024x1024 pixels). Three human behavioural experiments confirm that generated images are in many cases indistinguishable from real photographs (Expt 1), span a similar range of apparent material properties (Expt 2), and can be made to morph coherently along different appearance dimensions (Expt 3).

MAIN COMMENTS

I'm not sure how surprising it is that "human-understandable scene attributes emerge" in the latent code; this has been shown several times in several domains for GAN-based DNNs (e.g. with faces, chairs, landscapes), and is arguably necessarily true of any model that can smoothly interpolate a series of plausible objects morphing between two existing objects. Likewise with the analysis of spatial scales: it seems intuitive that small-scale layers contain information about colour (which is present within the RGB values of even small kernels), whereas coarse-scale layers specify shape (a whole-object attribute); does this tell us anything new? This makes me wonder whether the authors satisfy their stated goal of "discovering perceptually relevant image features" underlying translucency perception.
The network does indeed seem to have learned these features in a summary code that the authors are able to navigate; but do we end up with a clearer description of *what* these features are than we had before? Specifically, it would be wonderful to provide evidence that the representation of translucency is "human-like", rather than (just) optically accurate, e.g. perhaps by testing for quirks of human translucency perception such as the lower apparent translucency of greyscale images.

Having said that, I think the paper works as an inspiring demonstration of the potential value of carefully curated generative networks in perception science and psychological research more broadly. The combination of network architectures (StyleGAN + a pixel-style-pixel encoder to project new images into its latent space) is well chosen: these architectures have been little used in the perception science literature, but they solve the challenge of training on this limited dataset excellently and provide a navigable and interpretable latent space. Despite the very constrained application domain (a dataset of photographs of 60 individual soap objects), this in itself is a useful technical contribution to the field.

The authors make reasonable and restrained interpretations of the results. They explicitly avoid claiming that either the architecture or the learning process of the StyleGAN-based TAG network constitutes a good mechanistic model of human material perception. Instead, they focus on the network's value as a tool for data-driven discovery of complex image features that are candidates for forming the perceptual dimensions of material perception. I find this an admirably cautious and nuanced use of DNNs, and I'm sympathetic to their conclusion that "learning the scale-specific statistical structure of natural images might be crucial for...material properties".

MINOR COMMENTS

1. I'm a bit skeptical of the claim made in the Discussion (around line 307) that models without a multiscale representation (e.g. DCGAN) aren't capable of capturing translucency. Previous work shows that GANs trained on rendered images can capture glass-like transparency reasonably well: Tamura, Prokott & Fleming (2022), https://jov.arvojournals.org/article.aspx?articleid=2778652. And "Stable Diffusion"-type models currently show an outstanding ability to render the nuances of materials, including translucency; do these also have explicit multi-scale latent codes?

2. Why ask both about "translucency" and about "see-throughness"; are these distinct concepts? The explanation given (that these were "found to be descriptive" of translucent objects in a previous study by Liao, Sawayama & Xiao (2022 JoV)) doesn't fully explain it, since that study doesn't seem to have asked people to rate the two dimensions simultaneously, but rather asked for a binary translucency judgement followed by a continuous "see-throughness" judgement, and found these were closely related.

TYPOS etc.

The paper generally reads very fluidly. There were just a few phrases that read oddly to me, e.g. from the first couple of pages:
- line 35: "difference between raw and readily cooked food"
- line 43: "...and in the mean time, humans may lack precise descriptions"
- line 61: "Some recent works in perceptual system..."

Reviewer #3: This paper presents an unsupervised approach that reveals a layered representation of perceived translucency, at least for the subset of cases included in the study. It was a pleasure to read; the authors motivate their decisions convincingly enough and summarize their main findings as they move along, which always helps the reader. I found the methodology solid, the write-up excellent, and the figures well explained and informative. The perception of materials in general, and translucency in particular, is an open topic spanning different fields of research, from neuroscience to computer graphics and even robotics. I believe the paper makes a useful contribution in this regard.
I found the resulting layered space intriguing, and it seems to work well. I don't usually review or read papers in this journal, so it may be that I'm miscalibrated, but all in all I think this is a very good paper. I only have a few questions, comments, and suggestions.

Fig. 9 in the supplemental is used by the authors to illustrate how other architectures (DCGAN) fail to generate good results. However, other than the shape, the results (for the most part) actually seem pretty decent regarding translucency. It'd be great if the authors could offer more insights about this.

The user study at the beginning (summarized in Fig. 2): Isn't one second too short to view the images? I wonder if the results would have varied had the users been given a bit more time to analyze the images in a more relaxed way. If possible, extending the test to maybe 5 seconds would be useful (perhaps with a randomized subset of images, so as not to make it too long).

As with most papers, the results and conclusions are, strictly speaking, only valid for the particular subset of examples used. In that sense, and this is really my only "major" concern, some parts of the paper, including the title, may be a bit overstated. The authors have used only one class of translucent materials, with basic geometries and one particular, strongly angled lighting (to emphasize the translucency effect). How do the results generalize to other setups? This would probably need to be discussed at the end of the paper, possibly showing some failure examples along with some insights as to why they may occur.

A couple of relevant papers on the topic of discovering perceptual spaces for materials could be added:
- Pellacini et al. Toward a psychophysically-based light reflection model for image synthesis. SIGGRAPH 2000.
- Serrano et al. An intuitive control space for material appearance. ACM Transactions on Graphics, SIGGRAPH Asia 2016.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians, and variance measures should be available. If there are restrictions on publicly sharing data or code (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: None

Reviewer #2: Yes

Reviewer #3: No: Maybe they did. Sorry, but I had to review the paper on a train and couldn't check.

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Katherine Storrs

Reviewer #3: No

Figure Files: While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool.
If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements: Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology, see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility: To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References: Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.
Revision 1
Dear Mr Liao,

We are pleased to inform you that your manuscript 'Unsupervised learning reveals interpretable latent representations for translucency perception' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted, you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests. Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology.

Best regards,

Roland W. Fleming, PhD
Academic Editor
PLOS Computational Biology

Thomas Serre
Section Editor
PLOS Computational Biology

***********************************************************
Formally Accepted
PCOMPBIOL-D-22-01676R1
Unsupervised learning reveals interpretable latent representations for translucency perception

Dear Dr Liao,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Anita Estes
PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom
ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.