Problems with Fig.4
Posted on 21 Dec 2014 at 18:39 GMT
[PLOS ONE staff: This thread was initiated by a user whose identity we have been unable to verify, and who appears to be using a pseudonym. Our commenting guidelines require that users unambiguously identify themselves; however, given that the comments discuss items related to the scientific content of the article, we feel that it is useful to readers to retain these posts.]
Could the authors confirm that in their Fig.4 the full width at half-maximum (grey area) is for the entire images in the Stellacci paper in their ref. 24? Those images contain a large proportion of substrate, so the grey area would mostly represent the noise of the substrate's features, which are tip speed dependent, rather than the noise of the features on the nanoparticles, which should be tip speed independent.
Actually, that the grey area and the red dots follow each other with tip speed, while the blue and green dots are essentially tip speed independent, indicates that the authors are comparing different noises, which renders their point about the ripple features not being above noise level invalid.
The correct width of the grey area for the figure would be that corresponding to the noise arising from the nanoparticle surfaces, not from the whole images, whose noise is dominated by the substrate.
RE: Problems with Fig.4
Replied on 21 Dec 2014 at 22:03 GMT
Thank you for your interest in our paper. The paper clearly states that the FWHM is from the entire image. Let me summarise the text of this section.
The blue and green dots are from Stellacci's group hand-selecting particles (and measuring the spacings by eye); the red dots were from them hand-selecting "noise" spacings from the background feedback ringing. The issue here is that measuring spacings by eye is very open to interpretation. The spacings, measured by eye, span only 2-4 pixels in the uninterpolated images, which makes it nearly impossible to measure them accurately (in fact, in their analysis some of their assigned error bars were 0.026 pixels, which is clearly ludicrous!). Also, as the nanoparticles cover so few pixels, we cannot get a good reading from a Fourier transform of just the nanoparticle.
The idea behind Figure 4 of our paper is that if we take a Fourier transform of the whole image, we see a large broad peak for the background rippling; spacings within this peak are present throughout the whole image. Now, we see that all of the green and blue points are inside this background. Thus, it is possible to find spacings corresponding to these just from the background ringing. As the reported points were hand-chosen rather than rigorously picked with an unbiased metric, this does not count as strong evidence that the ripple spacings are actually speed independent, since these frequencies are present in the whole image.
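The whole-image procedure described above can be sketched in a few lines. This is a minimal illustration on a synthetic scan line, not the actual Fig. 4 pipeline: the ~3.5-pixel ripple spacing and the noise level are invented here purely to show the peak-plus-FWHM idea.

```python
import numpy as np

# Minimal sketch: Fourier transform a scan line, find the broad
# background-ripple peak, and take its full width at half maximum (FWHM).
# The data are synthetic (a ~3.5-pixel ripple plus broadband noise),
# not the published images.
rng = np.random.default_rng(0)
n = 512                                    # pixels in one scan line
x = np.arange(n)
line = np.sin(2 * np.pi * x / 3.5)         # background ripple, ~3.5 px spacing
line += 0.5 * rng.standard_normal(n)       # broadband noise

power = np.abs(np.fft.rfft(line)) ** 2
freqs = np.fft.rfftfreq(n)                 # spatial frequency, cycles/pixel

peak = np.argmax(power[1:]) + 1            # strongest bin, skipping DC
above = np.where(power[1:] >= power[peak] / 2)[0] + 1   # bins at/above half max
fwhm_px = (1 / freqs[above.max()], 1 / freqs[above.min()])

print(f"peak spacing ~ {1 / freqs[peak]:.2f} px, "
      f"FWHM {fwhm_px[0]:.2f} to {fwhm_px[1]:.2f} px")
```

Any hand-measured spacing that falls between the two FWHM bounds is then consistent with the background ripple, which is the test applied to the blue and green points.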
Figure 4 also shows that the hand-picking analysis was bad at determining the background ripple spacings, as their values are consistently too high, with one determined spacing falling outside the FWHM entirely.
In summary, Figure 4 shows that their measurement of the background ripple spacing (or "noise") was inaccurate, as were their assigned error bars. When the background ripple spacing is calculated rigorously in Fourier space, it becomes clear that all of their measured NP spacings fall within the background.
RE: RE: Problems with Fig.4
Replied on 21 Dec 2014 at 23:28 GMT
Thanks for your quick and clarifying reply.
You seem to be making the assumption that the background noise is rather homogeneous across the images, and therefore that the signal (the measured spacings or frequencies, the blue and green symbols) falls within the spatial frequency range of the background. However, this is not the case. You surely know that in STM imaging, quick changes in tip height, as occur in curvy parts of a sample, lead to much higher noise, while flat parts of a sample can be imaged with lower background noise. Comparing regions for which the noise level should be low to regions where noise levels are higher doesn't yield much useful information.
The spatial frequencies of the blue and green dots in the image should be compared to those of the background noise for the regions corresponding to the blue and green dots. But that's not what you did.
So the conclusion that can be taken from your graph is that the measured spacings for the NP ripples fall within the much noisier background from elsewhere in the image. This is expected, and not a meaningful comparison.
You should compare the spacings for the NP ripples to the noise on the same NPs. I understand that you may not have enough data to do that accurately, but you should see that the noise doesn't change with tip speed and thus ratify the findings of Stellacci.
RE: RE: RE: Problems with Fig.4
Replied on 22 Dec 2014 at 07:03 GMT
There are a few issues here.
First, the term noise does not properly describe the background ripples, which is why I put it in quotes in my comment. The ripples come from feedback ringing. Feedback ringing comes from the characteristics of the feedback loop; the ringing will be stronger over more curved surfaces, as sudden changes in height set the ringing off (which is why we do step analysis for control systems), but the ripples have the same source. This can clearly be seen in my simulations in Figure 2. It can also be seen between the nanoparticles in Figure 1(b), Stellacci's raw data from 2004 (without the contrast saturation), and in the larger images of Figure 3.
Secondly, there seems to be a contradiction in this comment: first you say "quick changes in tip height, as occurs in curvy parts of a sample, lead to much higher noise", and then "measured spacings for the NP ripples fall within the much noisier background from elsewhere in the image". The nanoparticles are more curved than the background, so by your first statement they should be noisier, yet by the second they sit in a much noisier background.
Back to the data. Let's take the blue points and convert from spatial frequency back into real space, in units of pixels rather than nm, as this gives an idea of the resolution. The spacings measured by eye are: 3.05 pixels, 3.53 pixels, 3.56 pixels, 3.46 pixels, and 3.62 pixels. Quite clearly we are limited by the pixel resolution, so it is pretty much impossible to know what to do with this. Are these really even constant? They do show a slight rise. Let's compare this to the pixel spacings of the background from the Fourier-space analysis: 2.63 pixels, 3.57 pixels, 3.45 pixels, 3.83 pixels, and 3.81 pixels. The supposedly unchanging spacings and the background never differ by more than half a pixel. This is hardly convincing evidence that the blue-point area has speed-independent spacings.
Let's assume we were to do the analysis you suggest. First we would have to hunt down the correct nanoparticles in the huge images, as the images shown in the paper are just crops of much larger scans. Hunting these down is very time consuming. So let's just do some of the maths of how the result would turn out. We know the nanoparticles chosen are about 10 nm across, which would make them 26 pixels wide. Let's round up to 30 pixels to be even fairer, and then up to 31 because it's simpler for the frequency axis. If we Fourier transformed the 31-pixel section and then converted the frequency axis back into real space, we would have three points between 3 and 4 pixels: 3.00, 3.33 and 3.75. There is no more information in the image to get a better resolution. This brings us to the underlying problem: the data collected simply are not good enough to observe the effect they claim to show.
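The bin-counting argument above is easy to reproduce. A discrete Fourier transform of an N-pixel window only offers real-space spacings of N/k pixels for integer bin indices k; N = 30 is assumed in this sketch, since a 30-pixel window (N/k for k = 10, 9, 8) reproduces the three quoted values.

```python
import numpy as np

# Resolution argument: an N-pixel DFT window offers only spacings N/k pixels.
# N = 30 is an assumption chosen to match the three values quoted above.
N = 30
k = np.arange(1, N // 2 + 1)              # positive-frequency bin indices
spacings = N / k                          # resolvable real-space spacings (px)
between_3_and_4 = spacings[(spacings >= 3) & (spacings <= 4)]
print(between_3_and_4)                    # three values: 3.75, 3.33..., 3.0
```

Only three distinguishable spacings exist in the whole 3-4 pixel range, which is why no sub-pixel spacing claim can be supported from such a window.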
I wish to reiterate my central point. Our analysis shows that the spacings hand chosen by Stellacci's group are consistent with the background ripple spacings. This doesn't mean it is impossible that features of this size exist. We never claimed that. Our claim is simply that the data is consistent with the hypothesis that these ripples are from the same source as the background ripples.
RE: RE: RE: RE: Problems with Fig.4
Replied on 23 Dec 2014 at 00:40 GMT
Thank you for the detailed explanations.
As you guessed, by "noise" I meant feedback ringing, which should be more pronounced when the tip encounters sudden changes in curvature, for example next to the edges of the nanoparticles. The curvature on the top regions of the nanoparticles should change more smoothly. Elsewhere, changes in height coming from irregularities on the substrate can also lead to higher noise.
Your pixel analysis is interesting, but once more you are comparing the noise of the full image with the signal from the particles, i.e., your green and blue dots in Fig.4. If you compare the pixel spacings for the blue and red dots, their difference should be significant, as your Fig.4 shows. And if, as you say, the image of a nanoparticle 10 nm wide comprises ~30 pixels, then the ripple spacings reported by Stellacci, ~1.4 nm +/- 0.4 nm, correspond to 4 +/- 1 pixels. I would say these are enough pixels to define the spacing.
What you call "background ripple spacings" seem to be a mash-up of heterogeneous noise, with an expected broad range of frequencies, coming from the entire image. The spacings given by Stellacci should be compared to noise on the nanoparticle regions only and this noise should be expected to be much smaller.
Comparing the signal coming from a low-noise area to high noise elsewhere in the image is not a meaningful comparison. This is the mistake in Fig.4.
RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 23 Dec 2014 at 19:19 GMT
We are going around in circles here. You admit it is not noise, yet you still incorrectly refer to it as noise. You also still say the "noise" will be smaller on the nanoparticles, despite ringing being stronger on curved surfaces and the nanoparticles being more curved than the substrate.
I will repeat my same point one last time in another way, step by step:
1. If the features on the nanoparticles came from ringing they would be of the same frequency as the ringing in the rest of the image.
2. We used Fourier analysis to look at the frequency of the ringing in the whole image.
3. From this we show that the spacings chosen by hand and measured by eye fall inside this background ringing.
4. We conclude that these features are consistent with ones which arise from ringing.
The problem is not with our analysis but with the data. Why didn't they reduce the gains so that the flat surface didn't ring? That is how everyone else uses the microscope. Why didn't they zoom in to get a decent number of pixels per ripple if their plan was to measure the ripples?
RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 23 Dec 2014 at 21:17 GMT
Thanks again for replying.
There may be feedback ringing in some parts of the sample, where there are sudden changes in height, for example. But your assumption that everything you see in the figure comes from feedback ringing leads you to the obvious conclusion that the features fit within the broad range of frequencies coming from background ringing. Essentially, what you put in is what you get out.
I don't know how to explain it more clearly: you have to compare the spacing on the particles with the frequency background for the particles, not for other regions in the image where the noise (be it background ringing or other sources) is surely higher.
According to your analysis, there are enough pixels to determine the feature spacings on the nanoparticles within 1 pixel of error, and these do not change with tip speed, which is an indication that there is likely no significant background noise when scanning the top of the nanoparticles.
I don't know why the authors did not reduce the gains or zoom in more. I guess they had to work within the constraints that led to the best quality images they could get. These seem to be difficult experiments. If you think it is straightforward to tune the parameters of the STM to get great quality images of these nanofeatures on curved surfaces, why haven't you done it yourself?
I am sorry to have to say that your Fig. 4 doesn't provide useful information and is misleading, but it is now clear that this is the case.
RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 23 Dec 2014 at 22:54 GMT
One thing I neglected to mention was that our analysis was on the error signal (the current image), so it is heavily biased towards ringing. The grey area shows where the ringing was concentrated. The features on the nanoparticles are in this region, thus they are consistent with features from ringing. I am at a loss as to what you find misleading/incorrect. Is it:
1. That the ringing covers the entire grey range
2. That the frequencies measured on the nanoparticles fall within the ringing range
3. That, as the features are in this range, they are consistent with ringing features.
1, 2 or 3?
You say that by my pixel analysis they are all constant to within a pixel. Well, let's round Stellacci's measurements for the "constant" spacings to one pixel: 3, 4, 4, 3, 4 pixels. Now let's compare this to the Fourier transform of the background ringing: 3, 4, 3, 4, 4. This is hardly convincing evidence that one is constant and the other is rising.
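The rounding comparison above can be checked directly from the spacings quoted earlier in the thread (3.05, 3.53, 3.56, 3.46, 3.62 pixels by eye, versus 2.63, 3.57, 3.45, 3.83, 3.81 pixels from the Fourier analysis):

```python
# Round both sets of spacings to the nearest pixel and compare.
by_eye  = [3.05, 3.53, 3.56, 3.46, 3.62]   # by-eye spacings quoted above (px)
fourier = [2.63, 3.57, 3.45, 3.83, 3.81]   # background ringing, Fourier (px)
round_eye = [round(s) for s in by_eye]      # [3, 4, 4, 3, 4]
round_fft = [round(s) for s in fourier]     # [3, 4, 3, 4, 4]
print(sorted(round_eye) == sorted(round_fft))  # True: same multiset of values
```

At one-pixel precision the two sets contain exactly the same values, which is the point being made.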
As for the comment on why we didn't measure them ourselves: we asked for the particles and were never sent them. But this is not the point. If the gains are so high that the microscope can't even track the surface without ringing, it will clearly ring when it goes over the curvature of the nanoparticle. Just look at the simulations in Figure 2. How are we meant to inspect features at this scale when it is ringing at the same spatial frequency to within a pixel?
The observation that it is hard to see nanofeatures, and that the only images in which they are seen are images where the whole image shows ringing, is completely consistent with the hypothesis that there are no nanofeatures and you are just seeing ringing.
Out of curiosity, Gustav, what is your field? I ran a quick Google search and couldn't find anyone in the world with the surname Dhror.
RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 24 Dec 2014 at 02:33 GMT
Thanks again for your quick reply.
What I find inconsistent is that your whole analysis in Fig.4 assumes that there is background ringing on the scanned surface of the nanoparticles. Under this assumption, you arrive at the obvious conclusion, that no signal can be discerned when there is noise with a broad spectrum of frequencies.
Because this assumption is not clearly stated in the paper, your figure may be misleading to the readers.
But the assumption may be inappropriate, as Stellacci showed that on the nanoparticles the features do not change with tip speed whereas noise does, as your Fig.4 also shows.
My field is in SAMs and also metal nanoparticles.
RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 24 Dec 2014 at 05:30 GMT
First of all we clearly state what we did in the paper. We also then have a whole section in the supplementary information on how the figure was generated, and further have placed online all of the data and the code used to generate Figure 4. I fail to see how we could have been more open with what we did.
Secondly, there is no assumption. The images are mostly substrate; we use the current channel, so the feedback ringing dominates; we look at the frequencies and see if they are consistent with those measured by Stellacci for the nanoparticles. We see one large peak, and his nanoparticle measurements are also inside that peak. We then reported this as Figure 4.
You admit it is ringing on the flatter substrate? You also have agreed that curved surfaces cause the feedback to ring more. How is it possible that there could be no ringing on the much more curved nanoparticles?!
To say "Stellacci showed that on the nanoparticles the features do not change with tip speed whereas noise does" is simply bizarre. This is exactly the analysis we are criticising in Figure 4. We clearly present his data along with our new analysis. How can you claim we are misrepresenting it when we plotted it all? His measurements are unreliable as they are hand-picked, so we used the correct mathematical tools to show that everything he claims to measure sits within the frequencies from the ringing.
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 24 Dec 2014 at 10:09 GMT
Thanks again for your comment.
You have indeed been open about what you did, but it is not clear in the paper that for Fig.4 you worked with the assumption that there is feedback ringing when imaging the surface of the nanoparticles, and that this ringing is that of the average of the full image, which is mostly substrate. This assumption is not justified: the curvature of the top of the nanoparticles changes smoothly, whereas on the substrate there can be sudden steps and other sorts of rugosity.
Handpicked measurements are fine if the error bars are given. We already discussed the number of pixels in the comments above.
You used the correct mathematical tools to arrive at the obvious conclusion from your assumption. As noted from the first post, you have to compare the signal on the nanoparticles with the noise on the nanoparticles, not the noise on the substrate.
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 26 Dec 2014 at 22:20 GMT
The responses from GDhror to Julian above are rather similar in style to those we encountered from an unregistered commenter at the PubPeer thread for the 'preprint'/arXiv version of this paper. I am not suggesting that GDhror and the unregistered PubPeer contributor are one and the same, but it is interesting that the same type of issue arises: a comment/question is raised; we respond; our response is ignored... and precisely the same comment/question is raised again.
Julian has carefully addressed the criticisms of GDhror. A key element of the debate which GDhror has thus far neglected to mention is that when Stellacci collaborated with other STM groups who set up the measurement correctly (as discussed at length over at the PubPeer thread), they could not resolve stripes. The image at the link below is a comparison of ostensibly striped nanoparticles with a control ("non-striped") sample. I leave it up to the reader to ascertain which is which.
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 27 Dec 2014 at 00:38 GMT
Thank you for your comment, and for bringing the PubPeer thread to my attention. It is indeed really interesting that this issue has been raised before!
I have to disagree that Julian has addressed my criticisms. He has actually ratified the problem with Fig.4.
The assumption of feedback ringing on the nanoparticles is not clearly stated in the paper. Once one takes this assumption, the conclusion is trivial and of little use.
It is in some way "funny" that on PubPeer you dodged the same criticism. From your comments there, it seems that your line of defense is to argue that whoever criticizes your work ignores your careful and proper responses. Has it crossed your mind that perhaps your responses haven't been as careful and proper?
By the way, according to some PubPeer comments, your assertion that other STM groups have not resolved stripes is not shared.
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 27 Dec 2014 at 03:16 GMT
I think we'll just have to agree to disagree as to who "dodged criticism" at PubPeer. We spent a great deal of time and effort at PubPeer addressing comments clearly and comprehensively, as was recognised by many of the contributors there.
There is a very simple and fundamental issue with the data and the associated analysis (by Stellacci et al) which we critique in relation to Fig. 4. At no point did Stellacci et al. consider the most basic "ground rule" in scanning probe microscopy, namely: "If I measure the *same* area with different scan speeds, do I see the same features?" They didn't do repeat measurements on the same area when they could very easily have done this. This is STM 101.
Moreover, from the archive of data which Francesco Stellacci provided, it is clear that the images were of exceptionally poor quality *and* that the data were analysed using offline zooms (into noise). See http://www.nottingham.ac.... . I've said it before, but the conclusions which were drawn from these images would not pass muster in an undergraduate project report.
"By the way, according to some pubpeer comments, your assertion that other STM groups have not resolved stripes is not shared."
...and according to the referees of our paper we were entirely justified in pointing out that stripes had not been resolved. But there's no need to proceed on the basis of arguments from authority. The evidence is right there in a recent paper from Stellacci et al. In my previous comment I provided a link to a comparison of an ostensibly striped nanoparticle sample to that of a non-striped control sample. Look at the comparison of the images in that link...
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 30 Dec 2014 at 22:16 GMT
Thank you for answering.
I had at first decided that it was not worth continuing the discussion, as we have both made our points, but I can't help pointing out a comment of yours that anyone can see is false:
"They didn't do repeat measurements on the same area when they could very easily have done this. This is STM 101."
Of course they did! Anyone can look at the paper cited in ref. 24 of your article (the paper from Jackson et al.). Fig. 3 shows images of the same area at different scan speeds.
I don't know where the cropped images from your link come from. But you should point to the various papers from Stellacci's group where nanoparticle features are compared to controls (Biscarini et al., for example). Hand-picking a couple of images and stripping them out of context doesn't have the hallmarks of a strong argument.
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 01 Jan 2015 at 12:24 GMT
OK, fair point. I should have been clearer - apologies. Let me clarify...
First, it's important to note that those images in Fig. 3 of Ref. 24 are of exceptionally poor quality.
Second, the images are offline *zooms* of a region -- they are not the original data. As you pointed out, I said "They didn't do repeat measurements on the same area when they could very easily have done this. This is STM 101." What I meant was that the area should have been scanned under identical conditions (other than scan speed) -- i.e. gains, pixel density, etc. -- and the original, uninterpolated data used. Offline zooms "after the event" again violate STM 101.
Third, the "spacings" are quoted to sub-Angstrom resolution. For all of the reasons we describe in our paper, this is simply wrong. No-one can measure those spacings on the basis of those images (and the extreme interpolation) and get a 10 pm error bar. It's a physical impossibility.
Fourth, the contrast in the images is saturated (Fig. 4 is even worse for this - completely saturated contrast *and* huge level of interpolation).
Fifth, the authors don't show line profiles to indicate just how they ascertained the "spacing" of the features.
I could go on, but I hope that clarifies matters.
Happy New Year.
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 01 Jan 2015 at 12:31 GMT
Sorry, I forgot to respond to the point about the comparison of images. The image is taken *directly* from this paper -- http://pubs.rsc.org/en/co... (i.e. Ong et al. Chem. Comm. 2014). I haven't modified it in any way -- no cropping.
There's no taking it out of context. The authors clearly state that one of those images shows stripes whereas the other doesn't. Please read the paper and then get back to me if you think I've unfairly represented the authors' conclusions or their data. As I say, the images are taken directly from the paper and not modified in any way at all.
Does this mean that you can't ascertain which of the images shows stripes...?
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 18 Jan 2015 at 14:00 GMT
I have had a look at the paper, and I can clearly tell that you misrepresented the authors' conclusions.
BOTH images show stripes, but with different spacings. In the top image (homoligand nanoparticles), the spacings are about 0.7 nm while for the bottom image (nanoparticles with mixed ligands) the spacings are approximately 1 nm. The authors show a thorough analysis of these spacings using various methods.
I am sorry to say that there is a rather obvious biased trend in your comments, and that this misrepresentation of other people's findings unfortunately did not occur only in this thread.
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 19 Jan 2015 at 11:37 GMT
Stellacci et al.'s claim right from the start of this work (and throughout) is that striped domains form on particles terminated with two different types of ligand, and that stripes do not form on particles with a single ligand type.
Note that they describe NP1 as a control sample! In what sense is it a control if it also shows stripes? I must admit to getting rather tired of being accused of misrepresenting data/arguments when no such misrepresentation has taken place.
Please explain to me why Ong et al. refer to NP1 as a control, if stripes are also expected for that sample.
Moreover, we again have the issue of error bars to deal with. They are absolutely critical when it comes to Stellacci et al.'s arguments re. feature spacing. Having not been able to reproduce the striking images of stripes in their earlier work (see the "Stripes across the ages" figure in this: https://raphazlab.wordpre...) -- because the more recent measurements (from ~ 2012) avoid the ringing artefacts we discuss in our paper -- Stellacci et al. now have to resort to arguments based on feature spacing.
But if we are to compare a value of 0.7 nm with a value of 1.0 nm **we have to know the error bar**. (Otherwise our measurements are, to quote Pauli somewhat out of context, "not even wrong"). Look at the breadth of the shoulder associated with NP2 (or indeed NP1) in the power spectral density plots shown in Fig. 3 of that Ong et al. paper. What do you think this tells you about the ability to be able to robustly compare a value of 0.7 nm with a value of 1 nm? *Ong et al.'s own error analysis* shows that the values for the ostensibly striped and the non-striped particles, **agree within error**! But you don't have to even do the error analysis -- you can clearly see from Fig. 3 that the spacings for the control sample and the ostensibly 'striped' sample are indistinguishable.
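The "agree within error" test invoked above can be made concrete. The comparison below is a minimal sketch: the two spacings are the 0.7 nm and 1.0 nm values under discussion, but the error bars are illustrative placeholders, not values taken from Ong et al.

```python
import math

# Two measurements are only distinguishable if their difference exceeds
# the combined standard uncertainty of the pair.
def agree_within_error(a, da, b, db):
    """True if |a - b| is within one combined standard uncertainty."""
    return abs(a - b) <= math.hypot(da, db)

# Illustrative error bars of +/- 0.25 nm make 0.7 nm and 1.0 nm overlap:
print(agree_within_error(0.7, 0.25, 1.0, 0.25))  # True
# With much tighter +/- 0.05 nm error bars they would be distinguishable:
print(agree_within_error(0.7, 0.05, 1.0, 0.05))  # False
```

This is why the breadth of the PSD shoulders matters: broad shoulders imply large uncertainties, and with large enough uncertainties a 0.7 nm and a 1.0 nm spacing cannot be told apart.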
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 22 Jan 2015 at 01:22 GMT
Thank you for your response. I have to say that I am really surprised at your seemingly uncanny ability to twist arguments to fit your thinking.
The paper clearly mentions that "NPs present stripe-like domains in both cases". Your comment above ("The authors clearly state that one of those images shows stripes whereas the other doesn't.") is therefore not true.
As for the spacings, anyone looking at Fig.3 can see the differences in the mean and the error bars between NP1 (homoligand, or control) and NP2 (mixed ligands). The PSDs in Fig.3b show a shift of the shoulder. The PSD fits show the spacings, whose difference is statistically significant. This is statistics 101. You can also look at the topography images of NP1 (Fig.S7) and NP2 (Fig.1). The differences in spacing are obvious.
On the basis of the evidence in the paper, your sentence, "the spacings for the control sample and the ostensibly 'striped' sample are indistinguishable", is not tenable.
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 27 Jan 2015 at 04:46 GMT
For some reason, I am not getting alerts when a comment is posted so I missed your reply until now.
As I have said elsewhere in the Comments section of this paper (in response to Wei Chen), there is absolutely no misrepresentation or twisting of arguments in what I have stated. I have read Stellacci et al.'s papers in great detail. I kindly suggest that you do the same before making unfounded accusations.
It beggars belief that your (and Stellacci et al.'s) argument is that the homoligand-terminated `control' particles are now meant to also show stripes, when the very clear thesis throughout Stellacci et al's work is that only mixed ligand-terminated particles show stripes. This sampling of just a few of Stellacci et al.'s papers makes the point very clearly indeed: http://www.nottingham.ac....
To now argue that the 'control' sample is also meant to show stripes is beyond disingenuous.
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 18 Jan 2015 at 13:48 GMT
Sorry for the delay in answering.
I have to disagree with you. The paper we are referring to is from 2006. Imaging curved topographies on nanoparticles with the typical STMs available then was really challenging. One should avoid offline zooms, but they are not really a problem if they make some things easier, like measuring spacings.
Then, the spacings are quoted with proper error bars of a few nm, not Angstroms (your "physical impossibility" comment is just ludicrous!). You can also get the numbers yourself. I did, by printing zoomed-in versions of the images of Fig.3 and using a ruler, and got the numbers quoted.
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 19 Jan 2015 at 11:18 GMT
The issue is the **error bars**. (And, just to be clear, I mean error in the sense of experimental uncertainty, not a mistake...). Of course you can just measure the values with a ruler -- this is precisely what you should do (and what we did here: https://raphazlab.wordpre... finding that the measurements quoted were incorrect).
But, as is discussed at length in 1st year physics laboratory modules, the ruler has an intrinsic error bar associated with it.
The point is that the paper in question quotes measurements to three significant figures (where the third sig. fig. is at the 10 pm level) *in the images* shown in Fig. 3. Each image is a highly interpolated offline zoom of a *much* larger image and where the intrinsic width and noisiness of the features in this case makes locating the 'peak-to-peak' separation (such as it is) very susceptible to observer bias. The 'accuracy' quoted in that paper (i.e. quoting a measurement for an individual image to the tens of picometre level) represents a few *hundredths* of a pixel in the original image! The Nyquist limit is 2 pixels.
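The "few hundredths of a pixel" figure follows directly from the numbers discussed earlier in this thread. This back-of-envelope sketch assumes, as stated above, that a ~10 nm particle spans ~26 pixels in the original image:

```python
# Express a quoted 10 pm error bar in original-image pixels, assuming
# (per the discussion above) a ~10 nm particle spanning ~26 pixels.
particle_nm, particle_px = 10.0, 26
pixel_pm = particle_nm * 1000 / particle_px   # pixel size: ~385 pm per pixel
error_px = 10 / pixel_pm                      # the 10 pm error bar, in pixels
print(round(error_px, 3))                     # 0.026 pixels -- orders of
                                              # magnitude below the 2-pixel
                                              # Nyquist spacing limit
```

A quoted precision of 0.026 pixels is what the sub-Angstrom error bars amount to in the uninterpolated data.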
So, my statement about the physical impossibility of measuring 10 pm spacings from the original image is far from ludicrous. Unless, of course, you're suggesting that the work of Stellacci et al. somehow manages to violate basic physical and mathematical principles such as Shannon's sampling theorem?
The level of interpolation is clear to see from the images in Fig. 3 themselves - they are entirely "washed out" due to the lack of high frequency Fourier components (resulting from the interpolation process).
It's very easy indeed to massage an image until it gives you the features you want. Here's an example where we took images of entirely unfunctionalised particles and then used extreme offline zooms and interpolation, just like Jackson et al., to produce stripes: https://raphazlab.wordpre...
RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: Problems with Fig.4
Replied on 19 Jan 2015 at 11:50 GMT
One final thing, because it's an entirely spurious point that's been made before a number of times (not least by an unregistered peer in the PubPeer thread for the preprint version of this paper).
Yes, imaging features on curved nanoparticle surfaces with STM is tricky and prone to artefacts. Therefore, it is important for a researcher to do their utmost to avoid those artefacts (!) instead of, as Jackson et al. do, setting up their experiment to deliberately enhance the artefacts. I made a very simple point above, which you ignored. (The same point was also made by one of the referees of our paper -- see link below.) Why set up an STM experiment to look for stripes on nanoparticles with the loop gains so high that imaging the background substrate produces ringing? Why not turn the gains down to the point where there is no ringing on the substrate and then look for stripes...? The argument that "Oh, it's difficult to scan nanoparticles" is no excuse for sloppy imaging protocols. Quite the opposite.
Moreover, it was no more difficult to scan these particles in 2006 than it is now (or, indeed, in 1996). The problem lies not in any deficiency in the instrumentation. **It lies in the experimental methodology**, which, as one of the referees of our paper pointed out (again, see link below), contains many fundamental flaws. In fact, that referee recommends the work as an example of the pitfalls to avoid when doing STM measurements.
Link to the referees' reports on the paper: https://raphazlab.wordpre...
Gustav and PloS One commenting policies
Posted on 05 Jan 2015 at 10:05 GMT
I don't think that this thread is consistent with PloS One commenting policies:
3. Who Can Contribute?
All registered users are able to add Comments to any article. Anyone can register as a user. Users are required to unambiguously identify themselves with their first and last names, their geographic location, and a valid e-mail address in order to register. First and last name and geographic location are made public, however e-mail addresses are private. The e-mail address and other registration fields that the user chooses to make private will be kept strictly confidential by PLOS staff unless otherwise indicated. Discussion and ratings are not anonymous. PLOS reserves the right to suspend the privileges of any registered user. Any registered user who is found to have provided false name or location information will have their account suspended and any postings deleted.
RE: Gustav and PloS One commenting policies
Replied on 02 Feb 2015 at 12:18 GMT
All contributions via the PLOS commenting feature must conform to our guidelines; we reserve the right to remove comments that do not meet these standards. PLOS ONE staff have posted a reminder about the PLOS Guidelines for Comments here: http://www.plosone.org/an...