Protein nanobarcodes enable single-step multiplexed fluorescence imaging

Multiplexed cellular imaging typically relies on the sequential application of detection probes, such as antibodies or DNA barcodes, which is complex and time-consuming. To address this, we developed protein nanobarcodes, composed of combinations of epitopes recognized by specific sets of nanobodies. The nanobarcodes are read in a single imaging step, relying on nanobodies conjugated to distinct fluorophores, which enables a precise analysis of large numbers of protein combinations. Fluorescence images of the nanobarcodes served as input for a deep neural network, which identified the proteins with high precision. We thus present an efficient and straightforward protein identification method that is applicable to relatively complex biological assays. We demonstrate this with a multicell competition assay, in which we successfully used our nanobarcoded proteins together with neurexin and neuroligin isoforms, thereby testing the preferred binding combinations of multiple isoforms in parallel.


Replies to the Reviewer comments
Reviewer #1: I have read the authors' responses and the revised manuscript with a lot of interest. I am particularly impressed with the authors' thorough implementation of functional assays to test the effect of their nanobarcodes on the different proteins they image. I would like to thank the authors for their work on the revised manuscript - which must have been extensive - but I can attest now that the revised manuscript is much better poised for publication and that their technology will have a profound impact in the field.
We thank the Reviewer for his/her comments.

Reviewer #3:
The authors have carried out an in-depth revision of the manuscript. They have addressed most of the reviewers' comments and have greatly improved the description of the analysis method and the accessibility of their code. The GitHub repository where the code is shared is well documented. The notebooks are easy to use and run smoothly when following the set-up described on GitHub. The code is well documented, commented, and clearly partitioned into functions. The functions and their parameters correspond to what is described in the manuscript and the methods section.
We thank the Reviewer for considering our code with careful scrutiny, and are glad that the Reviewer found it convenient to use.

Major comment:
The effect of the spatial resolution on the reliability of the nanobarcoding approach and the deep-learning-based analysis should be better demonstrated and described in the results section. A very short paragraph describing the aspect of having only one nanobarcode per pixel was included in the discussion, but I think that this should be described in more detail already with the presented results. For example, in the paragraph describing the results about the Nrxn and Nlgn interactions, it is not clear to me how pairs are identified and what is considered to be an interaction.
The interactions measured here are indeed between Nrxn and Nlgn. However, these molecules are not fluorescently labeled, and their binding to each other is not visualized. What we actually visualize fluorescently is the identity of the respective pairs of cells. These cells are labeled by different nanobarcoded proteins.
If I understand it correctly, only a single nanobarcode can be associated with one pixel, so that the spatial resolution of the microscope and the distance between interacting proteins will influence the precision of the analysis approach. Was this controlled for the Nrxn and Nlgn pairs? How would the reliability of the technique be influenced by interacting proteins that are closer than the resolution limit of the chosen microscope?
As explained above, it is the cells that are visualized and not the Nrxns and Nlgns. The cells are far above the diffraction limit of our microscope. Interacting proteins that are closer than the resolution of the microscope would lead to a mixed identity for the respective pixel. Therefore, subcellular investigations using nanobarcodes should aim for higher resolution, as we indicated in the paper (page 12, lines 287-292).

Notebooks:
Both notebooks fail to load the network weights on a CPU machine. I suggest adding an 'if torch.cuda.is_available()' statement when loading the weights. When False, the torch.load function should have the argument map_location=torch.device('cpu').
We thank the Reviewer for bringing this issue to our attention. We modified the call to the torch.load function to automatically map to the correct device. The fixed version of the notebook has already been pushed to the GitHub repository.
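For reference, a minimal sketch of the device-agnostic loading pattern the Reviewer suggests; the checkpoint file name and its contents are placeholders, not the actual network weights:

```python
import torch

# Save a small placeholder checkpoint so this example is self-contained.
torch.save({"w": torch.zeros(3)}, "weights.pth")

# Select the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# map_location remaps tensors saved on another device (e.g. a GPU)
# onto the current one, so the checkpoint also loads on CPU-only machines.
state = torch.load("weights.pth", map_location=device)
```

Passing the device via map_location makes the same notebook cell work on both GPU and CPU machines without branching.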
When using the environment as provided, I get the error message "Error displaying widget" when running the last cells of both notebooks. Make sure the environment includes the libraries for displaying the widgets, if the widgets are necessary.
This was due to a known compatibility issue between the tqdm notebook widget and the recent Jupyter update. We rolled back to the text-based progress bar of tqdm, which circumvents the issue. The fixed version of the code has already been pushed to the GitHub repository.
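To illustrate the swap: the plain tqdm import below renders progress as text and needs no ipywidgets support, unlike tqdm.notebook (the loop body is a trivial placeholder):

```python
from tqdm import tqdm  # text-based progress bar; tqdm.notebook requires ipywidgets

total = 0
# tqdm wraps any iterable and prints plain-text progress to stderr,
# bypassing the Jupyter widget machinery entirely.
for i in tqdm(range(100)):
    total += i
```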

Minor comments:
In Fig. 2a, it would be more helpful to have a less simplified version of the designed neural network, highlighting the principal structural elements of the network.
We agree with the Reviewer and have now added more structural details of the network to Figure 2A. We had already provided all of the details in Supplementary Figure 18.

Fig. 3b: please describe the red boxes in the legend and make sure that they are useful (it is not clear to me with this version of the figure why they have been added to the right panel).
Since Fig. 3B does not contain red boxes, we think the Reviewer is referring to Fig. 4B (previously Fig. 3). The boxes were already present in the previous version of the manuscript. For our revised manuscript we changed the dashed lines of the red boxes into solid lines, to make the boxes more apparent. We have now added the following description to the legend of panel B: "(red boxes depict typical cell contacts)".

Fig. 3c: define the green and magenta colors in the legend.
The green and magenta colors are already defined in the legend of Fig. 4C: "Nanobarcode-proteins are shown in green (anti-ALFA-Atto488). Nrxn or Nlgn isoforms are shown in magenta (anti-HA & anti-goat-Cy3)."

Fig. 2b: in the merged input column, the contrast is very low, making it very difficult to compare the signal with the NN output.
We enhanced the contrast of the merged input images and replaced the images in the left column of Fig. 2B.

Lines 99-100: "The approach appeared to function well, as illustrated by the images in Fig 1F, in which the nanobarcodes can be easily differentiated by the human observer." I suggest rephrasing: "As illustrated in Fig 1F, the nanobarcodes can be easily differentiated by the human observer."