
A benchmark driven guide to binding site comparison: An exhaustive evaluation using tailor-made data sets (ProSPECCTs)

Fig 10

Evaluation of different binding site comparison tools with respect to the data set of Barelier et al. [64].

A-C) ROC curves for residue- (A), surface- (B), and interaction-based (C) comparison methods. Each tool's name is colored according to its corresponding ROC curve, and the tools are sorted in descending order of AUC. (A) The thin red line represents the ROC curve for SiteAlign when using the distance d1. (B) Thin lines represent the ROC curves for ProBiS, Shaper, Shaper(PDB), VolSite/Shaper, VolSite/Shaper(PDB), SiteEngine, and SiteHopper when using the scoring schemes SVA, FitTversky (color), FitTversky (color), RefTversky (color), Tanimoto (fit), distance, and ShapeTanimoto, respectively. (C) The thin line represents the ROC curve for IsoMIF with the taniMW score. D-F) EFs for residue- (D), surface- (E), and interaction-based (F) comparison methods. A linear color gradient ranging from white (highest value) through gray to black (lowest value) was applied to the EFs at different percentages of the screened data set.
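For orientation, the two metrics reported in this figure can be computed from a ranked list of comparison scores with binary active/inactive labels. The sketch below is illustrative only: the scores and labels are made up, not data from the benchmark, and it uses the standard rank-based AUC and the common enrichment-factor definition (fraction of actives recovered in the top x% relative to random expectation), which may differ in detail from the exact implementation used by the authors.

```python
# Illustrative sketch of ROC AUC and enrichment factor (EF) computation.
# Scores and labels are hypothetical example data, not from the paper.

def roc_auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney U) formulation:
    the probability that a randomly chosen active outscores
    a randomly chosen inactive (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def enrichment_factor(scores, labels, fraction):
    """EF at a given fraction of the screened data set:
    (active rate in the top fraction) / (active rate overall)."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    n_top = max(1, round(fraction * len(ranked)))
    actives_top = sum(y for _, y in ranked[:n_top])
    actives_total = sum(labels)
    return (actives_top / n_top) / (actives_total / len(ranked))

# Hypothetical similarity scores for 8 site pairs; 1 = true match.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0,   0,   0]
print(roc_auc(scores, labels))                    # ~0.933
print(enrichment_factor(scores, labels, 0.25))    # ~2.667
```

With these toy numbers, the top 25% of the ranking (2 of 8 pairs) contains only actives, so the EF is (2/2)/(3/8) ≈ 2.67; an EF of 1 would indicate no enrichment over random ordering.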


doi: https://doi.org/10.1371/journal.pcbi.1006483.g010