A Large-Scale Assessment of Nucleic Acids Binding Site Prediction Programs

Computational prediction of nucleic acid binding sites in proteins is necessary to disentangle the functional mechanisms of most biological processes and to explore binding mechanisms. Several strategies have been proposed, but the state-of-the-art approaches display great diversity in i) the definition of nucleic acid binding sites; ii) the training and test datasets; iii) the algorithmic methods underlying the prediction strategies; iv) the performance measures and v) the distribution and availability of the prediction programs. Here we report a large-scale assessment of 19 web servers and 3 stand-alone programs on 41 datasets including more than 5000 proteins derived from 3D structures of protein-nucleic acid complexes. Well-defined binary assessment criteria (specificity, sensitivity, precision, accuracy, etc.) are applied. We found that i) the tools have greatly improved over the years; ii) some of the approaches suffer from theoretical defects and there is still room for sorting out the essential mechanisms of binding; iii) RNA binding and DNA binding appear to follow similar driving forces and iv) dataset bias may exist in some methods.


Author Summary
Nucleic acid binding sites in proteins are functionally important in a majority of biological processes. Computational prediction of these binding sites can help the biological community take a first step toward understanding nucleic acid binding proteins. The nucleic acid binding site prediction programs and web servers that have emerged during the last decade show great diversity in many aspects. Besides, some related questions, such as i) can RNA binding sites be distinguished from DNA binding sites? and ii) can RNA and DNA binding sites be predicted by the same model?, have not been fully answered. Here, we benchmarked 19 web servers and 3 stand-alone programs on 41 previously reported data sets and analyzed the prediction results from different angles to give a more complete view of how well these programs perform and how they can be used. We hope to establish some key points for unbiased comparison in the further development of similar prediction programs.

Introduction
Protein-nucleic acid (RNA/DNA) binding plays crucial roles in most biological processes [1], and the detection of the functional sites/regions in proteins is an important step toward a structural understanding of the molecular mechanisms of these processes. Compared with the vast number of protein-nucleic acid interactions in bio-systems (Supplementary Note 1 in S1 Text), the experimental determination of binding sites is difficult, demanding and not always feasible. Hence, computational prediction of nucleic acid binding sites has been an established field in computational and molecular biology over the past two decades.
The prediction approaches are diverse in many aspects, which results in controversies over technical details and makes totally fair comparisons difficult [2]. Further, previous reviews [3][4][5][6] assessed the available programs only at small scale. Currently, RNA- and DNA-binding residue predictions are either treated as different problems or trained with different data sets within the same model [7][8][9][10][11]. However, whether RNA- and DNA-binding proteins (Supplementary Note 2 in S1 Text) exploit different driving forces is not established, and it is known that some proteins do bind both types of nucleic acids. Very recently, Yan and coworkers also noticed that prediction programs are unable to distinguish between DNA and RNA binding proteins and concluded that one should compare RNA- and DNA-binding site predictors together [6]. The definition of a nucleic acid binding residue is not standardized, with definitions ranging from distance cutoffs [8,9,[12][13][14][15] to the enumeration of non-covalent contacts [16][17][18][19] (Supplementary Note 3 in S1 Text). This leads to ambiguities in the problem definition and to variations in prediction accuracy. Besides, tens of training and test sets of variable sizes have now been curated by developers during the development of computational approaches. How to avoid bias in a dataset is nontrivial. Further, the assessment criteria are still arguable, e.g. whether all residues from different proteins should be compared together (Supplementary Note 4 in S1 Text). In addition, programs differ in their approaches (Supplementary Note 5 in S1 Text), making a fair assessment difficult. Finally, the distribution and ease-of-use of the programs greatly determine how helpful they are to users in the biological community.
In this report, we present a large-scale assessment of 19 currently available web servers and 3 stand-alone programs for nucleic acid binding site prediction (24 predictors in total) on 41 different datasets derived from structures of protein-nucleic acid complexes in the PDB, including more than 5000 proteins. We use a hierarchical definition of binding sites and various assessment criteria for reference. We analyze differences i) between RNA binding site predictions and DNA binding site predictions; ii) between binary predictions and continuous scores; iii) between sequence-based and structure-based predictions; and finally iv) between original and updated programs. This large-scale analysis should be helpful to both developers and users.

Results
The prediction of nucleic acid binding sites is usually determined by three main factors: the definition of a binding site, the assessment criteria and the datasets. Currently, there is no universal definition of a binding site, and a minimum distance cutoff between interacting residues and the nucleic acid is most frequently applied. However, different distance cutoffs lead to accuracy variations, while a single cutoff biases certain prediction programs. Fig 1A displays a real-world case where a distance cutoff of 6.0 Å yields twice as many binding sites as a cutoff of 3.5 Å. S1 Table and S12 Fig show that this difference reflects a general distribution rather than a rare case. And, in S2 Table, the data show that with a cutoff of 3.5 Å used for prediction and 6 Å for the definition of the binding sites, the final specificity is 100% but the sensitivity is as low as 51-62% (high false negative rate), with a total accuracy (ACC) decrease down to ~90% (details are discussed in Supplementary Note 6 in S1 Text). To fully capture the accuracy variance resulting from the distance cutoff, a hierarchical definition with distance cutoffs ranging from 3.5 to 6 Å in 0.5 Å steps was used and the distributions of the accuracies were plotted. Besides, the prediction accuracy highly depends on the dataset used for testing. Normally, the larger the training set, the better the prediction model, thus making it difficult to compare different programs. However, a good prediction program should show stable accuracy on all the datasets, a feature that cannot be achieved by biased predictions even with a large training set (Supplementary Note 7 in S1 Text). We used 41 datasets to accentuate the possibility of biased predictions by the programs. Finally, different criteria are measured to show different aspects of the programs.
The web servers assessed include BindN [8], BindN+ [9], RNABindR [20], RNABindRPlus [21], DBS-Pred [12], DBS-PSSM [13], KYG [14], PRBR [22], PPRInt [23], DNABINDPROT [24], ProteDNA [25], DISPLAR [10], DR_bind1 [26], aaRNA [27], RBscore [28], RBRDetector [29], DNABind [30], xypan [31] and RNAProSite (lilab.ecust.edu.cn/NABind/), while the stand-alone programs are Predict_RBP [17], PRNA [32] and RBRIdent [33]. Previously reported prediction approaches are summarized in Table 1. The slow programs DR_bind1 and RBRDetector were only tested on part of the datasets and the results are provided in the supplementary information. metaDBSite [34] shows the same results as BindN and was not tested explicitly.

Fig 1. A) '+' marks binding sites while '-' marks non-binding sites. With a distance cutoff of 6.0 Å, 40 residues are defined as binding sites, twice the number obtained with a cutoff of 3.5 Å. B) Two metrics to measure prediction accuracy in terms of AUC. The old metric mixes all the residues from all the proteins together for comparison, then measures AUC on the mixed data. The metric in this work measures AUC for each protein and averages the AUC values weighted by protein length. C) A scheme illustrating the irrelevant comparison between binding sites of one protein and non-binding sites of another protein. As proteins A and B may differ in the size of their nucleic acid binding region and in binding affinity, they may have different energy funnels. The dashed region shows the binding region of the two proteins. Binary assessment, which mixes all residues together, will certainly include comparisons between non-binding sites of protein A and binding sites of protein B, shown by the double-arrowed line.

Overall prediction performance of accuracy and stability
As demonstrated in Fig 1B and 1C, the standard way to measure the area under the receiver operating characteristic (ROC) curve (AUC) would include irrelevant comparisons between binding sites on one protein and non-binding sites on another protein. We assessed the nucleic acid binding site prediction ability by calculating the AUC for each protein and averaging the AUCs over a dataset (wAUC or mAUC, Supplementary Note 4 in S1 Text). Fig 2 shows the general assessment results in terms of wAUC (mean AUC weighted by protein length), while mAUC (mean AUC) and tAUC (total AUC, comparing all proteins together) are found in S1 Fig. A successful prediction program should demonstrate stable predictive ability on all the criteria of the assessment. Three criteria (MAVR, MAV and CAVR, described in Methods), which quantify the distance cutoff dependent variation in accuracy (wAUC), were used to assess stability. It can be deduced from Fig 3A that the predictions of KYG, RNAProSite, RNABindR, ProteDNA, RBscore, DISPLAR and DNABINDPROT are less dependent on the distance cutoff (mAUC- and tAUC-based results can be found in S3 Fig and S4 Fig). Generally, programs tend to favor the distance cutoff used during training (Supplementary Note 11 in S1 Text). In terms of data sets, standard deviations of AUC (sAUC) were measured as criteria to assess stability, as shown in Fig 3B. The accuracies of DBS-PSSM, RBscore, aaRNA, DBS-Pred, ProteDNA and DNABINDPROT are more stable when varying the assessment data sets. Considering both prediction accuracy and stability, the 'barrel effect' applies to predictions with dataset bias, and a program is best assessed by its minimum accuracy value over all the datasets. In Fig 3C, RNA binding site predictors are assessed on RBP and DNA binding predictors on DBP; the minimum wAUC over all distance cutoffs is taken as their prediction ability. Fig 3D shows the minimum wAUC over all the datasets. A more complete list can be found in Table 2.

Detailed comparisons
Some programs tested here are designed to predict only RNA binding sites, while others predict only DNA binding sites. However, when tested together, we find that some of the programs show predictive ability on both types of proteins. For instance, RNABindR, KYG, aaRNA, RNAProSite and RBscore were developed on RBP and never trained on DBP, but they also show predictive ability on DBP. Among sequence-based methods, RNABindR even demonstrates higher prediction accuracy than most DNA-binding site predictors. aaRNA and RBscore show even higher accuracies for DBP than for RBP. The RNA binding site prediction mode of BindN+ also shows prediction ability on DBP, but its DBP prediction mode has a much lower accuracy on RBP.
Together with the problem of binding site prediction, it is very interesting to find out whether any program can discriminate RNA binding residues from DNA binding residues. This discrimination requires three assumptions: i) residues from different proteins can be compared; ii) RNA and DNA binding are driven by different driving forces; iii) such driving forces have been captured by current programs. Previous work from Yan et al. [6] analysed the cross-prediction between RNA- and DNA-binding residue predictors and concluded that they are unable to properly separate DNA- from RNA-binding residues. We performed a more explicit large-scale test and assessed this discrimination by mixing the DNA binding residues of one data set with the RNA binding residues of another data set. According to Fig 4, several machine learning based approaches display a discriminative ability for RNA binding residues, including PRNA, Predict_RBP, RNABindRPlus and RBscore_SVM. However, this cannot guarantee predictive ability, since all of these programs have AUC <0.5 on some data sets, which means they favor the wrong type of residue. Therefore, we come to the same conclusion as Yan et al. [6], i.e. that none of the currently existing predictors can properly distinguish DNA-binding residues from RNA-binding ones. Further, we find that the programs PRNA, Predict_RBP, RNABindRPlus and RBscore_SVM have similar distributions in this test, while their wAUC distributions in Fig 2 are also similar. This result implies that these methods have similar prediction accuracies and similar preferences on datasets.

Fig 2. General accuracy distribution based on wAUC. wAUC is used as the assessment criterion for all programs on all datasets with the hierarchical definition of binding sites from 3.5 to 6 Å. wAUC is the weighted arithmetic mean of AUC and is considered as the criterion to assess the prediction accuracy of the predictors. It is plotted in rainbow colors from the highest accuracy in red to the lowest accuracy in blue. Each grid in the plot shows the wAUC of a predictor (subtitle) on a certain data set.
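The discrimination test above reduces to computing an AUC over a mixed pool of residues: RNA-binding residues (labeled 1) from one set against DNA-binding residues (labeled 0) from another, ranked by a predictor's scores. A minimal sketch using the rank-sum (Mann-Whitney) formulation of AUC, on purely hypothetical per-residue scores:

```python
# Cross-type discrimination test sketch (hypothetical scores; the real
# assessment uses the programs' per-residue outputs). AUC < 0.5 means the
# predictor favors the wrong residue type.

def auc(scores, labels):
    """AUC via the rank-sum statistic; tied scores receive the average rank."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0          # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    pos = [r for r, lab in zip(ranks, labels) if lab == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

# Hypothetical per-residue prediction scores from one predictor:
rna_scores = [0.9, 0.8, 0.7, 0.6]   # RNA-binding residues -> label 1
dna_scores = [0.5, 0.4, 0.3, 0.2]   # DNA-binding residues -> label 0
scores = rna_scores + dna_scores
labels = [1] * len(rna_scores) + [0] * len(dna_scores)
print(auc(scores, labels))  # perfectly separated -> 1.0
```

With the labels reversed (a predictor that systematically favors the wrong type), the same computation yields an AUC of 0.0, which is the failure mode observed for several programs in Fig 4.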
Comparing the updated webservers with the old ones, we find obvious improvement over the years. BindN+ shows consistent improvement over BindN, when programs integrating an homologous search approach, RNABindRPlus, DNABind and RBRDetector show accuracy increase on some of the datasets. New upcoming webservers, aaRNA, RNAProSite and RBscore, show the top ranking performances. Still, xypan and RBRIdent, both improved versions of PRNA, do not display enough effectiveness in this large-scale test.
The AUC is an assessment criterion biased towards score-based predictions rather than binary predictions (binding or not binding). However, some programs, such as DISPLAR, include a predefined decision process and give only binary results. Conversely, in order to obtain binary predictions, some score-based programs apply arbitrary cutoffs, which are not favored by the binary assessment criteria. In a binary comparison, we have to mix all the proteins in the assessment and balance specificity against sensitivity. Traditional binary assessment criteria are used, including specificity (S5 Fig), sensitivity (S6 Fig), precision (S7 Fig), accuracy (S8 Fig), F1 score (S9 Fig) and the Matthews correlation coefficient (S10 Fig). A general summary of minimum performance is listed in Table 2.
Comparing structure-based predictors with sequence-based ones in Table 2 and Fig 2, we readily find that some structure-based predictors, aaRNA, RBscore and RNAProSite, are consistently better in performance regardless of the nucleic acid type. This implies that sequence-based approaches attempting to incorporate predicted structural features, such as solvent accessibility and electrostatics, cannot capture the real structural features that govern nucleic acid binding. Many sequence-based predictors do not guarantee an AUC above 0.7, which can hardly be considered a meaningful prediction, since an AUC of ~0.5 is equivalent to random guessing.
We find that aaRNA shows a similar level of wAUC on datasets meta_R44 (0.82) and Sungwook_R267 (0.83), but its sensitivity on meta_R44 (0.8) is much higher than on Sungwook_R267 (0.52), while the specificity shows the opposite, 0.73 vs. 0.89. Similar cases can be found for many other programs. In fact, these programs show stable prediction accuracies on both sets, but the binary-defined sensitivity is a trade-off against specificity and is determined by a pre-set cutoff. DNABINDPROT and DISPLAR show top-rank specificities and accuracies, which was not captured by the AUC distribution. However, with regard to sensitivity, DNABINDPROT ranks at the bottom and DISPLAR is lower than the median. This implies that these programs sacrifice sensitivity to gain specificity and accuracy, leading to low true positive rates. This demonstrates that proteins in different environments can have different affinities to nucleic acids and vary in the size of their binding interface. Thus, the use of fixed cutoffs to define binary predictions may misinterpret the binding reality (Supplementary Note 4 in S1 Text).

Fig 2 (caption, continued). Each predictor's wAUC is shown per data set (x-axis), assessed according to the binding sites defined by a certain distance cutoff (y-axis). For each subplot, DBP data sets and RBP data sets are separated by a bold line. The last two data sets mix DBP and RBP. RBscore_P627 is a non-redundant data set obtained by removing cases of sequence identity >25%. All_P5114 is a mixture of all data sets. doi:10.1371/journal.pcbi.1004639.g002
Can the predictions represent a binding funnel around the binding interface?
When we color the protein surface residues with the prediction scores as hierarchical colors, we find that the prediction scores of some non-binding residues near the binding region are lower than those of the binding residues but higher than those of other non-binding ones (Fig 5A). The residues on a protein surface are more likely to form a binding funnel than to change abruptly from the binding region to non-binding ones, which leads to the conclusion that using only a binary definition and a single distance cutoff in the assessment is not adequate. As the residues can change gradually from the binding to the non-binding region, there could be a correlation between the distance from a residue to the core binding region and the predicted binding score. In Fig 5B, the distance to the core binding region is partly represented by the distance to the RNA ligand, and we do find that such a correlation exists for residues around the binding region (within 12 Å of the RNA ligand). Although this region may be smaller for small proteins and the correlation is not necessarily linear, we can roughly measure it with the Pearson correlation coefficient. Fig 5C illustrates this distribution for some programs. We find that the programs with higher accuracies also display higher Pearson correlation coefficients. This assessment could be further refined if the distance to the core binding region were well defined.

Tests on newly solved structures
Another way to probe predictive ability is to test on newly solved complex structures, since most of the predictors were developed on datasets from before 2013. We thus collected protein-nucleic acid complexes solved after 2014, 31 DBP and 15 RBP, as test sets. Because these cases are non-homologous with all the training and test sets of all the programs, they can be taken as an independent test set. This test is similar to the blind tests of CASP [65], RNA-Puzzles [66,67] and CAFA [68]. If a prediction program really has predictive ability rather than a mere ability to interpolate, it should show high accuracy on these new data. The homologous search components of some programs, such as RNABindRPlus and DNABind, are effectively excluded by this assessment. According to Fig 2, we find that some of the programs show much lower accuracies on these new data than expected from previous performance (Supplementary Note 7 in S1 Text). But, overall, the results on the newly published data sets are similar to the performance on some 'difficult sets' such as meta_R44 and RBscore_R117. The values presented in Table 2 also indicate that many machine learning based methods show their minimum performance on these two new datasets. This highlights the dataset bias problem of the machine learning approaches, which only demonstrate predictive ability in terms of interpolation. For this reason, a fair comparison indeed requires blind tests on independent datasets that have no relation to the known datasets.

Discussion
Rather than achieving a high level of accuracy on existing datasets, both an increase in knowledge and a better understanding of the driving forces and mechanisms of protein-nucleic acid binding are required in order to improve the accuracy on all datasets. Although most of the programs have been validated with curated datasets, we find that some of the programs do not consistently show high predictive ability on all the datasets. That is to say, these programs are poor at recapitulating the main factors key to protein-nucleic acid binding. According to the results on all the programs, some RNA binding site prediction programs (RBscore, RNAProSite, aaRNA and BindN+) show predictive ability in DNA binding site prediction, but hardly any DNA binding site prediction program demonstrates a high level of accuracy in RNA binding site prediction. Thus, the key features of RNA binding residues and DNA binding residues are similar and can be better captured by RBP-based datasets than by DBP-based datasets.

Fig 5. Correlation between prediction score and nucleic acid binding funnel on the protein surface. A) Relationship between the minimum distance from a residue around the binding interface (within 12 Å) to its RNA ligand (x-axis) and the RBscore of the residue (y-axis). Generally, the RBscore drops as the distance to the RNA ligand increases. B) Pearson correlation coefficient, color-coded in rainbow colors, between the minimum distance from a residue around the binding interface (within 12 Å) to its RNA ligand and the prediction score. The higher the Pearson correlation coefficient of a predictor, the more likely its prediction score displays the energy funnel of nucleic acid binding. C) Examples of prediction scores plotted on a protein (16S rRNA (adenine(1408)-N(1))-methyltransferase) surface in rainbow colors; regions with higher binding scores are shown in red.
Besides, we find that the most stable and accurate programs are aaRNA, RNAProSite and RBscore, all of which take advantage of protein structure, while the sequence-based programs are less stable with respect to both data set and distance cutoff, implying that important structural features cannot be fully captured by sequence-based programs, making them less accurate.
Previous assessments of predictive ability mainly relied on two approaches: comparisons between previously reported results [4] and small-scale tests [6] (Supplementary Note 8 in S1 Text). However, as shown by this large-scale test, datasets can be biased, and tests on one or a few test sets cannot establish the lower bound of a program's predictive ability. Also, comparisons with previously reported results are indirect, stressing the importance of large-scale tests and comparisons. This work systematically benchmarks most currently existing programs in various aspects to address the shortcomings of previous assessments. Further, we provide and regularly maintain all the test sets used in this work on our web site, allowing the benchmarking of novel methods on all these data sets. Thus, new programs can be directly compared with all the existing programs and their merits can be demonstrated in a straightforward manner.
Finally, we also notice that some existing binding site prediction approaches contain theoretical drawbacks: 1. For predictions leading to a binary classification, mixing all proteins together is arguable (Supplementary Note 4 in S1 Text); 2. The use of non-orthogonal, redundant features in prediction loosens the relationship between features and predictions (Supplementary Note 10 in S1 Text); 3. Sliding-window approaches in sequence-based methods do not consider the real spatial environment of the residues (Supplementary Note 9 in S1 Text). Therefore, there is still room for better alternatives.
In order to be useful to the research community, it is very important that prediction programs and web servers be available and user-friendly. Some current programs require special computational skills, some are slow, and some require specially formatted input, stressing the importance of the ease-of-use and robustness of a prediction web server.

Datasets
All of the data sets were built from protein-nucleic acid co-crystal structures extracted from the PDB. 23 RNA binding protein datasets and 16 DNA binding protein datasets were collected from previous studies and are listed in Table 3. Some unreasonable cases were excluded from the assessment datasets: 1) the presence of a DBP in an RBP set (PDB ID 1a1v); 2) superseded PDB structures; 3) peptides shorter than 20 residues; 4) weak and uncertain nucleic acid binding proteins, including those with fewer than three binding residues; 5) PDB chains containing only Cα atoms; 6) proteins constituted by two separate short peptides. Two other data sets of RBP and DBP solved after 2014 were curated. A sequence similarity cutoff of 25% was used to remove redundancy with the other data sets using PISCES [69]. These two data sets include 15 RBP and 31 DBP respectively. All the data sets can be downloaded from the website: http://ahsoka.u-strasbg.fr/nbench/.
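The exclusion rules above amount to a simple per-chain filter. A minimal sketch on hypothetical chain records (the field names are illustrative only, not the paper's actual curation pipeline; the homology and two-peptide checks are omitted for brevity):

```python
# Dataset curation filter sketch (hypothetical record fields). A chain is kept
# only if it passes every exclusion rule listed in the Datasets section.

def keep_chain(chain):
    """Return True if a protein chain passes the curation filters."""
    if chain["superseded"]:                # rule 2: superseded PDB entries
        return False
    if chain["n_residues"] < 20:           # rule 3: peptides shorter than 20 residues
        return False
    if chain["n_binding_residues"] < 3:    # rule 4: weak/uncertain binders
        return False
    if chain["ca_only"]:                   # rule 5: Calpha-only chains
        return False
    return True

chains = [
    {"pdb": "XXXX_A", "superseded": False, "n_residues": 150,
     "n_binding_residues": 12, "ca_only": False},
    {"pdb": "YYYY_B", "superseded": False, "n_residues": 15,
     "n_binding_residues": 4, "ca_only": False},    # excluded: too short
]
kept = [c["pdb"] for c in chains if keep_chain(c)]
print(kept)  # ['XXXX_A']
```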

Definition of binding sites
The distance between a protein residue and the nucleic acid is defined as the minimum distance from any atom of the residue to any nucleic acid atom. Nucleic acid binding sites are defined as residues whose distance to the nucleic acid is shorter than a given threshold. Thresholds ranging from 3.5 to 6 Å, in 0.5 Å steps, were used as hierarchical cutoffs to define binding residues in the test sets. In addition, a nucleic acid binding residue is always required to undergo an accessible surface area change (ΔASA > 0 Å²) upon complex formation with the nucleic acid. Accessible surface area is measured by NACCESS [72] with default parameters.
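The minimum-distance definition above can be sketched directly on atomic coordinates. A minimal illustration on toy coordinates (the additional ΔASA > 0 Å² requirement, computed by NACCESS in the actual pipeline, is omitted here):

```python
# Hierarchical binding-site definition sketch: a residue is a binding site
# under cutoff c if its minimum atom-to-atom distance to the nucleic acid
# is <= c, for cutoffs 3.5 to 6.0 A in 0.5 A steps.
import math

def min_distance(residue_atoms, na_atoms):
    """Minimum distance from any residue atom to any nucleic acid atom."""
    return min(math.dist(a, b) for a in residue_atoms for b in na_atoms)

def hierarchical_binding_sites(residues, na_atoms):
    """Binding residues under each cutoff of the hierarchical definition."""
    cutoffs = [3.5 + 0.5 * k for k in range(6)]     # 3.5, 4.0, ..., 6.0
    dist = {name: min_distance(atoms, na_atoms) for name, atoms in residues.items()}
    return {c: sorted(n for n, d in dist.items() if d <= c) for c in cutoffs}

# Toy example: two residues, one nucleic-acid atom at the origin.
residues = {"ARG10": [(3.0, 0.0, 0.0), (5.0, 0.0, 0.0)],
            "GLU20": [(5.5, 0.0, 0.0)]}
sites = hierarchical_binding_sites(residues, [(0.0, 0.0, 0.0)])
print(sites[3.5])  # ['ARG10'] -> only ARG10 is within 3.5 A
print(sites[6.0])  # ['ARG10', 'GLU20']
```

The growth of the binding-site set with the cutoff is exactly the effect illustrated in Fig 1A, where 6.0 Å yields twice as many binding residues as 3.5 Å.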

Assessment criteria
The receiver operating characteristic (ROC) curve together with the area under the curve (AUC) is commonly used as an accuracy criterion [73]. We define the accuracy on a set of proteins by averaging the accuracies of all proteins. We suggest the weighted arithmetic mean of AUC (wAUC) and the mean of AUC (mAUC) as two accuracy criteria for a set of proteins:

\[ \mathrm{wAUC} = \frac{\sum_{i=1}^{N} \mathrm{len}(i)\,\mathrm{AUC}(i)}{\sum_{i=1}^{N} \mathrm{len}(i)}, \qquad \mathrm{AUC} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{AUC}(i) = \mathrm{mAUC} \]

where, for a protein i, AUC(i) is its AUC value, len(i) is the length of the protein, and N is the number of proteins in the dataset. We call the AUC that compares all the residues of a dataset together the total AUC (tAUC) and use it as a reference for comparison.
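The per-protein averaging above (in contrast to pooling all residues for tAUC) can be sketched as:

```python
# Minimal sketch of the wAUC / mAUC criteria: per-protein AUC values are
# averaged, weighted by protein length (wAUC) or uniformly (mAUC), instead of
# pooling all residues from all proteins together (tAUC).

def wauc(aucs, lengths):
    """Weighted arithmetic mean of per-protein AUCs, weights = protein lengths."""
    return sum(a * l for a, l in zip(aucs, lengths)) / sum(lengths)

def mauc(aucs):
    """Unweighted mean of per-protein AUCs."""
    return sum(aucs) / len(aucs)

aucs = [0.90, 0.60]        # hypothetical per-protein AUC values
lengths = [300, 100]       # protein lengths
print(mauc(aucs))          # 0.75
print(wauc(aucs, lengths)) # (0.9*300 + 0.6*100) / 400 = 0.825
```

The example shows why the two criteria can disagree: wAUC lets long proteins dominate, while mAUC treats every protein equally.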
We define the standard deviation of the AUC values as sAUC, to show the accuracy stability when varying the data sets:

\[ \mathrm{sAUC} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left(\mathrm{AUC}(i) - \mathrm{mAUC}\right)^{2}} \]

The other binary criteria include specificity, sensitivity, precision, accuracy, F1 score and the Matthews correlation coefficient:

\[ \mathrm{Specificity} = \frac{TN}{TN+FP}, \quad \mathrm{Sensitivity} = \frac{TP}{TP+FN}, \quad \mathrm{Precision} = \frac{TP}{TP+FP}, \quad \mathrm{Accuracy} = \frac{TP+TN}{P+N} \]

\[ \mathrm{F1} = \frac{2\,TP}{2\,TP+FP+FN}, \qquad \mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}} \]

where TP is the number of true positives, TN of true negatives, FP of false positives and FN of false negatives; P is the total number of positives and N the total number of negatives. MAVR, the maximum accuracy variation rate, is defined as:

\[ \mathrm{MAVR} = \max \left| \frac{\Delta \mathrm{wAUC}}{\Delta d} \right| \]

where Δd is the difference between two distance cutoffs used to define binding sites and ΔwAUC is the resulting accuracy variation defined by wAUC; wAUC can also be replaced by mAUC or tAUC.
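The binary criteria above are standard confusion-matrix quantities and can be sketched as:

```python
# Standard binary classification criteria from confusion-matrix counts,
# as used in the binary assessment (S5-S10 Figs).
import math

def binary_criteria(tp, tn, fp, fn):
    """Specificity, sensitivity, precision, accuracy, F1 and MCC."""
    p, n = tp + fn, tn + fp                  # total positives / negatives
    mcc_den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {
        "specificity": tn / (tn + fp),
        "sensitivity": tp / (tp + fn),
        "precision":   tp / (tp + fp),
        "accuracy":    (tp + tn) / (p + n),
        "f1":          2 * tp / (2 * tp + fp + fn),
        "mcc":         (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0,
    }

# Hypothetical counts for one predictor on one protein set:
m = binary_criteria(tp=8, tn=90, fp=10, fn=2)
print(round(m["sensitivity"], 3))  # 0.8
print(round(m["specificity"], 3))  # 0.9
```

Note how a high accuracy (98/110) coexists here with a mediocre precision (8/18), which is the imbalance effect discussed for DNABINDPROT and DISPLAR in the Results.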
MAV is the maximum accuracy variation and CAVR is the cumulated accuracy variation rate:

\[ \mathrm{MAV} = \max \left| \Delta \mathrm{wAUC} \right|, \qquad \mathrm{CAVR} = \sum \left| \frac{\Delta \mathrm{wAUC}}{\Delta d} \right| \]

MAVR, MAV and CAVR are criteria to assess the distance cutoff dependent variation in accuracy.
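One plausible reading of these stability criteria, assuming they are evaluated over successive pairs of distance cutoffs (the pairing scheme is our assumption, not stated explicitly above), is:

```python
# Cutoff-stability criteria sketch (MAV, MAVR, CAVR), computed over successive
# pairs of distance cutoffs. The pairing scheme is an assumption; the paper's
# definitions only fix delta-d and delta-wAUC.

def stability_criteria(cutoffs, waucs):
    """MAV, MAVR and CAVR from wAUC values measured at increasing cutoffs."""
    pairs = list(zip(cutoffs, waucs))
    dws = [abs(w2 - w1) for (_, w1), (_, w2) in zip(pairs, pairs[1:])]
    rates = [dw / (d2 - d1)
             for dw, ((d1, _), (d2, _)) in zip(dws, zip(pairs, pairs[1:]))]
    return {"MAV": max(dws), "MAVR": max(rates), "CAVR": sum(rates)}

# Hypothetical wAUC values for one predictor at cutoffs 3.5, 4.0, 4.5 A:
s = stability_criteria([3.5, 4.0, 4.5], [0.80, 0.78, 0.75])
print(round(s["MAV"], 3), round(s["MAVR"], 3), round(s["CAVR"], 3))
```

A predictor whose wAUC is flat across cutoffs gets MAV = MAVR = CAVR = 0; the larger these values, the more the program depends on the distance cutoff (Fig 3A).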
The correlation between the prediction score and the minimum distance to the nucleic acid is measured by the Pearson correlation coefficient:

\[ r = \frac{\mathrm{cov}\left(\min(d),\, s\right)}{\sigma_{\min(d)}\,\sigma_{s}} \]

where min(d) is the minimum distance from a residue to any nucleic acid atom, s is the prediction score, cov is the covariance and σ denotes the standard deviation.
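The funnel correlation can be sketched as follows, on toy per-residue values (a good predictor should give a strongly negative coefficient, i.e. scores dropping as residues get farther from the ligand):

```python
# Pearson correlation between each residue's minimum distance to the nucleic
# acid and its prediction score (toy values; residues within 12 A of the
# ligand in the actual assessment).
import math

def pearson(xs, ys):
    """Pearson correlation coefficient r = cov(x, y) / (sigma_x * sigma_y)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

min_dist = [2.0, 4.0, 6.0, 8.0, 10.0]   # distance to nucleic acid (A)
score    = [0.9, 0.7, 0.5, 0.3, 0.1]    # hypothetical prediction scores
print(round(pearson(min_dist, score), 6))  # -1.0 for a perfectly linear drop
```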

Use of prediction programs
The Li lab and Xiaoyong Pan provided the RNAProSite and xypan prediction results, respectively, with default parameters. aaRNA was run on the input structure file with default parameters. RNABindR and RNABindRPlus were run with default parameters, with removal of 95% sequence identity for RNABindRPlus. KYG was run from the command line with the "method_type = 8" option. RBscore_SVM was based on the training set R246. DBS-PSSM, DBS-Pred, RBRIdent, PPRInt, PRBR, DNABind, RBRDetector and ProteDNA were used with default parameters. The DISPLAR program was provided by Sanbo Qin and was run with default parameters. BindN and BindN+ were used with default parameters; the suffixes "_RNA" and "_DNA" denote the RNA and DNA modes respectively. PRNA and Predict_RBP were trained on PRNA_R205 and BindN_R107 respectively, without cross-validation, and applied with default parameters. DNABINDPROT was run with the option "Fast 1" and DR_bind1 was run in RNA mode. N-terminal and C-terminal residues not predicted by PRBR are taken as non-binding sites and assigned a prediction score of 0. For the binary predictions of DNABINDPROT, DISPLAR, DBS-Pred and DBS-PSSM, positive sites are assigned a prediction score of 1, while negatives are assigned 0.

Availability
All data in this assessment are available on NBench web site http://ahsoka.u-strasbg.fr/nbench/.
(DOCX)

S1 Fig. The plot is based on the test on data set 'DR_bind1_R69'. The x-axis shows the distance cutoff used to define nucleic acid binding sites, while the y-axis shows the resulting wAUC value. The wAUC values vary with the distance cutoff, and the distributions differ between programs.
(EPS)