
Table 1.

Integral of Copula Differences for different dimensions.

Fig 1.

D2 vs D1 for left node.

(a) shows an example Pareto frontier (red circles) for the left child node for the first split of a specific tree (the D1 and D2 values are denoted by D1L and D2L respectively). (b) shows that the Pareto frontier can be approximated by two straight lines: one with slope greater than 1 and another with slope less than 1.
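The Pareto frontier of candidate splits described above can be illustrated with a generic frontier-extraction routine. This is a minimal sketch, assuming both node costs D1 and D2 are to be minimized; the function name and candidate values are illustrative, not the paper's implementation:

```python
import numpy as np

def pareto_frontier(points):
    """Return the points not dominated by any other point,
    assuming both coordinates (e.g. D1 and D2) are minimized."""
    pts = np.asarray(points, dtype=float)
    # Sort by the first coordinate, then keep only points whose
    # second coordinate improves on everything seen so far.
    order = np.argsort(pts[:, 0])
    frontier, best_d2 = [], np.inf
    for idx in order:
        if pts[idx, 1] < best_d2:
            frontier.append(pts[idx])
            best_d2 = pts[idx, 1]
    return np.array(frontier)

# Illustrative candidate splits with (D1, D2) costs:
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_frontier(candidates)
# (3.0, 4.0) is dominated by (2.0, 3.0) and is excluded
```

Plotting D2 against D1 for the frontier points would produce the kind of curve shown in the figure, which the paper then approximates with two straight-line segments.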

Fig 2.

D2 vs D1 for right node.

(a) shows an example Pareto frontier (red circles) for the right child node for the first split of a specific tree (the D1 and D2 values are denoted by D1R and D2R respectively). (b) shows that the Pareto frontier can be approximated by two straight lines: one with slope greater than 1 and another with slope less than 1.

Fig 3.

Scatter plot of α’s across the trees.

(a) and (b) are scatter plots for the first split of all the trees for α > 1 and α < 1 respectively.

Fig 4.

Two multivariate regression trees trained on the same input X and the same output responses [Y1, Y2], with the node cost criterion being copula based (Tree{C, [Y1,Y2]}) and covariance based (Tree{V, [Y1,Y2]}) respectively.

The empty circles represent leaf nodes, and the circles enclosing a number signify split nodes; the number inside the circle indicates the feature selected at that node for splitting.

Fig 5.

CDFs created from the left and right child nodes for a single split using CMRF.

They are compared visually with the original CDF created from the training samples.

Fig 6.

CDFs created from the left and right child nodes for a single split using VMRF.

They are compared visually with the original CDF created from the training samples.

Table 2.

Variable importance measure calculated using CMRF{Y1, Y2} and VMRF{Y1, Y2}.

Table 3.

5-fold CV results for GDSC dataset drug sensitivity prediction for four drug sets, reported as correlation coefficients.

VMRF and CMRF denote the Multivariate Random Forest using covariance and copula respectively. KBMTL denotes Kernelized Bayesian Multitask Learning (parameters: 200 iterations, α = β = 1, and subspace dimensionality = 20).
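The 5-fold CV correlation-coefficient evaluation described here can be sketched generically. This is a hedged illustration only: scikit-learn's RandomForestRegressor stands in for the VMRF/CMRF models, and synthetic data replaces the GDSC drug sets:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in data: 100 cell lines, 10 features, 2 drug responses
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
W = np.array([[1.0, -0.5], [0.5, 1.0]])          # illustrative true weights
Y = X[:, :2] @ W + 0.1 * rng.normal(size=(100, 2))

corrs = []
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train, test in kf.split(X):
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[train], Y[train])
    pred = model.predict(X[test])
    # Correlation coefficient per output response, averaged over outputs
    r = np.mean([np.corrcoef(pred[:, j], Y[test][:, j])[0, 1]
                 for j in range(Y.shape[1])])
    corrs.append(r)

mean_corr = float(np.mean(corrs))
```

The per-fold correlations are averaged to give a single score per drug set, mirroring the correlation-coefficient columns of the table.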

Table 4.

5-fold CV results for GDSC dataset drug sensitivity prediction for four drug sets, reported as MAE and NRMSE for the RF, VMRF, CMRF and KBMTL approaches.
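For reference, the two error metrics in this table can be sketched as follows. This is a minimal illustration on toy values; the NRMSE normalization convention used here (range of the observed responses) is one common choice and an assumption, as definitions vary across papers:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_true - y_pred))

def nrmse(y_true, y_pred):
    """Root mean squared error, normalized by the range of the
    observed responses (assumed convention)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

# Toy observed and predicted drug responses
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.5, 2.0, 2.5, 4.0]
err_mae = mae(y_true, y_pred)      # 0.25
err_nrmse = nrmse(y_true, y_pred)
```

Lower values of both metrics indicate better prediction, complementing the correlation coefficients reported in the previous table.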

Fig 7.

Scatter plots of predicted response vs original response for Erlotinib and Lapatinib (GDSC).

Here corr-coef stands for the correlation coefficient between the predicted and observed responses.

Table 5.

5-fold CV results for CCLE dataset drug sensitivity prediction for four drug sets, reported as correlation coefficients for RF, VMRF, CMRF and KBMTL.

Table 6.

5-fold CV results for CCLE dataset drug sensitivity prediction for four drug sets, reported as MAE and NRMSE for the RF, VMRF, CMRF and KBMTL approaches.

Fig 8.

Scatter plots of predicted response vs original response for Crizotinib and PHA-665752 (CCLE).

Here corr-coef stands for the correlation coefficient between the predicted and observed responses.

Fig 9.

Protein-protein interaction network observed among the top regulators identified by CMRF in GDSC dataset SC1.

Disconnected nodes are hidden.

Fig 10.

Protein-protein interaction network observed among the top regulators identified by VMRF in GDSC dataset SC1.

Disconnected nodes are hidden.
