
Table 1.

Partial derivatives for some common kernel functions: Linear, Polynomial (Poly), Radial Basis Functions (RBF), Hyperbolic tangent (Tanh), and Automatic Relevance Determination (ARD).
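For concreteness, the RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) has the closed-form partial derivative dk/dx_j = -(x_j - x'_j)/sigma^2 * k(x, x'). A minimal NumPy sketch (function names and the finite-difference check are ours, not from the paper):

```python
import numpy as np

def rbf_kernel(x, x_prime, sigma=1.0):
    """RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    diff = x - x_prime
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma**2))

def rbf_kernel_grad(x, x_prime, sigma=1.0):
    """Partial derivatives of the RBF kernel w.r.t. x (one per dimension)."""
    return -(x - x_prime) / sigma**2 * rbf_kernel(x, x_prime, sigma)

# Check the analytic derivative against a central finite difference
# in the first input dimension.
x = np.array([0.3, -0.7])
xp = np.array([1.1, 0.4])
eps = 1e-6
e0 = np.array([eps, 0.0])
fd = (rbf_kernel(x + e0, xp) - rbf_kernel(x - e0, xp)) / (2 * eps)
print(abs(rbf_kernel_grad(x, xp)[0] - fd) < 1e-8)
```

The same pattern (analytic derivative plus finite-difference sanity check) carries over to the other kernels in the table.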


Table 2.

Second derivatives for some common kernel functions.
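For the RBF kernel, the second derivatives follow the same pattern: d2k/dx_i dx_j = ((x_i - x'_i)(x_j - x'_j)/sigma^4 - delta_ij/sigma^2) k(x, x'). A short sketch (our own names, checked against a second-order finite difference):

```python
import numpy as np

def rbf_kernel(x, x_prime, sigma=1.0):
    diff = x - x_prime
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma**2))

def rbf_kernel_hessian(x, x_prime, sigma=1.0):
    """Second derivatives of the RBF kernel w.r.t. x, as a d x d matrix."""
    d = x - x_prime
    k = rbf_kernel(x, x_prime, sigma)
    return (np.outer(d, d) / sigma**4 - np.eye(len(x)) / sigma**2) * k

# Finite-difference check of one diagonal entry.
x = np.array([0.2, 0.5])
xp = np.array([-0.4, 1.0])
eps = 1e-4
e0 = np.array([eps, 0.0])
fd = (rbf_kernel(x + e0, xp) - 2 * rbf_kernel(x, xp)
      + rbf_kernel(x - e0, xp)) / eps**2
print(abs(rbf_kernel_hessian(x, xp)[0, 0] - fd) < 1e-5)
```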


Table 3.

Summary of the formulation for each of the main kernel methods: GPR (Gaussian Process Regression, section 3), SVM (Support Vector Machines, section 4), KDE (Kernel Density Estimation, section 5), and HSIC (Hilbert-Schmidt Independence Criterion, section 6).

For each method, the table lists the derivative formulation together with related analysis procedures, both from the literature and as demonstrated in this paper.


Fig 1.

Different examples of functions, derivatives and sensitivity maps.

Original data are shown in red and the GP predictive function in black; high derivative values are in yellow, close-to-zero derivative values in gray, and negative derivative values in blue.
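Derivative maps like these follow from differentiating the GP predictive mean mu(x*) = sum_i alpha_i k(x*, x_i), with alpha = (K + noise^2 I)^-1 y; for the RBF kernel the gradient is sum_i alpha_i (-(x* - x_i)/sigma^2) k(x*, x_i). A minimal sketch, assuming an RBF kernel and using our own toy data and names:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))               # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)    # noisy targets
sigma, noise = 1.0, 0.1

def K(A, B):
    """RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Dual weights of the GP predictive mean.
alpha = np.linalg.solve(K(X, X) + noise**2 * np.eye(len(X)), y)

def gp_mean(x_star):
    return K(x_star[None, :], X)[0] @ alpha

def gp_mean_grad(x_star):
    """d mu / d x* = sum_i alpha_i * dk(x*, x_i)/dx*."""
    k = K(x_star[None, :], X)[0]                   # (n,)
    return ((-(x_star - X) / sigma**2) * (k * alpha)[:, None]).sum(0)

# Finite-difference check at one test point.
xs = np.array([0.5])
eps = 1e-6
fd = (gp_mean(xs + eps) - gp_mean(xs - eps)) / (2 * eps)
print(abs(gp_mean_grad(xs)[0] - fd) < 1e-6)
```

Evaluating `gp_mean_grad` on a grid and color-coding its sign and magnitude reproduces the kind of derivative map shown in the figure.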


Fig 2.

Signal-to-Noise Ratio (SNR) versus the expected normalized value of different norms acting as regularizers.

An unregularized (left) and a regularized (right) Kernel Ridge Regression (KRR) model were fitted. The top row shows a few examples of these fitted KRR models with different amounts of added noise: the red points are the data at the different noise levels, the true function is in black, and the fitted KRR model is in blue. The second row shows the norm for the different regularizers. All lines were normalized so that they are comparable, and the norm of the true signal (SNR = 50 dB) was subtracted from all points; any curve with values below zero requires less regularization, and any points above zero require more regularization.


Fig 3.

Visualizing three examples of sensitivity maps in SVM classification.

The top row shows red and green points indicating the two classes, black points marking the support vectors chosen by the SVM classifier, and a contour map of the decision function using the same color scheme. The subsequent rows plot the sensitivity measures, where high derivative values are in yellow and negative derivative values in gray. The leftmost column indicates which derivative is plotted.
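For an SVM, the quantity being differentiated is the decision function f(x) = sum_i a_i k(x, x_i) + b, where a_i = alpha_i y_i are the dual coefficients; for the RBF kernel k(x, x') = exp(-gamma ||x - x'||^2) the gradient is sum_i a_i (-2 gamma (x - x_i)) k(x, x_i). A minimal sketch with made-up dual coefficients and support vectors (all names and values are illustrative, not from the paper):

```python
import numpy as np

# Assume dual coefficients a_i = alpha_i * y_i and support vectors
# are already available from a trained SVM (here: made-up toy values).
sv = np.array([[0.0, 1.0], [1.0, -1.0], [-1.0, 0.5]])
a = np.array([0.7, -0.4, -0.3])
gamma, b = 0.5, 0.1

def f(x):
    """SVM decision function f(x) = sum_i a_i exp(-gamma ||x - x_i||^2) + b."""
    k = np.exp(-gamma * ((x - sv) ** 2).sum(1))
    return a @ k + b

def grad_f(x):
    """Gradient of f: sum_i a_i * (-2 gamma (x - x_i)) * k_i."""
    k = np.exp(-gamma * ((x - sv) ** 2).sum(1))
    return (a * k) @ (-2.0 * gamma * (x - sv))

# Finite-difference check of the first gradient component.
x0 = np.array([0.2, 0.3])
eps = 1e-6
fd = (f(x0 + [eps, 0]) - f(x0 - [eps, 0])) / (2 * eps)
print(abs(grad_f(x0)[0] - fd) < 1e-8)
```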


Fig 4.

First row: original data points. Second, third and fourth rows: probability density in gray scale (brighter means denser). Second row: the derivative direction of the pdf is shown for some data points using red lines. Third row: Hessian eigenvectors for some points, shown with blue lines (first eigenvector) and green lines (second eigenvector). Fourth row: points on the ridge computed using the formula proposed in [30]; the brightness of green encodes the Dijkstra distance along the curve points (see text for details).
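The pdf derivatives in the second row come from differentiating a Gaussian KDE, p(x) = (1/n) sum_i N(x; x_i, sigma^2 I), whose gradient is (1/n) sum_i -(x - x_i)/sigma^2 * N(x; x_i, sigma^2 I). A minimal sketch (our own names and toy data):

```python
import numpy as np

def gaussian(x, xi, sigma):
    """Isotropic Gaussian density N(x; xi, sigma^2 I)."""
    d = x - xi
    norm = (2 * np.pi * sigma**2) ** (len(x) / 2)
    return np.exp(-np.dot(d, d) / (2 * sigma**2)) / norm

def kde(x, data, sigma=0.5):
    """Gaussian kernel density estimate p(x) = (1/n) sum_i N(x; x_i, sigma^2 I)."""
    return np.mean([gaussian(x, xi, sigma) for xi in data])

def kde_grad(x, data, sigma=0.5):
    """Gradient of the KDE: (1/n) sum_i -(x - x_i)/sigma^2 * N(x; x_i, sigma^2 I)."""
    return np.mean([-(x - xi) / sigma**2 * gaussian(x, xi, sigma)
                    for xi in data], axis=0)

# Finite-difference check of the first gradient component.
rng = np.random.default_rng(2)
data = rng.normal(size=(100, 2))
x0 = np.array([0.1, -0.3])
eps = 1e-6
fd = (kde(x0 + [eps, 0], data) - kde(x0 - [eps, 0], data)) / (2 * eps)
print(abs(kde_grad(x0, data)[0] - fd) < 1e-8)
```

The Hessian eigenvectors in the third row are obtained the same way, by differentiating the Gaussian kernel once more and eigendecomposing the resulting matrix at each point.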


Fig 5.

Visualizing the derivatives and the modulus of the directional derivative for HSIC in three toy examples.
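The empirical HSIC being differentiated here can be written as HSIC = (n - 1)^-2 tr(K H L H), with Gram matrices K and L on the two variables and centering matrix H = I - (1/n) 11^T. A minimal sketch of the HSIC value itself (our own names and toy data), contrasting a dependent and an independent pair:

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """RBF Gram matrix of the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, Y, sigma=1.0):
    """Empirical HSIC = (n-1)^-2 tr(K H L H), H the centering matrix."""
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(3)
x = rng.normal(size=(200, 1))
y_dep = x + 0.1 * rng.normal(size=(200, 1))    # strongly dependent on x
y_ind = rng.normal(size=(200, 1))              # independent of x
print(hsic(x, y_dep) > hsic(x, y_ind))
```

The derivatives shown in the figure are obtained by differentiating this expression with respect to the input samples, reusing the kernel derivatives of Table 1.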


Fig 6.

Modification of the input samples to maximize or minimize the HSIC dependence between their dimensions (see text for details).


Fig 7.

Visualizing the spatial maps of the sensitivity of the Gaussian process (GP) regression model under different spatial sampling sizes for Gross Primary Productivity (GPP) [top] and land surface temperature (LST) [bottom] for the summer of 2010 (Jun-Jul-Aug).

The rightmost column shows the summary R² and sensitivity for each spatial window size of the GP model.


Fig 8.

Visualizing (a) the labels, (b) the predictions, and (c) the 2D representation space for the predictions.

This is the classification problem of drought (red) versus no-drought (green) with the support vectors (black) for the SVM formulation (section 4).


Table 4.

Classification results for the drought and non-drought regions over Eastern Europe using the SVM (Support Vector Machines, section 4) formulation.


Fig 9.

Visualizing the scatter plot of drought (red) versus no-drought (green) samples and the support vectors (black) using the SVM classification algorithm (section 4).

We also display the sensitivity of the full derivative and its components, the mask function (tanh) and the kernel function (∂k* αy), based on the predictive mean of the SVM classification results.


Fig 10.

Principal curves on the ESDC.

Each figure represents the results for GPP at different time periods during 2010. In each image the mean value of the variable at each location is shown as a colormap (minimum blue, maximum red), and the points that belong to the principal curves are shown in green. The brightness of green encodes the Dijkstra distance along the curve points.


Fig 11.

Each figure represents different summaries of how HSIC can be used to capture the differences in dependencies between Europe and Russia for GPP and RSM.

(a) shows the HSIC value for Europe and Russia at each time stamp, (b) shows the derivative of HSIC for Europe and Russia at each time stamp, (c) shows the mean difference in HSIC between Europe and Russia for different periods (Jan-May, Jun-Aug, and Sept-Dec), and (d) shows the same for the derivative of HSIC.


Fig 12.

First row: the original toy data together with the predicted GP model, which presents a smoother curve. Second row: the first derivative in the x1 direction, the x2 direction, and the combined direction (the sensitivity), respectively. Third row: the second derivative in the x1 direction, the x2 direction, and the combined direction (the sensitivity), respectively. Yellow points mark regions with positive values, blue points regions with negative values, and gray points regions where the values are close to zero.
