
Fig 1. Location distribution of the four initial populations.

Fig 2. Principle of GWO improved by the golden sine algorithm.

Fig 3. ω curve over 1000 iterations.

Fig 4. Elman neural network structure.

Fig 5. Elman and SGWO-Elman optimization process.

Fig 6. Parameter optimization of the Elman neural network with SGWO for prediction.

Table 1. Benchmark functions.

Table 2. Comparison of experimental results in 50 dimensions.

Table 3. Comparison of experimental results in 100 dimensions.

Table 4. Comparison of experimental results in 500 dimensions.

Fig 7. Comparison of the convergence curves.

Table 5. Friedman test for the results obtained at 50, 100, and 500 dimensions.

Table 6. Mean ranks of each algorithm in different dimensions.

Table 7. Mean rank differences between each algorithm and SGWO.

Table 8. Wilcoxon rank-sum test p-values on the benchmark functions.

Table 9. Algorithm ranking under different dimensions.

Table 10. Total ranking of algorithms in different dimensions.

Fig 8. Boxplot of fitness for various algorithms.

Fig 9. Convergence curves for different strategies.

Table 11. Experimental results of the three strategies in 50, 100, and 500 dimensions.

Table 12. Sensitivity analysis of λ and p.

Fig 10. ω curve with different parameters.

Table 13. Tension/compression spring design problem results compared with other algorithms.

Table 14. Large-scale optimization results in 1000 dimensions.

Table 15. Basic information about the six datasets.

Table 16. Number of hidden layers corresponding to the different datasets.

Table 17. Comparison of experimental results of the first group on the MSE metric.

Table 18. Prediction rankings of each algorithm on the six datasets.

Fig 11. MSE on various datasets.

Fig 12. Boxplot of MSE on various datasets.

Fig 13. Mean training time of different algorithms.

Table 19. Mean training time (s) of each algorithm on the six datasets.

Table 20. Comparison of experimental results of the second group.

Fig 14. MSE on various datasets.

Fig 15. Boxplot of MSE on various datasets.

Fig 16. The implementation process of SGWO-RBF and SGWO-LSTM.

Table 21. Experimental errors of the three algorithms on the six datasets.