Fig 1.
Location distribution of the four initial populations.
Fig 2.
Principle of the GWO improved by the golden sine algorithm.
Fig 3.
ω curve over 1000 iterations.
Fig 4.
Elman neural network structure.
Fig 5.
Optimization processes of Elman and SGWO-Elman.
Fig 6.
Parameter optimization of Elman neural network with SGWO for prediction.
Table 1.
Benchmark functions.
Table 2.
Comparison of experimental results in 50 dimensions.
Table 3.
Comparison of experimental results in 100 dimensions.
Table 4.
Comparison of experimental results in 500 dimensions.
Fig 7.
Comparison of the convergence curves.
Table 5.
Friedman test for the results obtained at 50, 100 and 500 dimensions.
Table 6.
Mean ranks of each algorithm in different dimensions.
Table 7.
Mean ranks difference between each algorithm and SGWO.
Table 8.
Wilcoxon rank-sum test p-values on the benchmark functions.
Table 9.
Algorithm ranking under different dimensions.
Table 10.
Total ranking of algorithms in different dimensions.
Fig 8.
Boxplot of fitness in various algorithms.
Fig 9.
Convergence curves for different strategies.
Table 11.
Experimental results of the three strategies in 50, 100 and 500 dimensions.
Table 12.
Sensitivity analysis of λ and p.
Fig 10.
ω curve with different parameters.
Table 13.
Results of the tension/compression spring design problem for different algorithms.
Table 14.
Large-scale optimization results in 1000 dimensions.
Table 15.
Basic information about the six datasets.
Table 16.
The number of hidden layers corresponding to the different datasets.
Table 17.
Comparison of experimental results of the first group on the MSE metric.
Table 18.
The prediction ranking of each algorithm on the six datasets.
Fig 11.
MSE of various datasets.
Fig 12.
Boxplot of MSE in various datasets.
Fig 13.
Mean training time of different algorithms.
Table 19.
Mean training time (s) of each algorithm on the six datasets.
Table 20.
Comparison of experimental results of the second group.
Fig 14.
MSE of various datasets.
Fig 15.
Boxplot of MSE in various datasets.
Fig 16.
The implementation process of SGWO-RBF and SGWO-LSTM.
Table 21.
The experimental errors of the three algorithms on the six datasets.