
Table 1.

Comparison of transformer-based time series forecasting models.


Fig 1.

Schematic diagram of mixed time series pooling decomposition block.


Fig 2.

Schematic overview of the proposed KEDformer method: the Knowledge Extraction Attention module (KEDA, blue block) and the mixed time series pooling decomposition (MSTP, yellow block).


Fig 3.

In the experiment analyzing model computational efficiency and performance, four different models are used to perform long-term time series forecasting tasks on the Exchange dataset.

The input length is set to I = 96, and the prediction lengths are varied.


Fig 4.

The synergistic effect of the Knowledge Extraction Attention module and the time series pooling decomposition method.


Table 2.

Description of the experimental environment.


Table 3.

Table of optimal hyperparameter settings.


Table 4.

Multivariate results.


Table 5.

Univariate results.


Table 6.

Ablation results.


Fig 5.

Visualization of time series decomposition.

In the left subfigure (a), the raw time series data is shown without decomposition, displaying interwoven fluctuations and trends. In contrast, the right subfigure (b) presents the time series decomposed into three components: the original time series in purple, the trend-cyclical component in beige, and the seasonal component in teal.
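The trend-cyclical/seasonal split shown in Fig 5(b) can be sketched with a moving-average decomposition, a common choice in transformer forecasters: the trend is a sliding mean of the series, and the seasonal part is the residual. The kernel size and padding scheme below are illustrative assumptions, not the exact settings used by KEDformer.

```python
import numpy as np

def decompose(series: np.ndarray, kernel: int = 25):
    """Split a 1-D series into a trend-cyclical component (moving average)
    and a seasonal component (residual). Kernel size is an assumption."""
    # Replicate the endpoints so the moving average keeps the input length.
    pad = (kernel - 1) // 2
    padded = np.concatenate([
        np.repeat(series[0], pad),
        series,
        np.repeat(series[-1], kernel - 1 - pad),
    ])
    # Sliding mean over the padded series -> trend-cyclical component.
    trend = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    # What the trend does not explain is treated as the seasonal component.
    seasonal = series - trend
    return trend, seasonal
```

By construction the two components sum back to the original series, which is why plots like Fig 5(b) can overlay all three curves on the same axes.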


Fig 6.

Visualization of time series decomposition results.

In a comparative experiment that controls the number of KEDformer mechanisms during the encoding and decoding processes, we set the input length to I = 96 and vary the prediction lengths.


Fig 7.

Impact of KEDattention mechanisms on model computational efficiency.

The input length is set to I = 96, and the prediction steps are varied. The time required for each epoch is used as an indicator of the model's computational speed.


Table 7.

Complexity analysis of space and time for different forecasting models.
