Table 1.
Comparison of transformer-based time series forecasting models.
Fig 1.
Schematic diagram of the mixed time series pooling decomposition block.
Fig 2.
Schematic overview of the proposed KEDformer method. The Knowledge Extraction Attention module (KEDA) is shown as the blue block, and the mixed time series pooling decomposition (MSTP) as the yellow block.
Fig 3.
In the experiment analyzing computational efficiency and model performance, four models perform long-term time series forecasting on the Exchange dataset.
The input length is set to I = 96, and the prediction lengths are .
Fig 4.
The synergistic effect of the Knowledge Extraction Attention module and the time series pooling decomposition method.
Table 2.
Description of the experimental environment.
Table 3.
Optimal hyperparameter settings.
Table 4.
Multivariate forecasting results.
Table 5.
Univariate forecasting results.
Table 6.
Ablation results.
Fig 5.
Visualization of time series decomposition.
The left subfigure (a) shows the raw time series without decomposition, with fluctuations and trends interwoven. The right subfigure (b) shows the decomposed series, plotting the original time series in purple alongside the trend-cyclical component in beige and the seasonal component in teal.
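The trend-cyclical/seasonal split shown in Fig 5 can be realized with a simple moving-average decomposition, as used in Autoformer-style models: the trend-cyclical component is a sliding average of the series, and the seasonal component is the residual. The sketch below is a minimal illustration under that assumption; KEDformer's mixed pooling block may combine several pooling kernels, and the choice of kernel_size=25 here is purely illustrative.

```python
import torch
import torch.nn as nn

class SeriesDecomp(nn.Module):
    """Split a series into trend-cyclical and seasonal parts via moving average.

    A minimal sketch, not KEDformer's exact MSTP block; the kernel size
    is an illustrative placeholder.
    """
    def __init__(self, kernel_size: int = 25):
        super().__init__()
        self.kernel_size = kernel_size
        self.avg = nn.AvgPool1d(kernel_size, stride=1, padding=0)

    def forward(self, x: torch.Tensor):
        # x: (batch, length, channels)
        # Replicate boundary values at both ends so the smoothed output
        # keeps the same length as the input.
        front = x[:, :1, :].repeat(1, (self.kernel_size - 1) // 2, 1)
        back = x[:, -1:, :].repeat(1, self.kernel_size // 2, 1)
        padded = torch.cat([front, x, back], dim=1)
        # AvgPool1d expects (batch, channels, length).
        trend = self.avg(padded.permute(0, 2, 1)).permute(0, 2, 1)
        seasonal = x - trend  # residual after removing the trend
        return seasonal, trend
```

For example, applying this module to a batch of shape (32, 96, 7) returns seasonal and trend tensors of the same shape, which can then be plotted as in Fig 5(b).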
Fig 6.
Visualization of time series decomposition results.
In a comparative experiment controlling the number of KEDformer mechanisms used during encoding and decoding, we set the input length to I = 96 and the prediction lengths .
Fig 7.
Impact of KEDattention mechanisms on model computational efficiency.
The input length is set to I = 96, and the prediction steps are . The time required for each epoch is used as an indicator of the model’s computational speed.
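As a rough illustration of the per-epoch timing metric used in Fig 7, the hypothetical helper below measures wall-clock seconds for one training epoch. All argument names (model, loader, loss_fn, optimizer) are placeholders, not part of KEDformer's actual training code.

```python
import time

import torch

def time_epoch(model, loader, loss_fn, optimizer, device="cuda"):
    """Return wall-clock seconds for one training epoch (illustrative only)."""
    model.train()
    if device == "cuda":
        torch.cuda.synchronize()  # flush pending GPU work before timing
    start = time.perf_counter()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued kernels to finish
    return time.perf_counter() - start
```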
Table 7.
Space and time complexity analysis of different forecasting models.