Prediction of hierarchical time series using structured regularization and its application to artificial neural networks

This paper discusses the prediction of hierarchical time series, where each upper-level time series is calculated by summing appropriate lower-level time series. Forecasts for such hierarchical time series should be coherent, meaning that the forecast for an upper-level time series equals the sum of forecasts for corresponding lower-level time series. Previous methods for making coherent forecasts consist of two phases: first computing base (incoherent) forecasts and then reconciling those forecasts based on their inherent hierarchical structure. To improve time series predictions, we propose a structured regularization method for completing both phases simultaneously. The proposed method is based on a prediction model for bottom-level time series and uses a structured regularization term to incorporate upper-level forecasts into the prediction model. We also develop a backpropagation algorithm specialized for applying our method to artificial neural networks for time series prediction. Experimental results using synthetic and real-world datasets demonstrate that our method is comparable in terms of prediction accuracy and computational efficiency to other methods for time series prediction.


Introduction
Multivariate time series data often have a hierarchical (tree) structure in which each upper-level time series is calculated by summing appropriate lower-level time series. For instance, numbers of tourists are usually counted on a regional basis, such as sites, cities, regions, or countries [1]. Similarly, many companies require regionally aggregated forecasts to support resource allocation decisions [2]. Product demand is often analyzed by category to reduce the overall forecasting burden [3].
Forecasts for such hierarchical time series should be coherent, meaning that the forecast for an upper-level time series equals the sum of forecasts for the corresponding lower-level time series [4,5]. Exponential smoothing methods are widely used in academia and industry for time series predictions [6,7]. Although these methods provide coherent forecasts for hierarchical time series, they have low accuracy, especially for rapidly changing time series. Another common approach to making coherent forecasts is the use of bottom-up and top-down methods [3,8-10]. These methods first develop base forecasts by separately predicting each time series and then reconcile those base forecasts based on their inherent hierarchical structure. The bottom-up method calculates base forecasts for bottom-level time series and then aggregates them for upper-level time series. In contrast, the top-down method calculates base forecasts only for the root (total) time series and then disaggregates them according to historical proportions of the lower-level time series. Park and Nassar [11] considered a hierarchical Bayesian dynamic proportions model for the top-down method to disaggregate upper-level forecasts sequentially. The middle-out method calculates base forecasts for intermediate-level time series and then applies the bottom-up and top-down methods to make upper- and lower-level forecasts. However, the bottom-up method often accumulates prediction errors as the time series level rises, and the top-down method cannot exploit detailed information about lower-level time series. Notably, when base forecasts are unbiased, only the bottom-up method gives unbiased coherent forecasts [12].
Hyndman et al. [12] proposed a linear regression approach that computes optimal bottom-level estimates from base forecasts and then applies the bottom-up method. This forecast reconciliation method worked well for predicting tourism demand [1] and monthly inflation [13], and the approach can be extended to hierarchical and grouped time series [14]. van Erven and Cugliari [15] devised a game-theoretically optimal reconciliation method. Regularized regression models have also been employed to deal with high-dimensional time series [16,17]. Wickramasuriya et al. [5] devised a sophisticated method for optimal forecast reconciliation through trace minimization. Their experimental results showed that this trace minimization method performed very well with synthetic and real-world datasets. Note, however, that all of these forecast reconciliation methods consist of two phases: first computing base forecasts and then reconciling those forecasts based on a hierarchical structure. This study aimed to produce better time series predictions by completing these two phases simultaneously.
Structured regularization uses inherent structural relations among explanatory variables to construct a statistical model [18-20]. Various regularization methods have been proposed for multivariate time series [21,22], hierarchical explanatory variables [23-26], and artificial neural networks [27]. Prediction of multivariate time series is related to multitask learning, which shares useful information among related tasks to enhance the prediction performance for all tasks [28,29]. Tailored regularization methods have been developed for multitask learning [30,31] and applied to artificial neural networks [32]. To the best of our knowledge, however, no prior studies have applied structured regularization methods to predictions of hierarchical time series.
In this study, we aimed to develop a structured regularization method that takes full advantage of hierarchical structure for better time series predictions. Our method is based on a prediction model for bottom-level time series and uses a structured regularization term to incorporate upper-level forecasts into the prediction model. This study particularly focused on applying our method to artificial neural networks, which have been effectively used in time series prediction [33-38]. We developed a backpropagation algorithm specialized for our structured regularization model based on artificial neural networks. Experiments involving the application of our method to synthetic and real-world datasets demonstrated that our method was comparable in terms of prediction accuracy and computational efficiency to other methods that develop coherent forecasts for hierarchical time series.

Methods
This section starts with a brief review of forecasts for hierarchical time series. For such time series, we present our structured regularization model and its application to artificial neural networks. A backpropagation algorithm is also described for artificial neural networks with structured regularization.

Forecasts for hierarchical time series
We address the prediction of multivariate time series in which each series is represented as a node in a hierarchical (tree) structure. Let y_{it} be an observation of node i ∈ N at time t ∈ T, where N is the set of nodes and T is the set of time points. For simplicity, we focus on two-level hierarchical structures. Fig 1 shows an example of a two-level hierarchical structure with |N| = 7 nodes, where |·| denotes the number of set elements. The nodes are classified as

  N = {1} ∪ M ∪ B,

where node 1 is the root (level-zero) node, and M and B are the sets of mid-level (level-one) and bottom-level (level-two) nodes, respectively; in Fig 1, M = {2, 3} and B = {4, 5, 6, 7}. The associated time series are characterized by the aggregation constraint

  y_{it} = Σ_{j ∈ C(i)} y_{jt}  (i ∈ N \ B, t ∈ T),  (1)

where C(i) is the set of child nodes of node i ∈ N \ B.

Let y_t ≔ (y_{it})_{i∈N} be a column vector comprising observations of all nodes at time t ∈ T. Similarly, for a node subset A ⊆ N, we define y_t^A ≔ (y_{it})_{i∈A} as the observation vector of nodes i ∈ A at time t ∈ T. In Fig 1, we have y_t = (y_{1t}, y_{2t}, y_{3t}, y_{4t}, y_{5t}, y_{6t}, y_{7t})^⊤ and y_t^B = (y_{4t}, y_{5t}, y_{6t}, y_{7t})^⊤. The aggregation constraint (1) is then expressed as

  y_t = S y_t^B  (t ∈ T),  (2)

where S is the summing matrix; in Fig 1, S consists of the upper-level part S̄ stacked on an identity matrix:

  S ≔ [ S̄ ; I_4 ],  S̄ ≔ [ 1 1 1 1 ; 1 1 0 0 ; 0 0 1 1 ],

where I_n is the identity matrix of size n. Equivalently,

  y_t^{N\B} = S̄ y_t^B  (t ∈ T).  (3)

Let ŷ_t ≔ (ŷ_{it})_{i∈N} be a column vector comprising base forecasts at time t ∈ T. The base forecasts are calculated separately for each node i ∈ N, so they do not generally satisfy the aggregation constraint (2). For a node subset A ⊆ N, we define ŷ_t^A ≔ (ŷ_{it})_{i∈A} at time t ∈ T. Such base forecasts can be converted into coherent forecasts satisfying the aggregation constraint (2) by means of a reconciliation matrix P ≔ (p_{ij})_{(i,j)∈B×N}. Specifically, we develop bottom-level forecasts ỹ_t^B = P ŷ_t and use the aggregation constraint (2) to obtain coherent forecasts as

  ỹ_t = S P ŷ_t  (t ∈ T).  (4)

A typical example of a reconciliation matrix is

  P = [ O_{|B|×|N\B|}  I_{|B|} ],

where O_{m×n} is an m × n zero matrix. This leads to the bottom-up method

  ỹ_t = S ŷ_t^B  (t ∈ T).  (5)

Another example is P = [ p  O_{|B|×(|N|−1)} ], where p ≔ (p_i)_{i∈B} is a column vector of historical proportions of the bottom-level time series; this leads to the top-down method. In this manner, we can make coherent forecasts from various reconciliation matrices.
The condition SPS = S ensures that when base forecasts are unbiased, the resulting coherent forecasts (4) are also unbiased [12]. Among the conventional bottom-up and top-down methods, only the bottom-up method fulfills this condition [12].
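As an illustration, the summing matrix and bottom-up reconciliation matrix for the Fig 1 hierarchy can be sketched as follows (a minimal numerical example; the base-forecast values are arbitrary):

```python
import numpy as np

# Two-level hierarchy of Fig 1: node 1 = root, M = {2, 3}, B = {4, 5, 6, 7}.
S_bar = np.array([[1, 1, 1, 1],   # node 1 = y4 + y5 + y6 + y7
                  [1, 1, 0, 0],   # node 2 = y4 + y5
                  [0, 0, 1, 1]])  # node 3 = y6 + y7
S = np.vstack([S_bar, np.eye(4)])            # 7 x 4 summing matrix

# Bottom-up reconciliation matrix P = [O | I]: keep the bottom-level
# base forecasts and aggregate them upward.
P_bu = np.hstack([np.zeros((4, 3)), np.eye(4)])

y_hat = np.array([10.0, 6.0, 5.0, 3.0, 2.0, 2.5, 2.0])  # incoherent base forecasts
y_tilde = S @ P_bu @ y_hat                               # coherent forecasts, Eq (4)
```

Here `y_tilde` satisfies the aggregation constraint by construction, and the bottom-up matrix `P_bu` fulfills the unbiasedness condition SPS = S.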

Forecast reconciliation methods
Hyndman et al. [12] introduced the following linear regression model for given base forecasts:

  ŷ_t = S β_t + ε_t  (t ∈ T),

where β_t ≔ (β_{it})_{i∈B} is a column vector of bottom-level estimates, and ε_t ≔ (ε_{it})_{i∈N} is a column vector of errors having zero mean and covariance matrix var(ε_t) ≕ Σ_t. The bottom-up method (5) with ŷ_t^B = β̂_t is then used to make coherent forecasts. If the base forecasts are unbiased and the covariance matrix Σ_t is known, generalized least-squares estimation yields the minimum-variance unbiased estimate of β_t. However, the covariance matrix Σ_t is not identifiable and therefore impossible to estimate [5].
In contrast, Wickramasuriya et al. [5] focused on the differences between observations and coherent forecasts (4),

  e_t ≔ y_t − ỹ_t = y_t − S P ŷ_t  (t ∈ T).

The associated covariance matrix is derived as

  var(e_t) = S P W_t P^⊤ S^⊤,  (6)

where W_t ≔ E[(y_t − ŷ_t)(y_t − ŷ_t)^⊤] is the covariance matrix of the base forecasts. The trace of the covariance matrix (6) is minimized subject to the unbiasedness condition SPS = S. This yields the optimal reconciliation matrix

  P = (S^⊤ W_t^{−1} S)^{−1} S^⊤ W_t^{−1},  (7)

and the coherent forecasts (4) are given by

  ỹ_t = S (S^⊤ W_t^{−1} S)^{−1} S^⊤ W_t^{−1} ŷ_t  (t ∈ T).

See Wickramasuriya et al. [5] for full details. Note, however, that in these forecast reconciliation methods, base forecasts are first determined regardless of the underlying hierarchical structure, and those forecasts are then corrected based on the hierarchical structure. In contrast, our proposal is a structured regularization model that directly computes high-quality forecasts based on the hierarchical structure.
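A minimal numerical sketch of the trace-minimizing reconciliation (7), assuming a diagonal surrogate for W_t (in practice the full covariance matrix is unknown and must be approximated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Summing matrix of the Fig 1 hierarchy (root, two mid-level, four bottom nodes).
S_bar = np.array([[1.0, 1, 1, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
S = np.vstack([S_bar, np.eye(4)])

# Diagonal surrogate for the base-forecast error covariance W_t.
W = np.diag(rng.uniform(0.5, 2.0, size=7))
W_inv = np.linalg.inv(W)

# Optimal reconciliation matrix, Eq (7): P = (S' W^-1 S)^-1 S' W^-1.
P_mint = np.linalg.solve(S.T @ W_inv @ S, S.T @ W_inv)

y_hat = rng.normal(size=7)        # incoherent base forecasts
y_tilde = S @ P_mint @ y_hat      # coherent trace-minimized forecasts
```

Because P S = I for this choice of P, the unbiasedness condition SPS = S holds, and the reconciled forecasts are coherent for any base forecasts.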

Structured regularization model
We consider a prediction model for the bottom-level time series. Its predictive value is denoted by the column vector ŷ_t^B(Θ) ≔ (ŷ_{it}(Θ))_{i∈B}, where Θ is a tuple of model parameters. As an example, the first-order vector autoregressive model is represented as

  ŷ_t^B(Θ) ≔ A y_{t−1}^B  (t ∈ T),

where Θ ≔ (A) consists of a coefficient matrix A ∈ R^{|B|×|B|}. The residual sum of squares for the bottom-level time series is given by

  Σ_{t∈T} ‖y_t^B − ŷ_t^B(Θ)‖².  (8)

We also introduce a structured regularization term that quantifies the error for the upper-level time series based on the hierarchical structure. Let Λ ≔ Diag(λ) be a diagonal matrix of regularization parameters, where λ ≔ (λ_i)_{i∈N\B} is the vector of its diagonal entries. We then construct a structured regularization term based on the aggregation constraint (3) as

  Σ_{t∈T} ‖Λ(y_t^{N\B} − S̄ ŷ_t^B(Θ))‖²,  (9)

where S̄ is the submatrix of the summing matrix S formed by the rows of the upper-level nodes (i.e., y_t^{N\B} = S̄ y_t^B). Minimizing this term helps correct the bottom-level forecasts, thus improving the upper-level forecasts.
Adding the regularization term (9) to the residual sum of squares (8) yields the objective function E(Θ) to be minimized. Consequently, our structured regularization model is posed as

  minimize_Θ  E(Θ) ≔ Σ_{t∈T} [ ‖y_t^B − ŷ_t^B(Θ)‖² + ‖Λ(y_t^{N\B} − S̄ ŷ_t^B(Θ))‖² ],  (10)

where S̄ denotes the upper-level rows of the summing matrix S. Here, the matrix Λ adjusts the tradeoff between minimizing the error term (8) for the bottom-level time series and minimizing the error term (9) for the upper-level time series. In the experiments section, we set its diagonal entries as

  λ_i ≔ λ_1 (i = 1),  λ_i ≔ λ_M (i ∈ M),

where λ_1 and λ_M are regularization parameters for the root and mid-level time series, respectively. After solving the structured regularization model (10), we use the bottom-up method (5) to obtain coherent forecasts ỹ_t. This bottom-up strategy can be disadvantageous when upper-level time series are easier to predict than bottom-level time series. To remedy this situation, we can adopt a methodology proposed by Panagiotelis et al. [39], in which the summing matrix is redefined by replacing a bottom-level time series with an upper-level time series.
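The objective (10) can be sketched for the first-order vector autoregressive bottom-level model as follows; the function and variable names are illustrative, not from the original implementation:

```python
import numpy as np

# Upper-level rows of the summing matrix for the Fig 1 hierarchy.
S_bar = np.array([[1.0, 1, 1, 1], [1, 1, 0, 0], [0, 0, 1, 1]])

def objective(A, Y_bottom, Y_upper, lam):
    """Objective E(Theta) of the structured regularization model (10)
    for the VAR(1) bottom-level model y_hat_t^B = A y_{t-1}^B.
    lam holds the diagonal entries (lam_1, lam_M, ...) of Lambda."""
    Lam = np.diag(lam)
    E = 0.0
    for t in range(1, Y_bottom.shape[1]):
        y_hat_b = A @ Y_bottom[:, t - 1]                  # bottom-level forecast
        E += np.sum((Y_bottom[:, t] - y_hat_b) ** 2)      # error term (8)
        E += np.sum((Lam @ (Y_upper[:, t] - S_bar @ y_hat_b)) ** 2)  # term (9)
    return E
```

With lam = 0 the model reduces to ordinary least squares on the bottom level, which is consistent with NN+SR(0, 0) coinciding with the bottom-up method in the experiments.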

Application to artificial neural networks
This study focused on the application of our structured regularization model (10) to artificial neural networks for time series prediction; see Bishop [40] and Goodfellow et al. [41] for general descriptions of artificial neural networks. For simplicity, we consider a two-layer neural network like the one shown in Fig 2, where the input vector z^(1) ≔ (z_i^(1))_{i∈B} is defined by lagged observations of the bottom-level time series as

  z^(1) ≔ y_{t−1}^B.  (11)

First, we calculate the vector u^(2) ≔ (u_j^(2))_{j∈D} as the weighted sum of the input entries:

  u^(2) ≔ W^(2) z^(1),  (12)

where D is the index set of hidden units, and W^(2) ≔ (w_{ji}^(2))_{(j,i)∈D×B} is a weight matrix to be estimated. This vector u^(2) is transferred from the input units to the hidden units, as shown in Fig 2, and converted elementwise by an activation function f into the vector z^(2) ≔ (z_j^(2))_{j∈D}:

  z^(2) ≔ f(u^(2)).  (13)

The vector z^(2) is transferred from the hidden units to the output units, as shown in Fig 2. Finally, we calculate the vector u^(3) ≔ (u_k^(3))_{k∈B} as the weighted sum of the output entries from the hidden units:

  u^(3) ≔ W^(3) z^(2),  (14)

where W^(3) ≔ (w_{kj}^(3))_{(k,j)∈B×D} is a weight matrix to be estimated. This process is summarized as

  ŷ_t^B(Θ) = W^(3) f(W^(2) y_{t−1}^B),

where the tuple of model parameters is

  Θ ≔ (W^(2), W^(3)).  (15)

This neural network outputs ŷ_t^B(Θ) = u^(3) as a vector of predictive values.
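The forward pass above amounts to two matrix-vector products and an elementwise activation; a minimal sketch with random placeholder weights:

```python
import numpy as np

def sigmoid(u):
    """Logistic sigmoid activation, applied elementwise."""
    return 1.0 / (1.0 + np.exp(-u))

def forward(W2, W3, z1):
    """Forward pass of the two-layer network:
    u2 = W2 z1 (Eq 12), z2 = f(u2) (Eq 13), u3 = W3 z2 (Eq 14)."""
    u2 = W2 @ z1          # weighted sums at the hidden units
    z2 = sigmoid(u2)      # hidden-unit activations
    u3 = W3 @ z2          # output = predicted bottom-level vector
    return u2, z2, u3

rng = np.random.default_rng(0)
n_b, n_d = 4, 8                       # |B| bottom-level series, |D| hidden units
W2 = rng.normal(size=(n_d, n_b))      # placeholder weights, not trained values
W3 = rng.normal(size=(n_b, n_d))
z1 = rng.normal(size=n_b)             # input: lagged bottom-level observations
u2, z2, u3 = forward(W2, W3, z1)      # u3 = y_hat_t^B(Theta)
```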

Backpropagation algorithm
We develop a backpropagation algorithm specialized for training artificial neural networks in our structured regularization model (10); see Bishop [40] and Goodfellow et al. [41] for overviews of backpropagation algorithms. Our algorithm sequentially minimizes the following error function for each time t ∈ T:

  E_t(Θ) ≔ ‖y_t^B − ŷ_t^B(Θ)‖² + ‖Λ(y_t^{N\B} − S̄ ŷ_t^B(Θ))‖²,  (16)

where S̄ denotes the upper-level rows of the summing matrix S. We first define the vectors δ^(2) ≔ (δ_j^(2))_{j∈D} and δ^(3) ≔ (δ_k^(3))_{k∈B}, which consist of the partial derivatives of the error function (16) with respect to the intermediate variables (12) and (14):

  δ^(2) ≔ ∂E_t/∂u^(2),  δ^(3) ≔ ∂E_t/∂u^(3).

Using the chain rule with Eqs (12) and (14), the partial derivatives of the error function (16) with respect to the weight matrices are given by

  ∂E_t/∂W^(2) = δ^(2) (z^(1))^⊤,  ∂E_t/∂W^(3) = δ^(3) (z^(2))^⊤.

It follows from Eq (16) that

  δ^(3) = −2 (y_t^B − u^(3)) − 2 S̄^⊤ Λ² (y_t^{N\B} − S̄ u^(3)),
  δ^(2) = f′(u^(2)) ⊙ ((W^(3))^⊤ δ^(3)),

where ⊙ denotes the elementwise product. Algorithm 1 summarizes our backpropagation algorithm.
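The deltas above can be sketched and checked against numerical differentiation as follows; this is an illustrative reimplementation, not the authors' R code:

```python
import numpy as np

# Upper-level rows of the summing matrix for the Fig 1 hierarchy.
S_bar = np.array([[1.0, 1, 1, 1], [1, 1, 0, 0], [0, 0, 1, 1]])

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def loss_and_grads(W2, W3, z1, y_b, y_u, lam):
    """Error function (16) and its gradients via the deltas."""
    Lam2 = np.diag(lam ** 2)                      # Lambda squared
    u2 = W2 @ z1                                  # Eq (12)
    z2 = sigmoid(u2)                              # Eq (13)
    u3 = W3 @ z2                                  # Eq (14): bottom-level prediction
    r_b = y_b - u3                                # bottom-level residual
    r_u = y_u - S_bar @ u3                        # upper-level residual
    E = r_b @ r_b + r_u @ Lam2 @ r_u              # error function (16)
    delta3 = -2.0 * (r_b + S_bar.T @ Lam2 @ r_u)  # dE/du3
    delta2 = z2 * (1.0 - z2) * (W3.T @ delta3)    # dE/du2 (sigmoid derivative)
    g2 = np.outer(delta2, z1)                     # dE/dW2
    g3 = np.outer(delta3, z2)                     # dE/dW3
    return E, g2, g3
```

A gradient-descent step then updates W^(2) ← W^(2) − η g2 and W^(3) ← W^(3) − η g3 with a step size η.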

Performance evaluation methodology
To evaluate out-of-sample prediction performance, we considered training and test periods of the time series data, where the training period was used to train prediction models, and the test period was used to compute prediction errors of the trained models. We calculated the root-mean-squared error (RMSE) for each node i ∈ N during the test period T̄ as

  RMSE_i ≔ sqrt( (1/|T̄|) Σ_{t∈T̄} (y_{it} − ỹ_{it})² ).

Here, we determined the parameter values for n and α (the tuning parameters of the baseline smoothing methods) that minimized the RMSE in the training period. During the training period, we also tuned the regularization parameters λ_1 and λ_M through hold-out validation [42]. We adopted two-layer artificial neural networks (Fig 4) for NN+BU, NN+MinT, and NN+SR, which denote neural networks combined with the bottom-up method, the trace minimization reconciliation method, and our structured regularization, respectively. Here, the prediction ŷ_{it}(Θ) of each time series depends on its own two lags y_{i,t−1} and y_{i,t−2}, and backpropagation simultaneously updates the weight parameters of all the series. Note also that NN+BU is equivalent to NN+SR(0,0), i.e., NN+SR with λ_1 = λ_M = 0. Following prior studies [43,44], we set the number of hidden units to twice the number of input units (i.e., |D| = 4·|B|). Bias parameters were added to the hidden and output units.
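The RMSE computation is straightforward; a minimal sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Out-of-sample RMSE of one node over the test period:
    the square root of the mean squared difference between
    observations and coherent forecasts."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```

Averaging this quantity over the mid-level or bottom-level nodes gives the "Mid-level" and "Bottom-level" rows reported in the results tables.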
We implemented the backpropagation algorithm (Algorithm 1) in the R programming language, with the convergence threshold set as ε = 5 × 10⁻⁵. The step size was kept constant at η = 1 × 10⁻⁵, which was small enough for the backpropagation algorithm to converge. We employed the logistic sigmoid function (13) as the activation function. The algorithm was run 30 times with initial values of the parameter Θ randomly generated from a standard normal distribution. The following sections show average RMSE values with 95% confidence intervals over the 30 trials.

Synthetic datasets
We generated common factors to express correlations among time series. Denote by N(μ, σ²) a normal distribution with mean μ and standard deviation σ. The common factors were generated by the first-order autoregressive model

  ψ_{it} = φ_i ψ_{i,t−1} + ε_{it},  ε_{it} ~ N(0, σ_i²)  (i ∈ {1} ∪ M, t ∈ T),

where φ_i is an autoregressive coefficient, and σ_i is the standard deviation of white noise for the ith common factor. Note that ψ_{it} reflects the overall trend for i = 1 and mid-level trends for i ∈ M = {2, 3, 4}. Bottom-level time series were produced by combining the overall trend, mid-level trends, and autocorrelation. Denoting the parent (mid-level) node of node i ∈ B by m(i) ∈ M, we generated

  y_{it} = φ_i y_{i,t−1} + ρ_i ψ_{1t} + θ_i ψ_{m(i),t} + ε_{it},  ε_{it} ~ N(0, σ_i²)  (i ∈ B, t ∈ T),

where ρ_i and θ_i respectively indicate the effects of the common factors ψ_{1t} and ψ_{m(i),t} on the ith time series. After that, we generated the upper-level time series (y_{it} for i ∈ N\B) according to the aggregation constraint (2). We prepared three synthetic datasets: NgtvC, WeakC, and PstvC. Table 1 lists the parameter values used to generate these datasets. Time series are negatively correlated in the NgtvC dataset, weakly correlated in the WeakC dataset, and positively correlated in the PstvC dataset. Each dataset consists of time series data at 100 time points; the first 70 and last 30 time points were used as the training and test periods, respectively.
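A generating process of this kind can be sketched as follows; the hierarchy shape (two bottom nodes per mid-level node), the coefficients, and the seed are illustrative stand-ins for the actual Table 1 settings:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100                     # time points: first 70 train, last 30 test

def ar1(phi, sigma, T, rng):
    """First-order autoregressive series with N(0, sigma^2) white noise."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

# Common factors: overall trend psi_1 and mid-level trends psi_m, m in M.
psi1 = ar1(0.8, 1.0, T, rng)
psi_mid = {m: ar1(0.8, 1.0, T, rng) for m in (2, 3, 4)}

# Bottom-level series: own AR(1) dynamics plus common-factor effects
# rho_i (overall trend) and theta_i (parent mid-level trend).
bottoms = []
for m in (2, 3, 4):
    for _ in range(2):                       # two bottom nodes per parent
        rho, theta = rng.uniform(-1, 1, size=2)
        y = ar1(0.5, 0.5, T, rng) + rho * psi1 + theta * psi_mid[m]
        bottoms.append(y)
Y_B = np.array(bottoms)                      # 6 bottom-level series

# Upper-level series follow from the aggregation constraint (2).
y_mid = Y_B.reshape(3, 2, T).sum(axis=1)     # mid-level nodes 2-4
y_root = Y_B.sum(axis=0)                     # root node 1
```

Choosing the signs of rho and theta controls whether the resulting bottom-level series are negatively, weakly, or positively correlated, as in the NgtvC, WeakC, and PstvC datasets.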
We standardized the generated time series according to the mean and variance over the training period when using artificial neural networks. Specifically, we standardized each bottom-level time series for NN+BU and NN+SR and summed them appropriately to calculate the upper-level time series for NN+SR. We standardized each time series at all levels for NN+MinT. We then computed predictive values for these time series and transformed the obtained predictive values back to the original (unstandardized) scale. After that, we applied the bottom-up method (NN+BU and NN+SR) and the forecast reconciliation method (NN+MinT) to make coherent forecasts. Finally, we calculated RMSEs on the original scale to evaluate prediction performance.

Results for synthetic datasets
Tables 2-4 list the out-of-sample RMSE values provided by each method for each node in the NgtvC, WeakC, and PstvC datasets. In the tables, the rows labeled "Mid-level" and "Bottom-level" show the average RMSE values over the mid- and bottom-level nodes, respectively, with 95% confidence intervals. For the NgtvC dataset (Table 2), our structured regularization method NN+SR was comparable to the forecast reconciliation method and outperformed the other methods, except for the RMSE of the root node. For the WeakC dataset (Table 3), our method was slightly inferior to the exponential smoothing method, but the differences were minimal. For the PstvC dataset (Table 4), our method attained the best prediction performance in terms of average RMSE. These results show that our structured regularization method delivered good prediction performance for the three synthetic datasets. Our method was especially effective when the time series were strongly correlated, as in the NgtvC and PstvC datasets.
We next focus on the parameter values for our structured regularization. Only for the PstvC dataset did our method NN+SR(λ_1, λ_M) adopt λ_1 > 0, and it performed significantly better than the bottom-up method in terms of the RMSE of the root node. Additionally, our method employed λ_M > 0 for all three datasets and outperformed the bottom-up method in mid-level RMSEs. These results show an association between the regularization weights and the prediction accuracy at each time series level. Our method adjusts the regularization parameters to fit the data characteristics, thereby achieving better prediction performance.

Fig 5 shows the training RMSE values as a function of the epoch (number of iterations) of the backpropagation algorithm for the synthetic datasets. Note that computational efficiency can be evaluated in terms of epochs because there was little difference between NN+SR and NN+BU in the computation time required for one epoch.
RMSEs decreased faster for our structured regularization method NN+SR than for the bottom-up method NN+BU. The convergence performance of the two methods differed greatly, especially for the PstvC dataset and the upper-level time series. Consequently, our structured regularization method improved both the prediction accuracy and the convergence speed of the backpropagation algorithm. This suggests that our method will deliver good prediction performance even if the backpropagation algorithm is terminated in the middle of computation.

Fig 6 shows heat maps of the out-of-sample relative RMSE values provided by our structured regularization method NN+SR(λ_1, λ_M) for the synthetic datasets. Note that the RMSE values in Fig 6 were normalized such that the RMSE for (λ_1, λ_M) = (0, 0) was zero (white) in each trial. Accordingly, the corresponding regularization is effective if the relative RMSE value is negative (red) and ineffective if the relative RMSE value is positive (blue). RMSEs were consistently reduced in the NgtvC dataset. Regularization was particularly effective for the root time series in the PstvC dataset. RMSE values tend to vary drastically from left to right in each heat map, which suggests that the regularization for the mid-level time series greatly impacted prediction performance.

Real-world datasets
We downloaded historical data describing unemployment rates in Japan from e-Stat, a portal site for official Japanese statistics (https://www.e-stat.go.jp/en). Using these data, we prepared three real-world datasets for Japanese regions: Tohoku, Chubu, and Kansai. Table 5 lists the prefectures forming the two-level hierarchical structure (Fig 3).
We used quarterly statistics (model-based estimates) of unemployment rates during 90 time periods from January 1997 to June 2019, taking the first 60 and last 30 time periods as the training and test periods, respectively. As a preprocessing step, we removed seasonal and trend components by means of the stl function in the R stats package. We next calculated the upper-level time series according to the aggregation constraint (2). After that, we standardized the time series in the same way as for the synthetic datasets.

Results for real-world datasets
Tables 6-8 list the out-of-sample RMSE values provided by each method for each node in the Tohoku, Chubu, and Kansai datasets. For the Tohoku dataset (Table 6), our structured regularization method NN+SR was comparable to the forecast reconciliation method and substantially outperformed the other methods. For the Chubu dataset (Table 7), our method attained the second-best value for average RMSE.
For the Kansai dataset (Table 8), our method greatly exceeded the prediction performance of the other methods. These results demonstrate that our structured regularization method achieved good prediction performance for the three real-world datasets.

Fig 7 shows the training RMSE values as a function of the epoch of the backpropagation algorithm for the real-world datasets. RMSEs again decreased faster for our structured regularization method NN+SR than for the bottom-up method NN+BU. For the Tohoku and Chubu datasets, our method greatly accelerated convergence for the upper-level time series. For the Kansai dataset, our method was superior to the bottom-up method in terms of both prediction accuracy and convergence speed. These results suggest that our structured regularization method improves the convergence performance of the backpropagation algorithm.

Fig 8 shows heat maps of the out-of-sample relative RMSE values provided by our structured regularization method NN+SR(λ_1, λ_M) for the real-world datasets.

Conclusion
We proposed a structured regularization model for predicting hierarchical time series. Our model uses a regularization term that improves upper-level forecasts so as to correct bottom-level forecasts. We demonstrated the application of our model to artificial neural networks for time series prediction and developed a backpropagation algorithm specialized for training the resulting model. We investigated the efficacy of our method through experiments using synthetic and real-world datasets. The experimental results demonstrated that our method, which can adjust regularization parameters to fit data characteristics, compared favorably in prediction performance with other methods that develop coherent forecasts for hierarchical time series. Our regularization term also accelerated the backpropagation algorithm, and regularization for the mid-level time series was closely related to prediction performance.
One contribution of this study is the establishment of a new computational framework of artificial neural networks for time series prediction. Moreover, our experiments using synthetic and real-world datasets demonstrated the potential of specialized prediction methods for hierarchical time series. We hope that this study will stimulate further research on exploiting structural properties for better time series predictions.
In future studies, we will extend our structured regularization model to other time series prediction methods, such as the autoregressive integrated moving average model [6,7] and support vector regression [8]. Another direction for future research is to develop a high-performance estimation algorithm for our method based on various mathematical optimization techniques [45-50].