Assessment of resampling methods for causality testing: A note on the US inflation behavior

Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As an appropriate test statistic for this setting, the partial transfer entropy (PTE), an information-theoretic and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no-causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic, since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates of the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil prices cause inflation, conditioning on money supply, in the post-1986 period. However, this relationship cannot be explained on the basis of traditional cost-push mechanisms.


Introduction
Connectivity analysis of multivariate time series is a rapidly growing branch of interest with applications in different fields, such as economics, climatology and brain dynamics. A variety of methods have been developed to uncover complex dynamical structures. Other types of surrogates, such as the twin surrogates [41], have also been suggested extensively in applications, e.g. see [42-45].
In this work, we conduct an exploratory study on resampling time series under the H0 of no causal effect and compare seven resampling techniques with regard to the size and power of the significance test, using the PTE as test statistic. Specifically, we combine two resampling techniques, 1) the time-shifted surrogates [40] and 2) the stationary bootstrap [38], with three independence settings of the time series adapted for the non-causality test (giving six resampling methods): A) resampling only the time series of the driving variable, B) resampling independently the driving and the response time series, and C) resampling separately the driving and the response time series, while destroying the dependence of the future and past of the response variable. To the best of our knowledge, schemes B) and C) in conjunction with randomization or bootstrap have not been considered in any methodological study or application. We also introduce a new (seventh) method by bootstrapping contemporaneously the driving and the response time series. In this case, the bootstrap PTE values are centered to zero, since the H0 of no causal effects is not satisfied.
The empirical distribution of the PTE, as well as the size and power of the significance test, for the seven resampling methods are assessed in a simulation study. Some first results on the aforementioned resampling methods have already been presented in [46] and [47].
Here, we extend the study of the examined resampling methods in order to establish their performance.
Finally, to demonstrate the performance of the PTE in conjunction with the seven resampling methods on real data, we investigate the possible sources of US inflation in the post-Volcker era, utilizing two 3-variate systems built on the Consumer Price Index for All Urban Consumers, the core CPI, the money supply and the price of crude oil. Empirical results provide evidence of a statistically significant direct causal relationship between oil prices and US inflation, with dynamics not comparable to the oil episodes of the 1970s.

Partial transfer entropy
The TE quantifies the amount of information explained in a response variable Y at one time step ahead by the state of a driving variable X, accounting for the concurrent state of Y. Let {x_t, y_t}, t = 1, ..., n, be the observed time series of the two variables. We define the reconstructed state space vectors of the variables as x_t = (x_t, x_{t-τ}, ..., x_{t-(m-1)τ})′ and y_t = (y_t, y_{t-τ}, ..., y_{t-(m-1)τ})′, where m is the embedding dimension and τ the time delay. The TE from X to Y constitutes the conditional mutual information I(y_{t+1}; x_t | y_t), given as [6]

TE_{X→Y} = I(y_{t+1}; x_t | y_t) = Σ p(y_{t+1}, x_t, y_t) log [ p(y_{t+1} | x_t, y_t) / p(y_{t+1} | y_t) ]
         = H(x_t, y_t) − H(y_{t+1}, x_t, y_t) + H(y_{t+1}, y_t) − H(y_t),   (1)

where the TE is expressed either based on the probability distributions p(·) (here defined for the discretized variables) or on the entropy terms H(·), where H(x) = −∫ f(x) log f(x) dx is the differential entropy of the vector variable x with probability density function f(x). We note that m and τ are set to the same values for both variables, as suggested in [29].
The partial transfer entropy (PTE) is the multivariate extension of the transfer entropy (TE) [8,9]. The PTE accounts for the direct coupling of X to Y conditioning on the remaining variables of a multivariate system, collectively denoted by Z. It is defined as

PTE_{X→Y|Z} = I(y_{t+1}; x_t | y_t, z_t) = H(x_t, y_t, z_t) − H(y_{t+1}, x_t, y_t, z_t) + H(y_{t+1}, y_t, z_t) − H(y_t, z_t).   (2)

The estimation of the PTE relies on the estimation of the joint probability density functions in the expression of the entropies. Different types of estimators for the TE and PTE exist, such as histogram-based (e.g. by discretizing the variables to equidistant intervals [48]), kernel-based [49] and based on correlation sums [50]. In this paper, we choose the nearest neighbor estimator [51], which is particularly effective for high-dimensional data [18]. This estimator uses the distances between the reconstructed state space vectors to estimate the joint and marginal densities. For each reference point, viewed in the largest state space, a distance length is defined as the distance to the k-th nearest neighbor. Densities in the projected subspaces are then formed locally by counting the points within this distance from each reference point. Thus, the free parameter in the estimation of the entropies is the number of neighbors k.
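As a minimal illustration of Eq. (1), the following sketch estimates the TE for m = 1 and τ = 1 with the simple histogram (equidistant-binning) estimator of [48], not the nearest neighbor estimator adopted in this paper; the function name, the bin count and the use of the natural logarithm are our own illustrative choices.

```python
import numpy as np

def transfer_entropy_binned(x, y, n_bins=8):
    """TE(X -> Y) for m = 1, tau = 1 via equidistant binning:
    TE = H(x_t, y_t) - H(y_{t+1}, x_t, y_t) + H(y_{t+1}, y_t) - H(y_t)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    def H(*cols):
        # joint Shannon entropy (natural log) of the discretized columns
        _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))

    # discretize each series to n_bins equidistant intervals
    bx = np.digitize(x, np.linspace(x.min(), x.max(), n_bins + 1)[1:-1])
    by = np.digitize(y, np.linspace(y.min(), y.max(), n_bins + 1)[1:-1])
    yf, xc, yc = by[1:], bx[:-1], by[:-1]   # y_{t+1}, x_t, y_t
    return H(xc, yc) - H(yf, xc, yc) + H(yf, yc) - H(yc)
```

With a strong driving X → Y (e.g. y_{t+1} = x_t), the estimate in the driving direction clearly exceeds the one in the opposite direction, although the plug-in histogram estimator is biased for finite samples, which is precisely why a significance test is needed.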
Theoretically, causality measures including the PTE should be zero in the case of no causal effects. However, various issues, such as the estimation method for the entropies (and hence the densities), the selection of the embedding parameters, the finite sample size and the inherent dynamics of each subsystem [29], may introduce bias. In order to determine whether a PTE value indicates a weak coupling or is not statistically significant, resampling methods are employed.

Resampling methods
Our examined null hypothesis H0 is that there is no direct causal effect from X to Y, or more specifically that PTE_{X→Y|Z} = 0, i.e. I(y_{t+1}; x_t | y_t, z_t) = 0. In order to generate resampled time series representing H0, we consider two resampling techniques, 1) the time-shifted surrogates and 2) the stationary bootstrap, and combine them with three independence settings. Thus six resampling methods (cases 1A to 2C) are formulated to test H0. In addition, we introduce a seventh resampling method that is based on the stationary bootstrap and does not directly comply with H0.
Resampling techniques. 1) Time-shifted surrogates. Let us consider two variables X and Y and their corresponding time series {x_1, ..., x_n} and {y_1, ..., y_n}. The time-shifted surrogates are generated so that they preserve the dynamics of the original time series, while the couplings between X and Y are destroyed [40]. They are formed by cyclically time-shifting the components of a time series. In more detail, for the time series {x_1, ..., x_n}, an integer d is randomly chosen and the first d values of the time series are moved to the end, giving the time-shifted surrogate time series {x*_t} = {x_{d+1}, ..., x_n, x_1, ..., x_d}. The shift d is drawn from the discrete uniform distribution on the range [0.05n, 0.95n] in order to ensure disruption of the time order of the original time series even in the presence of strong autocorrelation.
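The construction above can be sketched in a few lines; the function name and the use of NumPy are our own, while the shift range [0.05n, 0.95n] follows the text.

```python
import numpy as np

def time_shifted_surrogate(x, rng=None):
    """Cyclically shift the series by a random d in [0.05n, 0.95n]:
    preserves the dynamics of x while destroying couplings to other series."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x)
    n = len(x)
    d = int(rng.integers(int(0.05 * n), int(0.95 * n) + 1))
    return np.concatenate([x[d:], x[:d]])
```

The surrogate is a cyclic rotation, so it contains exactly the same values as the original series, in the same temporal order up to a single wrap point.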
2) The stationary bootstrap. The stationary bootstrap was introduced in [38] to adapt the bootstrap to correlated data. By construction, the stationary bootstrap does not destroy the time dependence of the data; it replicates the correlations by resampling blocks of data, with block lengths following a geometric distribution. For a fixed probability p, block lengths L_i are generated with probability P(L_i = k) = (1 − p)^{k−1} p for k = 1, 2, .... The starting time points of the blocks I_i are drawn from the discrete uniform distribution on {1, ..., n − k}. A bootstrap time series {x*_t} is formed by starting with a random block B_{I_1,L_1} = {x_{I_1}, x_{I_1+1}, ..., x_{I_1+L_1−1}}, and adding blocks until length n is reached.
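A minimal sketch of the stationary bootstrap follows. For simplicity, this version wraps blocks circularly around the end of the series (a common variant of the scheme), rather than restricting the starting points to {1, ..., n − k} as in the text; the function name and default p are illustrative.

```python
import numpy as np

def stationary_bootstrap(x, p=0.1, rng=None):
    """Concatenate blocks of geometric length (mean 1/p) starting at random
    positions; blocks wrap circularly around the end of the series."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x)
    n = len(x)
    pieces, total = [], 0
    while total < n:
        k = int(rng.geometric(p))            # P(L = k) = (1 - p)^(k - 1) p
        i = int(rng.integers(0, n))          # block starting point
        idx = (i + np.arange(k)) % n         # circular wrap
        pieces.append(x[idx])
        total += k
    return np.concatenate(pieces)[:n]
```

The mean block length 1/p controls how much of the original serial dependence is retained in the resampled series.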
Independence settings. The three independence settings presented below apply to both the time-shifted surrogate and the stationary bootstrapped time series.
A. The first setting is to resample only the time series of the driving variable X. This constitutes the standard approach of the surrogate test for the significance of causality measures [18,40,52,53]. The intrinsic dynamics of the variable X is preserved in the resampled time series {x*_t}, but the coupling between X* and Y is destroyed. Thus, H0 is approximated and PTE_{X*→Y|Z} = 0. The variables X and Y, as well as X and Z, are independent, while the pair (Y, Z) preserves its interdependence.
B. The second scheme resamples both the driving variable X and the response variable Y, i.e. the resampled time series {x*_t} and {y*_t} are generated. Again, the intrinsic dynamics of both X and Y are preserved but the coupling between them is destroyed, so that PTE_{X*→Y*|Z} = 0. Here, independence holds for all variable pairs (X, Y), (Y, Z) and (X, Z). Nevertheless, there is still no complete independence between all arguments in the definition of the PTE, since, by construction of {y*_t}, y_{t+1} preserves its dependence on y_t.

C. The third scheme establishes complete independence of all the terms involved in the definition of the PTE, i.e. in addition to the resampling of X and Y, y_{t+1} is also resampled separately. Technically, we first form the reconstructed vectors of X and Y and then randomly shuffle them independently for each time series. In this way, the time dependence between y_{t+1}, x_t and y_t is destroyed and they become independent. Further, z_t becomes independent of x_t and y_t but not of y_{t+1}.
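The three settings can be sketched, for m = 1 and τ = 1, as a thin wrapper around any univariate resampling function (e.g. the time-shifted surrogate or the stationary bootstrap above); the function and argument names here are our own illustration, not the paper's implementation.

```python
import numpy as np

def resample_setting(x, y, setting, resample, rng):
    """Apply independence setting 'A', 'B' or 'C' (m = 1, tau = 1), given a
    univariate resampling function resample(series, rng).
    Returns aligned (x_t, y_t, y_{t+1}) arrays for the PTE arguments."""
    x, y = np.asarray(x), np.asarray(y)
    if setting == 'A':
        xs, ys = resample(x, rng), y         # resample only the driver
    else:                                    # 'B' and 'C': resample both
        xs, ys = resample(x, rng), resample(y, rng)
    yf = ys[1:].copy()
    if setting == 'C':
        rng.shuffle(yf)                      # destroy dependence of y_{t+1} on y_t
    return xs[:-1], ys[:-1], yf
```

Setting A leaves the response intact, B resamples both series independently, and C additionally shuffles the future of the response separately from its past.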
The seventh resampling method uses the stationary bootstrap to resample contemporaneously the driving and the response time series (X, Y). The resampled time series are not consistent with H0, because the coupling of X and Y is not destroyed. In order to obtain an accurate sampling distribution of the mean of the test statistic, one can take into consideration that the mean value of the test statistic is zero under H0. The idea is that √n (PTE − E(PTE)), where E(PTE) is the mean of the PTE, can be distributed similarly for series that comply with H0 (E(PTE) = 0) and series that do not (E(PTE) > 0); it is assumed that √n (PTE − E(PTE)) tends to the normal distribution with zero mean and known variance [38]. Since our goal is to compare the different resampling methods, no results for this approximation of the true distribution are discussed. By centering the distribution of the bootstrap PTE values around zero, we get an approximation of the null distribution of the PTE. Thus, this resampling method can be employed to test H0, provided that the null distribution of the bootstrap values of the test statistic is shifted to have mean zero. It is labelled 2D to stress that it is the fourth setting for the stationary bootstrap.
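The centering step of method 2D amounts to shifting the bootstrap PTE values to zero mean (the value of the statistic under H0) before comparing them with the original value; a minimal sketch, with an illustrative function name:

```python
import numpy as np

def centered_bootstrap_pvalue(q0, q_boot):
    """One-sided p-value in the spirit of method 2D: center the bootstrap
    statistics at zero mean and compare the original statistic q0 against
    the centered distribution."""
    q_boot = np.asarray(q_boot, dtype=float)
    centered = q_boot - q_boot.mean()        # null distribution has mean zero
    # proportion of centered values at least as large as the observed statistic
    return (1 + np.sum(centered >= q0)) / (1 + len(centered))
```

If the original value lies far in the right tail of the centered distribution, H0 is rejected.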

Simulation study
We apply the significance test for the PTE with the seven resampling methods to multiple realizations of various simulation systems. Specifically, we estimate the PTE from 1000 realizations per simulation system. For each realization and each resampling method, M = 100 resampled time series are generated. Let q_0 denote the PTE value from one realization of a system and q_1, q_2, ..., q_M the PTE values from the resampled time series for this particular realization and a specific resampling method. The rejection of the H0 of no causal effects is decided by the rank ordering of the PTE values computed on the original time series, q_0, and the resampled time series, q_1, ..., q_M. For the one-sided test, if r_0 is the rank of q_0 when ranking the list q_0, q_1, ..., q_M in ascending order, the p-value of the test is 1 − (r_0 − 0.326)/(M + 1 + 0.348), applying the correction in [54].
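The rank-based p-value with the correction of [54] can be sketched as follows (tie handling is simplistic here, and the function name is our own).

```python
import numpy as np

def surrogate_pvalue(q0, q_surr):
    """One-sided p-value from the rank of q0 among the resampled statistics,
    p = 1 - (r0 - 0.326) / (M + 1 + 0.348), following the correction in [54]."""
    q_surr = np.asarray(q_surr, dtype=float)
    M = len(q_surr)
    r0 = np.sum(q_surr < q0) + 1      # rank of q0 in ascending order (ties ignored)
    return 1 - (r0 - 0.326) / (M + 1 + 0.348)
```

An original statistic larger than all M resampled values yields a p-value close to zero, triggering rejection of H0 at α = 0.05.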
We consider two time series lengths: n = 512 and 2048. The calculation of the PTE relies on the phase space reconstruction [56,57]; specifically for PTE see [8]. Since all the simulation systems are discrete in time we set the time delay τ equal to one, while the embedding dimension m is identical for all variables, which is reported to be the best strategy [29], and for each system it is set according to its complexity, i.e. taking into account the maximum delay in the equations of each system. The number of nearest neighbors for the estimation of the probability distributions equals 10 (the choice of k does not substantially affect the estimation of PTE [53,58]).
To investigate the performance of the significance tests for the PTE with the different resampling methods, we use the sensitivity of the PTE, i.e. the percentage of rejection of H0 when there is true direct causality, as well as the specificity of the PTE, i.e. the percentage of no rejection of H0 when there is no direct causality, at the significance level α = 0.05. The notation X2 → X1|Z denotes the Granger causality from X2 to X1, accounting for the presence of the confounding variables Z = X3, ..., XK, where K is the number of observed variables. For brevity, we use the notation X2 → X1 instead of X2 → X1|Z, implying the conditioning on the confounding variables; the same holds for the remaining pairs of variables.

System 1. The PTE is negatively biased; the mean PTE values from the 1000 realizations in all directions are negative when c = 0 (Table 1). For c = 0.3 and c = 0.5, the PTE is larger when direct couplings exist (X1 → X2, X2 → X3) and rises with n. Regarding the indirect coupling X1 → X3, the PTE slightly increases with n as c increases, reaching the highest mean value for c = 0.5 (mean PTE X1 → X3 = 0.0004 for n = 512 and 0.0071 for n = 2048). For the rest of the couplings, the PTE is negative at the same level regardless of c or n. The occurrence of many negative values of the PTE indicates the need for a significance test.
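Given per-direction p-values over the realizations, the sensitivity and specificity used throughout this simulation study can be computed as in the following sketch (names and data layout are illustrative).

```python
import numpy as np

def sensitivity_specificity(pvalues, true_links, alpha=0.05):
    """pvalues: dict mapping a direction label to its p-values over realizations.
    Sensitivity: rejection rate of H0 on the true causal links;
    specificity: non-rejection rate on the remaining directions."""
    rej = {d: np.mean(np.asarray(p) < alpha) for d, p in pvalues.items()}
    sens = float(np.mean([rej[d] for d in rej if d in true_links]))
    spec = float(np.mean([1.0 - rej[d] for d in rej if d not in true_links]))
    return sens, spec
```

High sensitivity means the true couplings are detected in most realizations; high specificity means absent couplings are rarely flagged.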
We evaluate how the null distribution of the PTE from the seven resampling methods differs with respect to the original PTE values. For c = 0, all of them correctly indicate the absence of couplings, as the percentage of rejection at α = 0.05 is not larger than 5% (Table 2). For c = 0.3, the true couplings are identified again. However, spurious and indirect couplings are indicated as well for setting A, and less so for B. Similar performance is observed when the coupling strength is strong (c = 0.5), and large percentages are obtained for the indirect coupling X1 → X3 in all schemes.
The sensitivity of the PTE is assessed from the two true causal links, i.e. X1 → X2 and X2 → X3, since we calculate the proportion of 'positives' (true causal links) that are correctly identified. A high sensitivity is established by a high percentage of significant PTE values over the 1000 realizations for these two couplings, which means that the PTE correctly detects the true causal effects. Similarly, the specificity of the PTE is decided by the percentage of significant PTE values for the directions with no direct coupling.

Table 1. Mean PTE values from 1000 realizations of system 1 for n = 512 and 2048, highlighted at the directions of the true couplings.

Concerning the first six resampling methods, the percentage of erroneously rejected H0 for non-existing or indirect couplings tends to increase with c and the time series length n, the most robust methods being 1C and 2C. It turns out that when the resampled time series become more independent (from A to C), the percentage of spurious couplings decreases. This is so because the null distribution for the test is somewhat more spread out and displaced to the right as the resampling changes from the least independent scheme (setting A) to the most independent one (setting C) (Fig 1).
The resampling method 2D seems to be the most effective one, as it attains the highest percentage of rejection for true direct couplings and the lowest percentage of rejection for no direct coupling. We note that the green dots are not displayed in Fig 1a because they exceed the axis range; we kept the same range of PTE values (y-axis) in all subfigures in order to straightforwardly compare the different cases.
We are interested in the spread of the resulting surrogate null distribution. Thus, we display some indicative results for the mean value of the means and standard deviations of the surrogate PTE values over all the realizations for the direction X1 → X2 and for time series length n = 512 in Table 3. The more independent the setting (from A to B to C), the greater the median and the mean (as shown in Fig 1 and Table 3, respectively) and the larger the spread of the distribution of the surrogate PTE values, while case 2D features one of the greatest spreads.

System 2. The mean PTE values from 1000 realizations of the second system are all positive, and the PTE for the directions of the true couplings is larger, with the exception of X2 → X3, which is at the level of no direct coupling and does not significantly increase with n (Table 4). The level of the PTE for the uncoupled directions varies from 0.0014 to 0.0097 and decreases with n.
The true couplings X2 → X1, X1 → X3, X4 → X2 are well established by the significance test (Table 5). The weak coupling X2 → X3 is detected only by setting A (1A and 2A), with the power of the test increasing with n. No spurious causalities are identified by the first six resampling methods (the percentage of significant PTE values varies from 0% to 6% for the uncoupled directions); however, method 2D wrongly identifies the couplings X2 → X4 and X3 → X4, giving a much higher percentage than the nominal size 5%. The surrogate/bootstrap PTE values seem to increase as the resampled time series become more independent. This can be clearly observed when comparing settings A and B, as shown in Fig 2 for the strong coupling X2 → X1 and Fig 3 for the weak coupling X2 → X3. The bootstrap PTE values for method 2D are centered around zero by construction, while the surrogate/bootstrap PTE values for the other six resampling methods are positively biased. Their distribution becomes wider as the resampling method gets more independent (A to C), with method 2D having the widest one. The latter performs poorly because the distribution of the bootstrap PTE values is much wider compared to the other ones and the original PTE value falls within the tail of this distribution (Fig 3, case 2D).

System 3. The mean PTE values from 1000 realizations of the third system are presented in Table 6. Slightly negative PTE values are obtained for the uncoupled directions, while some positive ones come up for the directions of the true couplings. Positive values are also estimated for large coupling strength and indirect causal effects (e.g. X2 → X4), but they are much smaller compared to those for direct causal effects. No couplings are found in the uncoupled case (c = 0) for system 3 (Table 7). A table including the percentage of significant PTE values for system 3 for all the directions is available as a Supporting file (S1 Table).
The percentage of significant PTE values ranges from 0% to 5.6% for all the resampling methods and both time series lengths. The PTE is also effective when couplings are present: when c = 0.2, its sensitivity increases with n, and when c = 0.4 the highest sensitivity tends to be obtained even for small n.
The results for method 2D are similar to those for methods 1C and 2C. All the true couplings are well identified, while spurious couplings are found at a percentage higher than 5% only in three instances for c = 0.4 and n = 2048: X1 → X3 (5.8%), X2 → X4 (9.4%) and X3 → X5 (15.4%).
As the resampled time series become less dependent, we observe a loss in the power of the test for n = 512, especially when couplings are not very strong. Regarding the size of the test, for c = 0.2 the percentage of rejections for indirect (e.g. X2 → X4) or no coupling (e.g. X5 → X4) is modestly above the 5% level only for 1A and 2A, while for c = 0.4 it is substantially higher for 1A and 2A and lower for 1B and 2B. For example, we obtain for scheme 1A and n = 2048: 50.5% for X1 → X3 (indirect coupling), 22.2% for X2 → X1 (no coupling), 56.8% for X2 → X4 (indirect coupling), 19.7% for X3 → X2 (no coupling), 62.2% for X3 → X5 (indirect coupling), 22.9% for X4 → X3 (no coupling) and 14.1% for X5 → X4 (no coupling). Comparable results are obtained with scheme 2A. When considering more independent resampled time series, the corresponding percentages for indirect and no couplings decrease, e.g. for method 1B and n = 2048: 27.5% for X1 → X3, 20% for X2 → X1, 21.4% for X2 → X4, 3.7% for X3 → X2, 28% for X3 → X5, 4.1% for X4 → X3 and 4.7% for X5 → X4. Similar results are observed for 2B. The correct test size, i.e. the probability of falsely rejecting the null hypothesis being close to α = 0.05, is attained only with the resampling methods of type C; the percentage of significant PTE values for the uncoupled cases varies from 0% to 4.7% for both 1C and 2C and both n, while spurious causality is detected for settings A and B. As n and c increase, the percentage of those spurious indications increases.

Application
In the effort to provide further evidence on the possible sources of US inflation in the post-Volcker era, we try to gain insights from the application of the PTE by employing the aforementioned resampling methods. For this reason, we create two 3-variate systems of real economic variables, the first one consisting of monthly observations of the US Consumer Price Index for All Urban Consumers (CPI), the money supply (M2, Billions of Dollars) and crude oil prices (West Texas Intermediate-Cushing, Oklahoma, Dollars per Barrel), while the second one is obtained by replacing the CPI with the core CPI (Fig 4). The data are not seasonally adjusted and the sample spans from 01-01-1986 to 01-02-2014. We used the longest sample available at the time the application was implemented in order to ensure PTE accuracy. Since in the post-2009 period US inflation reached very low values in association with the QE strategy of the Federal Reserve, we strongly believe that our findings over the period of interest (i.e. before the crisis of 2007-2009) are not qualitatively affected. To assess the impact of restricting the sample until 2009, we re-estimated the PTE for both systems. In the first case, we observe a feedback between CPI inflation and crude oil changes, while for the 2nd system identical causal relationships appear. Prices are transformed into growth rates by taking their first logarithmic differences, giving inflation (Y1) in the case of the CPI, core inflation (Y11), M2 returns (Y2) and oil price changes (Y3). For the assessment of the statistical significance of the PTE, we use the seven resampling methods described above. The embedding dimension for the estimation of the PTE is set equal to one (m = 1), as often used for log-differenced data, which are expected to have very short memory [59], and the number of nearest neighbors is ten (k = 10).
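The transformation of the price series into growth rates is the standard first logarithmic difference; for completeness, a one-line sketch:

```python
import numpy as np

def log_returns(prices):
    """Growth rates as first logarithmic differences: r_t = log(p_t / p_{t-1})."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))
```

The resulting series has one fewer observation than the original and is approximately the percentage change for small movements.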
The empirical findings from the application of the PTE on the 1st 3-variate system consistently reveal the coupling oil (Y3) → inflation (Y1). The fact that this linkage becomes statistically insignificant when the core CPI inflation is considered instead is an indication that the observed inflation in the post-1986 period cannot be interpreted by means of traditional cost-push mechanisms. Table 8 displays the connectivity results based on each of the seven resampling methods, where statistically significant probabilities are given in bold (p-value < 0.05). Aside from the link Y3 → Y1, a few additional links also appear. Since we do not obtain any consistency, these sparse links may be due to estimation biases, the method of assessing statistical significance, or the existence of high noise in the data.

Table 7. As Table 2 but for system 3 for the true couplings, an indirect coupling (X2 → X4) and an uncoupled case (X5 → X4).

Table 9 presents the results for the 2nd 3-variate system, where the CPI inflation has been replaced by the core CPI inflation. As can be seen, the influence of crude oil on core inflation is not statistically significant and new relationships emerge. We detect a persistent causal feedback between core inflation and M2 growth, in line with the conclusion of [60] that the demand-fuelled oil price rises in the 2000s have been accommodated by economic policy. The relationship between crude oil and the consumer price index has evolved dynamically over the past 50 years. The strength of the linkage seems to vary conditionally on several factors, including the nature of oil shocks, the response of monetary policy and the rigidities in the labor market. In the 1970s, the oil price shocks of 1973 and 1979 were associated with significant reductions in OPEC supply.
In the early-to-mid 1980s, a phase of stability for the US economy began, known as the Great Moderation, characterized by low volatility in inflation and output. Oil prices, however, became more volatile again from the second half of the 1990s until mid-2008. While the oil shock episodes of 1973 and 1979 coincided with an increase in US inflation and the beginning of rising unemployment, the variation of these two variables became smaller in size during the episodes of 1999-2000 and 2002-2007. Despite the stable core CPI in the post-1984 period, [61] show that the relative contribution of oil shocks to CPI inflation has increased, since oil price changes have passed through the energy component of the CPI. This lack of significant second-round effects on core inflation via cost-push mechanisms puts forward the difference between the effects of oil prices in the 1970s and the 2000s. Oil prices are not only affected by disturbances in supply; oil shocks can be the consequence of technological changes or financial innovation able to affect consumers' demand for oil. According to [62], the oil price increase in the years up to mid-2008 was driven by global demand shocks and as such was not associated with recessionary dynamics of the US economy. Going further, [63] defines oil price fluctuations as symptoms of the underlying oil demand and oil supply shocks and concludes that disentangling these two sources can prevent unnecessary monetary policy interventions.

Conclusion
This study stems from the necessity to derive an effective causality test for the investigation of the connectivity structure of a multivariate complex system. Specifically, we investigate how the performance of a (direct) causality test is affected by the scheme generating the resampled data [29,39,47]. Our contribution is two-fold, with respect to the methodology and the application. Regarding the methodology, we introduce new resampling methods for the non-causality test. Regarding the application, we obtain coherent results based on the partial transfer entropy (PTE) and all the aforementioned resampling methods, highlighting the complex nature of oil shocks through their impact on inflation. The importance of assessing the statistical significance of the PTE has been explored via a simulation study. In the absence of direct coupling X → Y|Z, by definition, the mutual information of X and Y conditioned on Z should theoretically be zero, i.e. I(Y; X|Z) = 0. The formulation of more independent resampled data (settings B and C) compared to the standard technique (setting A), all consistent with the null hypothesis I(Y; X|Z) = 0, seems to account better for the bias of the test statistic and helps prevent false detection of coupling in the case of the nonlinear coupled systems. The size and the power of the test are improved with settings B and C, especially if the direct couplings are strong. However, for large n and c, settings B and C may also give spurious couplings, such as X2 → X4 for system 3. We should also underline that the performance of the PTE is affected by the number of observed variables [53]. On the other hand, when the coupled system is linear, independence setting A seems to be more efficient in identifying weak couplings.
The method 2D is also effective for the nonlinear simulation systems and less effective for the linear coupled system, detecting spurious couplings.
It turns out that the PTE estimated on resampled time series increases with the level of randomness, i.e. the surrogate PTE values increase going from setting A to C. In addition, the spread of the surrogate PTE distribution gets larger, implying that smaller PTE values on the original time series are likely to be found statistically not significant, and consequently fewer spurious couplings are detected. Figs 1-3 display the distribution of the surrogate PTE values for systems 1 and 2 with respect to each resampling scheme in order to visualize these findings. Detecting the true causality with high probability may come together with spurious couplings, while avoiding false connectivity may entail a loss in sensitivity: higher specificity comes at the cost of lower sensitivity, and vice versa. Thus, optimality is not achieved by any of the first six resampling methods, but it becomes clear that the significance test for the PTE gets more conservative as the resampling is more random. Regarding method 2D, the bootstrap PTE values are centered by construction around zero, and therefore the method focuses on the spread of the distribution of the PTE on the bootstrapped data rather than on the bias. For linear systems, the bias is larger and method 2D performs worse.
We note that the seven resampling methods have comparable computational cost as randomization procedures are involved at all cases in the same way. Further, they can be utilized for any test statistic in order to examine the null hypothesis of no causal effects. Ongoing research aims at further investigating the performance of various causality measures, gaining insight from the significant impact that the selection of alternative resampling techniques may have.
In the context of the application, using the PTE with all the examined statistical significance tests, we confirm the stability of core inflation over the post-Volcker era, including the period of the Great Moderation. The strong causal influence of crude oil on total CPI inflation and the absence of a link with core CPI inflation clearly highlight the contribution of the oil demand shocks of the 2000s, as opposed to the oil supply shocks the US economy experienced in the 1970s.
Supporting information
S1 Table. Percentage of significant PTE values for system 3 for n = 512/2048, for all resampling methods. A single number is displayed when the same percentage corresponds to both n. The true couplings are highlighted. (DOCX)
S1 Dataset. The MATLAB codes for generating the simulation time series of the manuscript are provided as a Supplementary File. The financial time series from the real application can be downloaded from the Federal Reserve Bank of Saint Louis at the following link: https://fred.stlouisfed.org/categories. (ZIP)