Early Warning Signals of Financial Crises with Multi-Scale Quantile Regressions of Log-Periodic Power Law Singularities

We augment the existing literature using the Log-Periodic Power Law Singular (LPPLS) structures in the log-price dynamics to diagnose financial bubbles by providing three main innovations. First, we introduce the quantile regression to the LPPLS detection problem. This allows us to disentangle (at least partially) the genuine LPPLS signal and the a priori unknown complicated residuals. Second, we propose to combine the many quantile regressions with a multi-scale analysis, which aggregates and consolidates the obtained ensembles of scenarios. Third, we define and implement the so-called DS LPPLS Confidence™ and Trust™ indicators that enrich considerably the diagnostic of bubbles. Using a detailed study of the “S&P 500 1987” bubble and presenting analyses of 16 historical bubbles, we show that the quantile regression of LPPLS signals contributes useful early warning signals. The comparison between the constructed signals and the price development in these 16 historical bubbles demonstrates their significant predictive ability around the real critical time when the burst/rally occurs.


Introduction
The daily actions resulting from the entangled interactions between investors, in markets with ever more numerous financial innovations, are the cause of the increasing complexity of price dynamics. This complexity is revealed through the occurrence of varied market regimes, from transient bubbles to high-volatility markets and prolonged periods of negative market performance. The present theoretical knowledge and empirical methodologies are insufficient to fully capture the emerging risks. As financial markets provide both a measure of the health of the underlying economy and an engine for funding firms and catalysing growth, it is urgent to develop new approaches to describe the large price fluctuations and to develop testable diagnostics of financial bubbles. The present article aims at extending the approach pioneered in [1][2][3][4][5][6] to develop novel testable diagnostics of financial bubbles. Real-time monitoring and timely early warning of financial bubbles are not only an important part of recent academic research expanding on the efficient market hypothesis. They are also motivated by concrete real-life applications: to possibly avoid financial crises, or at least to prepare against them so as to ensure a prompt and efficient response [7][8][9]. Various scientific platforms have been built to monitor asset prices and to study financial bubbles. Here, we build on the Financial Crisis Observatory at ETH Zurich (http://www.er.ethz.ch/financial-crisis-observatory.html), which has the goal of rigorously testing the hypothesis that financial markets exhibit a degree of inefficiency and a potential for predictability, especially during regimes when bubbles develop.
In general, normal times are characterised by an approximately constant return (or price growth rate). This is nothing but the statement that the average price trajectory is a noisy exponential, reflecting the power of compound interest. As the simplest embodiment of this noisy exponential growth, the Geometric Brownian Motion model is the starting point of more sophisticated models in financial mathematics and financial engineering. However, financial markets often deviate strongly from such a simple description in the form of bubbles, defined as periods in which asset prices strongly deviate from the corresponding fundamental value. One of the practical problems of bubble identification is that the fundamental value is not directly observable and is typically estimated only within a factor of 2 [10]. Based on the analyses of many historical bubbles, the studies [1][2][3][11] have documented that there are transient regimes during which the price growth rate (return) grows itself, which translates into a super-exponential time dynamics. Such a procyclical process involves positive feedbacks, which can be of many types, such as option hedging, portfolio insurance strategies and margin requirements, as well as imitation and herding behavior in psychology. These mechanisms tend to increase and accelerate the deviation from an equilibrium. The resulting super-exponential price trajectories are inherently unsustainable and often burst as crashes or strong corrections. In a nutshell, the existence of a transient faster-than-exponential price growth can be taken as a signature of bubbles [6,11,12]. The advantage of this definition of a bubble is that it does not rely on the estimation of a fundamental value (see e.g., [13]), which is poorly known as mentioned above.
The Log-Periodic Power Law Singularity (LPPLS) model has been proposed as a simple generic parameterisation to capture such super-exponential behavior [1][2][3][4]. It is inspired by physics (and is sometimes referred to as part of econophysics [14]). This model takes into account that positive feedbacks generically lead to finite-time singularities [9,15,16]. Moreover, it includes accelerating log-periodic oscillations decorating the power law, which are the observable embodiment of the symmetry of discrete scale invariance [17]. This generic log-periodicity accounts for the existence of a discrete hierarchy of group sizes [18] and may also result from the interplay between nonlinear value investors and nonlinear trend followers, and from the inertia between information flow and price discovery [15]. In summary, the LPPLS model provides a convenient representation of financial bubbles.
As mentioned above, the LPPLS model is the simplest analytical formulation of time series that possess a discrete regular hierarchy of time scales [17]. It is a particularly useful tool among the large set of concepts and methods dealing with multi-scale analysis of mono- and multi-variate time series, which include temporal multifractal analysis [19][20][21][22], directed weighted network representations of time series using the delayed coordinate embedding method combined with a distance that provides an adjacency matrix [23][24][25][26], and a variety of techniques at the intersection of nonlinear dynamical system theory, statistical time series analysis, fractals, cellular automata, machine learning methods, wavelet transform methods, fuzzy logic and more [27,28].
We thus follow up on these previous efforts to diagnose financial bubbles and their terminations by proposing several innovations. First, rather than using the standard least squares or maximum likelihood calibration method, we apply the quantile regression method to the LPPLS calibration problem. In other words, rather than fitting a given log-price time series by a single LPPLS model, quantile regressions provide a family of calibrated curves indexed by the probability level q. Scanning q between 0 and 1 allows us to disentangle (at least partially) the genuine LPPLS signal from the a priori unknown complicated residuals. Moreover, this technology alleviates some of the statistical problems that have plagued the literature: errors in variables, sensitivity to outliers and non-normal error distributions [29]. It provides a descriptive approach reporting more than just the conditional mean, and may discover more complete structures without imposing global distributional assumptions on the residuals. In contrast, the standard least squares or maximum likelihood estimation procedures are vulnerable to the existence of outliers [30]. In sum, the prediction inference associated with quantile-based estimates has an inherent distribution-free character, since such estimates are influenced only by the local behavior of the underlying distribution near the specified quantile [31]. The different q-dependent LPPLS fits also provide a bundle of possible scenarios that are compatible with different weights of the residuals supposed to decorate the theoretical driver.
While the implementation of ensemble forecasting from quantile estimates is still in its infancy, we apply the ensemble forecasting obtained from the quantile regressions at various q values to construct early warning signals. This is proposed to improve on the common practice of relying on a single calibration to make forecasts. It provides a representative sample of the possible future states, improving generalization and robustness compared with single estimators [32]. On average, the combined estimator is usually better than any of the single base estimators because its variance is reduced. The median of the individual estimates is more accurate than at least half of the individual forecasts [33].
Then, we propose to combine the many quantile regressions with a multi-scale analysis. This leads to the development of ensemble forecasting that combines a grid of quantile-based estimators into a final aggregated predictor. We further introduce the Quantile-Violin plots and the dt-Violin plots as powerful representations of the enormous amount of information generated by scanning the quantile levels and the time scales.
Finally, we define and implement the so-called DS LPPLS Confidence™ and Trust™ indicators, which provide an aggregation and consolidation of the wealth of generated information, and we put them to work to diagnose 16 historical bubble cases. Positive bubbles and negative bubbles can be respectively identified from the performance of these systemic indicators.
We proceed as follows. Section 2 presents the LPPLS model and gives an overview of some theoretical aspects of the standard ordinary least squares regression (referred to as the L2 norm calibration) and of quantile regressions. Section 3 presents the metrics, the methodology and a battery of tests performed on the S&P 500 bubble that burst in October 1987. In particular, we introduce the Quantile-Violin plots and the dt-Violin plots as efficient presentations of the multi-dimensional metrics. Section 4 extends section 3 to three other historical financial bubbles, providing interesting elements of comparison. Section 5 introduces the DS LPPLS Confidence™ and Trust™ indicators. Section 6 applies all the above tools and metrics to 16 historical financial bubbles and compares the indicators with the price time series. Section 7 summarises our main conclusions.

Log-Periodic Power Law Singularity (LPPLS) model
The Johansen-Ledoit-Sornette (JLS) model [2,3] assumes that the asset price p(t) follows a standard diffusive dynamics with varying drift μ(t) in the presence of discrete discontinuous jumps:

dp/p = μ(t) dt + σ(t) dW − κ dj,    (1)

where σ(t) is the volatility and dW is the increment of a Wiener process (with zero mean and variance equal to dt). The term dj represents a discontinuous jump such that j = 0 before the crash and j = 1 after the crash occurs. The loss amplitude associated with the occurrence of a crash is determined by the parameter κ. Each successive crash corresponds to a jump of j by one unit. The dynamics of the jumps is governed by a crash hazard rate h(t). Since h(t)dt is the probability that the crash occurs between t and t + dt conditional on the fact that it has not yet happened, we have the expectation E[dj] = h(t) dt. The no-arbitrage condition, which requires the price process to be a martingale, then leads to μ(t) = κ h(t). Under the assumption of the JLS model, the crash hazard rate aggregated over the noise traders with herding behaviors has the following dynamics:

h(t) = α (t_c − t)^(m−1) [1 + β cos(ω ln(t_c − t) − ϕ′)],    (2)

where α > 0, m < 1 and |β| ≤ 1. Using μ(t) = κh(t), we obtain the dynamics of the expectation of the logarithm of the price in the form of the Log-Periodic Power Law Singularity (LPPLS) model:

ln E[p(t)] = A + B (t_c − t)^m + C (t_c − t)^m cos(ω ln(t_c − t) − ϕ),    (3)

where t_c denotes the most probable time for the burst of the bubble, in the form of a crash for example. The constant A = ln[p(t_c)] gives the terminal log-price at the critical time t_c.
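For concreteness, the LPPLS expectation Eq (3) is straightforward to evaluate numerically. The Python sketch below uses hypothetical, hand-picked parameter values for illustration, not calibrated estimates from the text:

```python
import numpy as np

def lppls_log_price(t, tc, m, w, A, B, C, phi):
    """Expected log-price of the LPPLS model, Eq (3), for t < tc:
    ln E[p(t)] = A + B*(tc-t)**m + C*(tc-t)**m * cos(w*ln(tc-t) - phi)."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(w * np.log(dt) - phi)

# Hypothetical parameters: with B < 0 and 0 < m < 1 the log-price
# accelerates super-exponentially toward the critical time tc = 1.0.
t = np.linspace(0.0, 0.95, 200)
y = lppls_log_price(t, tc=1.0, m=0.5, w=9.0, A=7.0, B=-1.0, C=0.05, phi=1.0)
```

The oscillations compress in time as t approaches tc, which is the visual signature of the discrete hierarchy of accelerating oscillations.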
The parameters B and C respectively control the amplitude of the power law acceleration and of the log-periodic oscillations. The exponent m quantifies the degree of super-exponential growth. The log-periodic angular frequency ω is related to the scaling ratio λ = exp(2π/ω) of the temporal hierarchy of accelerating oscillations converging to t_c. Finally, ϕ ∈ (0, 2π) is a phase embodying a characteristic time scale of the oscillations. Eq (3) is the first-order log-periodic correction to a pure power law for an observable exhibiting a singularity at t_c [4,34].
Given the starting and ending dates t_start and t_end of the fitting window, we define dt ≜ t_end − t_start as the duration of the fitting window. The critical time t_c is searched in the interval [t_end − ηdt, t_end + ηdt], with η typically equal to 0.20. Previous calibrations of the LPPLS specification Eq (3) to the log-price development during a number of historical financial bubbles have suggested to qualify fits based on the parameters of the LPPLS model belonging to the intervals documented in [5,35,36].

The optimization problem using the standard Ordinary Least Squares (OLS) method

Filimonov and Sornette [36] suggested to expand the cosine term of Eq (3) with C1 = C cos ϕ and C2 = −C sin ϕ to obtain a representation with 4 linear and 3 nonlinear parameters, providing a substantial gain in efficiency and stability of the calibration. This leads to rewriting Eq (3) as

ln E[p(t)] = A + B |t_c − t|^m + C1 |t_c − t|^m cos(ω ln|t_c − t|) + C2 |t_c − t|^m sin(ω ln|t_c − t|).    (4)

The optimization problem with the standard Ordinary Least Squares (OLS) method aims to minimize the sum F(t_c, m, ω, A, B, C1, C2) of squared residuals between the log-prices ln p(t_i), i = 1, 2, ..., N and Eq (4):

F(t_c, m, ω, A, B, C1, C2) = Σ_{i=1}^{N} [ln p(t_i) − ln p̂(t_i)]²,    (5)

where ln p̂(t_i) denotes the right-hand side of Eq (4) evaluated at t_i.

The optimization problem using the Quantile Regression calibration method

Intuitively, the OLS calibration method finds the best fit "in mean". In other words, the parameters are adjusted so that the function to calibrate is the closest to the mean of the noisy realisation of the log-price, where the mean should be considered conceptually to occur over many realisations of the noise decorating the supposed theoretical function Eq (4). If the noise is not normally distributed and exhibits heavier tails, the OLS calibration may be contaminated by large deviations of the noise from the mean. Then, fitting the data to the function that is the closest to the median of the noisy realisation of the log-price may be more adequate and lead to more stable estimations.
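Since Eq (4) is linear in {A, B, C1, C2} once (t_c, m, ω) are fixed, these four parameters follow from a plain linear least-squares solve. A minimal sketch, with our own function name and using np.linalg.lstsq rather than any particular decomposition:

```python
import numpy as np

def linear_params(t, log_p, tc, m, w):
    """For fixed nonlinear parameters (tc, m, w), the linear parameters
    {A, B, C1, C2} of Eq (4) minimise the squared-residual sum Eq (5).
    This is an ordinary linear least-squares problem in the design
    matrix X built from the LPPLS basis functions."""
    f = np.abs(tc - t) ** m
    lg = np.log(np.abs(tc - t))
    X = np.column_stack([np.ones_like(t), f, f * np.cos(w * lg), f * np.sin(w * lg)])
    coef, *_ = np.linalg.lstsq(X, log_p, rcond=None)
    return coef  # estimates of A, B, C1, C2
```

On noise-free synthetic data generated from Eq (4), this step recovers the linear parameters essentially exactly, which is what makes the slaving of the linear to the nonlinear parameters attractive.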
It is well known that this amounts to replacing the L2 norm (sum of the squares of the differences) in Eq (5) by the L1 norm (sum of the absolute values of the differences). Quantile regressions generalise the minimisation of the L1 norm and provide not just a single best fit to the median but a bundle of best fits to the different quantile realisations of the noise around the theoretical LPPLS function Eq (4). First, let us recall that the q-th quantile of a random variable Y with distribution function F_Y(y) = P(Y ≤ y) is defined as

Q_Y(q) = inf{y : F_Y(y) ≥ q}.    (6)

Let us define the q-dependent loss function with respect to the residual e_t:

ρ_q(e_t) = e_t (q − 1_{e_t<0}),    (7)

where 1_{e_t<0} is the indicator function equal to 1 when e_t < 0 and 0 otherwise. For q = 1/2, ρ_{1/2}(e_t) = |e_t|/2, so minimizing ρ_{1/2}(e_t) is nothing but minimising the L1 norm. Quantile regression corresponds to finding the quantile-dependent parameters {t̂_c(q), m̂(q), ω̂(q), Â(q), B̂(q), Ĉ1(q), Ĉ2(q)} that minimise the function

F_q(t_c, m, ω, A, B, C1, C2) = Σ_{i=1}^{N} ρ_q(ln p(t_i) − ln p̂(t_i)).    (8)

In other words, for each quantile level q, we obtain a set of q-dependent calibrated parameters {t̂_c(q), m̂(q), ω̂(q), Â(q), B̂(q), Ĉ1(q), Ĉ2(q)}. To significantly decrease the complexity of the search and provide an intuitive representation of the results of the calibration, a two-stage fitting procedure is developed according to the special structure of the LPPLS model [36]. That is, according to Eq (4), the complexity of the optimization problem is reduced by slaving the 4 linear parameters to the 3 nonlinear parameters. In essence, for minimizing the objective function of the OLS or quantile regressions, the linear parameters {A, B, C1, C2} or {A(q), B(q), C1(q), C2(q)} are determined using the LU decomposition algorithm through a linear regression model, while the nonlinear parameters {t_c, m, ω} or {t_c(q), m(q), ω(q)} are searched globally through the Taboo search followed by the Quasi-Newton method with line search.
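As a concrete sanity check of the loss function Eq (7), minimising the summed q-dependent loss over a constant recovers the empirical q-quantile of a sample (the standard "pinball loss" property; the data below are synthetic):

```python
import numpy as np

def rho(e, q):
    """Quantile ('pinball') loss of Eq (7): rho_q(e) = e * (q - 1[e < 0])."""
    return e * (q - (e < 0))

# Minimising the summed loss over a constant c recovers the q-quantile:
rng = np.random.default_rng(42)
sample = rng.standard_normal(10_000)
grid = np.linspace(-3.0, 3.0, 601)
c_star = grid[np.argmin([rho(sample - c, 0.9).sum() for c in grid])]
# c_star approximates the empirical 90% quantile of the sample
```

In the LPPLS quantile regression, the constant c is replaced by the parametric curve Eq (4), so that the calibrated curve tracks the q-quantile of the noise decorating the signal.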
From the definition in Eqs (7) and (8), one can see that the quantile regression is an asymmetrically weighted L1-based regression, where the asymmetry is governed by the value q. The special case q = 1/2 is symmetric and recovers the aforementioned L1 norm calibration. For q ≠ 1/2, by construction of Eq (7), the best fit corresponds statistically to q · 100% of the data points {ln p(t_i), i = 1, 2, ..., N} being below the theoretical curve ln p̂_q(t) and (1 − q) · 100% of the data points being above it. Thus, for q > 1/2 (resp. q < 1/2), most of the data points are below (resp. above) the calibrated curve ln p̂_q(t), putting it above (resp. below) the median fit.

Methodology, metrics and tests on "S&P 500 1987" bubble
To illustrate the performance of the OLS and quantile regression methods, we test them on the time series of the S&P 500 Composite Index over the time period corresponding to the bubble that burst with the crash in October 1987, hereafter referred to as the "S&P 500 1987" bubble.

LPPLS quantile regression curves for different quantile probability level q
Fig 1 represents a bundle of nine coloured quantile-based calibrated curves obeying expression Eq (4), obtained using the quantile regression method Eq (8) with the loss function Eq (7); the true critical time of the crash is T_c = 1987.10.19. The in-sample standard L2-based fitted curve is also shown as the red thick curve, which is extended by the red dashed thick out-of-sample curve. From the three panels, one can see that the quantile curves cover approximately 80% of the variability of the empirical price time series, as they should according to the choice of q spanning from 0.10 to 0.90. The smaller (resp. larger) values of q tend to fit the lowest (resp. highest) part of the time series, providing together fuzzy envelopes of the time series that seem quite reasonable visually. Note that the estimated critical times t̂_c correspond to the times at which the calibrated curves peak. For Fig 1A and 1B, corresponding to t_end not too close to the crash, one can observe that, apart from the lowest quantiles that exhibit more variability, the higher values of the quantiles provide consistent fits with estimated values of the critical time t̂_c close to the true value T_c. In contrast, the standard L2-based fit tends to overshoot, similarly to the lowest quantiles. The situation reverses for Fig 1C with t_end very close to the crash, for which most of the quantiles (and the L2-based fit) overshoot significantly by about five months, while the lowest curve for q = 0.10 undershoots by approximately two months.
The divergence between the fitted functions obtained for low q's and large q's illustrates the first advantage of quantile regressions for LPPLS signals, namely to provide a range of possible scenarios that can bracket the true value of T_c, given that scanning q provides a family of calibrated functions that are sensitive to different parts of the statistical fluctuations supposed to decorate the theoretical generating process in Eq (4). More generally, one never knows precisely how the noise entangles with the LPPLS signals. Practical scenarios are more challenging in that the data often have unequal variation (a "location-scale model" in statistical terminology) due to the complex interactions between the various factors. This implicitly recognizes that there might not be a single super-exponential rate of change that characterizes changes in the probability distribution of the log-price. In such cases, as well as in the presence of model errors (the true generating process is not known and the LPPLS model is only an approximation), quantile regressions provide a useful reading of the influence of the different noise quantile levels on the calibration results. The quantile regression also allows one to explore the heterogeneity of the residuals as a function of time and to deal with the asymmetric shape of the conditional distribution, which might be missed by OLS regression.
Multi-scale analysis of t̂_c as a function of q and dt

• Region II (False Negatives) corresponds to a failure of the prediction, which purports that the bubble has ended (t̂_c < t_end) while this is not true (T_c > t_end).
• Regions III and IV (True Negatives) represent the case where the bubble has already ended and the calibration correctly diagnoses it.
• Region V (False Positives) is another failure of the prediction, opposite to region II. The prediction is that the bubble continues and its critical time t̂_c lies in the future (t̂_c > t_end), while it has truly ended (T_c < t_end).

In Fig 3A, one can observe that the predicted medians and averages starting from t_end = May 1987 become stable and close to the true critical date T_c = 1987.10.19 (represented by the red dashed horizontal and vertical lines). In contrast, the L2 estimate is more unstable. At the scale dt = 750 days in Fig 3B, a remnant of the stability observed at the scale dt = 500 days is visible, but the prediction is much noisier.
The first important message of Fig 3 is that, when t_end is too far from T_c, the estimated t̂_c is not stable and systematically underestimates the time of the bubble burst. Moreover, t̂_c is found to move upward proportionally to t_end as the latter increases. This observation holds for all three estimators (i.e., average, median and OLS fit). The second message is that the difference between the averages and the medians shows that the distribution of these estimates is non-normal and skewed.
Quantile-Violin representation q(t_c) − pdf(t_c(q)) of the ensemble of quantile regression functions. The results of Fig 3 are far from constituting the whole story, since the quantile regressions can give much more than just an average or median tendency. In order to capture the wealth of information of the 99 functions obtained for each t_end, we introduce a generalisation of the violin plot [37] and call it the "Quantile-Violin plot" (represented by q(t_c) − pdf(t_c(q))), in which the standard box plot is complemented by a rotated kernel density plot on its right side, and the corresponding q values are given on the left side.
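The numerical ingredients of such a plot can be sketched as follows; the function name, the Gaussian kernel and the Silverman-style default bandwidth are our own illustrative choices, and the drawing itself (e.g. with matplotlib) is omitted:

```python
import numpy as np

def quantile_violin_data(tc_by_q, n_grid=200, bandwidth=None):
    """Ingredients of a Quantile-Violin plot for one t_end: a kernel
    density of the quantile-dependent estimates t_c(q), to be drawn as
    the rotated violin body, plus the sorted (q, t_c) pairs for the
    q labels on the left side."""
    qs = np.array(sorted(tc_by_q))
    tcs = np.array([tc_by_q[q] for q in qs])
    if bandwidth is None:
        # Silverman's rule of thumb for a Gaussian kernel
        bandwidth = 1.06 * tcs.std() * len(tcs) ** (-1 / 5)
    grid = np.linspace(tcs.min(), tcs.max(), n_grid)
    z = (grid[:, None] - tcs[None, :]) / bandwidth
    density = np.exp(-0.5 * z**2).sum(axis=1) / (len(tcs) * bandwidth * np.sqrt(2 * np.pi))
    return qs, tcs, grid, density
```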
Specifically, Fig 4 plots the results for the S&P 500 1987 bubble, where the three panels correspond to dt = 500, 750 and 1000 trading days, respectively. Each panel contains seven Quantile-Violin plots associated with the seven values t_end = 1987.03.19, 1987.04.30, 1987.06.25, 1987.08.06, 1987.10.15, 1988… Recall the properties of the median forecast [33]: (i) if the true T_c falls within the range encompassed by all forecasts, no more than half of the individual forecasts will be superior to the median forecast; (ii) at worst, if the true T_c lies outside the forecast range, the median forecast will still be better than 50% of the forecasts. Filtering on the two key nonlinear LPPLS parameters m and ω has two major effects: (i) the distributions of t̂_c tend to be more stable as a function of t_end and bracket the true T_c in all cases, except for the earliest t_end = 1987.03.19 at the shortest time scale dt = 500 days; (ii) the spreads of t̂_c values over the different scenarios are narrower, indicating that the LPPLS quantile regressions provide more precise predictions of the true T_c.

dt-Violin representation dt(t c ) − pdf(t c (dt)) of the ensemble of quantile regression functions.
Previous works have shown the importance of a multi-scale analysis (see e.g., [38]). In our case, for a fixed t_end, this amounts to scanning t_start and redoing the analysis for each window. Specifically, we shift t_start = t_end − dt in steps of 5 trading days, obtaining 126 windows of sizes dt = 750, 745, ..., 125 trading days. For each window [t_start, t_end], we perform the OLS estimation and the quantile regression of the model Eq (4) on the same time series already used in Figs 1 and 3-5, obtaining a set {t̂_c(q, dt) | q = 0.01, 0.02, ..., 0.99}. This procedure is summarised in Fig 6. Analogously to Figs 4 and 5, Fig 7 presents a synopsis of the results concerning the estimation of t̂_c, but now over the population of the 126 windows for the fixed t_end and the various q's. We further generalise the violin plot [37] in the form of "dt-Violin plots": the standard box plot is now complemented by a rotated kernel density plot of t̂_c over the set of 126 windows on its right side for a fixed q, and the corresponding dt values are added on the left side.
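The window grid described above can be generated in a few lines (the trading-day index chosen for t_end below is hypothetical):

```python
# Multi-scale window grid: for a fixed t_end, t_start is shifted in
# steps of 5 trading days, yielding 126 windows of length
# dt = 750, 745, ..., 125 trading days.
t_end = 2000                          # hypothetical anchor index
dts = range(750, 124, -5)             # the 126 window lengths
windows = [(t_end - dt, t_end) for dt in dts]
```

Each (t_start, t_end) pair would then be passed to the OLS and quantile-regression calibrations of Eq (4) to populate the set {t̂_c(q, dt)}.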

Applications to the prediction of the end of four historical bubbles
The previous section has studied the S&P 500 1987 bubble in great detail. But this is just one case. We now extend our analysis to three additional historical bubbles listed in Table 1, to explore the ensemble behavior of the prediction of their critical end times over the set {t̂_c(q, dt) | dt = 750, 745, ..., 125 trading days} and over the 99 quantiles. We refer to these three additional historical bubbles by the names of the involved markets and the years when they burst. The first one is S&P 500 2007, which was studied in [6,8]. The second and third ones are SSEC 2007 and SSEC 2009, discussed in detail in [35]. For each bubble, we picked one value of t_end, spanning from one to three months before the crash that terminated the bubble at T_c, as given in Table 1. Table 2 gives a list of symbols and their descriptions. The predictions obtained for quantiles between 0.50 and 0.80 slightly overestimate the true T_c but are earlier than the L2-calibration-based prediction (blue line). Lower (resp. larger) q's predictions underestimate (resp. overestimate) the true T_c. In the case of the SZSC 2009 bubble in Fig 8D, all quantiles again give consistent predictions for t̂_c, which are however too early by about one month. Its L2-calibration-based prediction is closer to the true T_c, while slightly overestimating it.
Summarising the results of these four cases, the quantile regressions are better than the L2 calibration in two cases, approximately the same in one case and worse in the last case. For these four bubbles, notwithstanding the multi-ray structure of t̂_c as a function of dt, Figs 7 and 9 show again the unstable behaviour of the L2 calibrations compared with the LPPLS quantile regressions as a function of the window size. In more detail, the medians and averages of the 99 q-dependent estimates are shown as functions of {dt = 750, 745, ..., 125 trading days} for the fixed t_end given in Table 1, over the population of q values spanning {q = 0.01, 0.02, ..., 0.99}. Overall, one can observe a quite erratic behavior of t̂_c for the L2 calibration in Fig 9, compared to a much more stable behavior for the quantile regressions. The latter exhibit approximate plateaus of stability of the predicted t̂_c as a function of dt, which gives confidence in the reliability of the detected LPPLS signal as a function of time scale. This is particularly evident for the S&P 500 2007 bubble in Fig 8B, for which the stable plateau extends almost over the whole range of dt. In contrast, the standard OLS estimation of t̂_c is sensitive to the chosen size dt of the window, leading to inconclusive diagnostics. Thus, the quantile regressions introduce stability in the forecasts when they are exploited as an ensemble of scenarios.

Consolidated DS LPPLS™ indicators
The previous sections have presented a wealth of measures, summarised through the Quantile-Violin plots in Fig 4 and the dt-Violin plots in Fig 7, which represent the ensemble of predictions for a given present time t_end over the set of quantile levels q used in the LPPLS quantile regression, and over the set of time scales (i.e., window sizes) dt used in the calibrations. While informative, effectively using so many fluctuating and often conflicting signals to inform on the danger of a bubble burst and to trigger an actionable decision remains a challenge. To address this, we propose two indicators that aggregate these signals, inspired by previous works on historical bubbles [6,35,38] via the implementation of pattern recognition of LPPLS structures and filtering, as suggested in Fig 5. These two indicators have been briefly discussed to present the ex-ante forecast of the Chinese bubble and its burst that started in June 2015 [39].
1. The DS LPPLS Confidence™ indicator is the fraction of fitting windows whose calibrations meet filtering condition 1 in Table 3 (within the JLS framework, the condition that the crash hazard rate h(t) is non-negative by definition [40] translates into a value of the Damping parameter larger than or equal to 1). It thus measures the sensitivity of the observed bubble pattern to the 126 time windows of duration from 125 to 750 trading days. A large value indicates that the LPPLS pattern is found at most scales and is thus more reliable. If the value is close to one, the pattern is practically insensitive to the choice of dt. A small value of the indicator signals a possible fragility, since the pattern is present in only a few fitting windows.
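This aggregation can be sketched in a few lines; `window_fits` and `passes_filter` are placeholder names for the per-window calibration records and the predicate implementing filtering condition 1:

```python
def lppls_confidence(window_fits, passes_filter):
    """DS LPPLS Confidence as described above: the fraction of the
    fitting windows (126 in the text) whose calibrated parameters
    satisfy the filtering condition."""
    hits = sum(1 for fit in window_fits if passes_filter(fit))
    return hits / len(window_fits)
```

For example, with a hypothetical filter requiring 0 < m < 1, four windows of which three pass would give a Confidence of 0.75.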
2. The DS LPPLS Trust™ indicator quantifies the sensitivity of the calibrations to the specific realised instance of the noise in the financial time series. Because the calibration is an attempt to disentangle the LPPLS signal from an unknown realisation of the residuals, we generate 100 bootstrap samples of the residuals and add them to the calibrated LPPLS price, producing 100 synthetic time series that proxy for independent realisations of equivalent price patterns. The DS LPPLS Trust™ indicator is defined as the median, over the 126 time windows, of the fraction of the 100 synthetic time series that satisfy filtering condition 2 in Table 3. It thus measures how closely the theoretical LPPLS model matches the empirical price time series, 0 being a bad and 1 a perfect match.
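The per-window bootstrap step can be sketched as follows; `calibrate` and `passes_filter` are placeholders for the LPPLS fitting routine and filtering condition 2, which are not reproduced here:

```python
import numpy as np

def trust_fraction(fitted_curve, residuals, calibrate, passes_filter,
                   n_boot=100, seed=0):
    """Per-window step of the DS LPPLS Trust indicator: resample the
    calibration residuals with replacement, superpose them on the
    calibrated LPPLS curve to create synthetic series, refit each one
    and return the fraction passing the filter."""
    rng = np.random.default_rng(seed)
    n_pass = 0
    for _ in range(n_boot):
        synthetic = fitted_curve + rng.choice(residuals, size=len(fitted_curve))
        n_pass += bool(passes_filter(calibrate(synthetic)))
    return n_pass / n_boot

# The Trust indicator is then the median of this fraction over the 126 windows.
```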
3. Arithmetic average and geometric average of the DS LPPLS Confidence™ indicator and DS LPPLS Trust™ indicator: combining these two indicators is instructive to join the two types of information on the time scale over which the LPPLS signal appears and on the quality of the fits.

Empirical analysis of 16 historical bubbles with the consolidated DS LPPLS™ indicators
In order to provide a more extensive test of the LPPLS quantile regression approach, we construct the DS LPPLS Confidence and Trust indicators described in the previous section for the 16 historical bubbles listed in Table 4. These indicators can then be compared with the price time series to judge how well they can be associated with bubbles and their terminations. These bubbles are taken from previous studies [1,5,39,41,42] as well as from cases reported at the website of the Financial Crisis Observatory at ETH Zurich (www.er.ethz.ch/financial-crisis-observatory.html). The data was obtained from Thomson Reuters Datastream. Figs 10-25 present the price time series of the 16 historical bubbles together with the DS LPPLS Confidence and Trust indicators constructed using (i) the L2 fitting method (green curves) and (ii) the quantile regressions (red curves). Since the Confidence and Trust indicators can be constructed for each quantile level q, we choose to present their arithmetic average over the 9 deciles {q = 0.10, 0.20, ..., 0.90}. (Results for q = 0.10, as well as the arithmetic and geometric averages, are shown in S1-S16 Figs.)
For the S&P 500 1987 bubble in Fig 10 and the S&P 500 2007 bubble in Fig 11, one can observe that the quantile regressions add to the L2 fitting method by providing in general earlier warning signals, in particular using the lower quantile q = 0.10 in S1 Fig. For the DJIA 1929 bubble in Fig 12, the quantile regressions provide a neat warning right on target, i.e. just before the crash. Such a warning is absent in the L2 fitting method.
For the Nasdaq Composite Index 2000 bubble shown in Fig 13, the performances of the quantile regression and the L 2 fitting method are similar (a detailed account of the dot-com bubble that crashed in 2000 can be found in Ref. [41]).
For the Chile 1991 and 1994 bubbles shown in Fig 14, one can observe negative values of the indicators that diagnose "negative" bubbles [6,13], whose end corresponds to a "negative crash" (i.e., a rally or rebound). One can observe that the quantile regressions provide two additional important warnings (the end of the bullish regime in 1994 and the rebound in 1998) that are missing in the standard OLS (L2) method. Fig 15 presents the identification of a strong negative bubble and its rebound for the Venezuela 1997 bubble, by both the L2 fitting method and the quantile regression method, which perform similarly. However, the latter provides early warnings of the end of the large preceding peak, which are absent in the L2 fitting method.
For the Indonesia 1994/1997 bubble shown in Fig 16, the positive bubbles followed by crashes in 1994 and 1997 are correctly identified by both methods. But again, the quantile regressions provide two negative bubble signals that correctly pinpoint rebounds, which are missed by the L2 fitting method.
The Malaysia 1994 bubble shown in Fig 17 exhibits a remarkably clean LPPLS pattern, so that all indicators target precisely the peak and subsequent burst. We observe the same joint performance for the Thailand 1994 bubble shown in Fig 18. However, the quantile regressions provide warnings of a large secondary peak after the burst of the first large bubble, which is missed by the L2 fitting method.
For the Hong Kong market shown in Figs 19 and 20 (see Ref. [7] for a discussion of the set of bubbles and crashes that have punctuated this market again and again), we observe that the L2 fitting method and the quantile regressions provide similar indicators. The same conclusion applies to the price time series of sugar shown in Fig 21, to the Brent Oil bubbles (see Ref. [43] for the analysis of the 2008 bubble) shown in Fig 22 and to the SSEC Chinese index shown in Fig 23 (see Ref. [35] for an early account).
For the SZEC Chinese market shown in Fig 24, the quantile regressions outperform the L2 fitting method by identifying precisely the large rebound that occurred in the third quarter of 2008, while the L2 fitting method completely misses it. Concerning the SSEC 2015 bubble shown in Fig 25, the main difference between the indicators provided by the quantile regressions and those provided by the L2 fitting method is that the former give earlier warnings of the peak of the bubble that occurred in June 2015, as well as signatures of a previous large peak and correction in early 2015. We refer to Ref. [39] for a description of the real-time analysis of the development of the indicators that were used to predict the burst.
Overall, the DS LPPLS Confidence and Trust indicators are found to have strong diagnostic power to identify the market regimes during which prices tend to accelerate upward (resp. downward) and which are followed by strong corrections (resp. rallies). This conclusion holds both for the L2 fitting method and the quantile regressions. In addition, one can observe a greater sensitivity of the quantile regressions for the detection of negative bubbles and the subsequent rebounds.

Concluding remarks
This study has shown that positive (resp. negative) bubbles followed by large crashes/corrections (resp. rallies) can be identified by diagnosing the existence of log-periodic power law singular (LPPLS) structures in the log-price dynamics. Given the stochastic nature of log-prices, significant variability in the estimations and in the predictions is unavoidable. The analysis of their stability and sensitivity with respect to t_end, q and the time scale dt is very helpful. We have provided evidence that financial markets exhibit a degree of inefficiency and a potential for predictability, especially during regimes when bubbles develop.
The innovations of the present article include: (1) the introduction of quantile regression applied to the LPPLS detection problem, and its comparison with the L2-based calibration method; (2) the combination of the many quantile regressions with a multi-scale analysis and the presentation of the Quantile-Violin and dt-Violin plots; (3) the implementation of the DS LPPLS Confidence and Trust indicators through resampling and filtering, which finally provides an aggregation and consolidation of the wealth of signals generated at multiple scales and many quantile levels; (4) the detailed analysis of the S&P 500 1987 bubble and the application of the methodology to a total of 16 empirical financial time series, each exhibiting at least one massive bubble.
These innovations have the ultimate goal of becoming part of an early warning system that could be run, say, by a central bank to inform appropriate countermeasures against impending critical transitions [16,44-48]. Although the next step of constructing an explicit early warning system is not investigated here [49], the introduction of our new metrics and methodology to develop real world scenarios could provide useful precursors to incorporate in such a system.
Overall, the results demonstrate that the quantile regression of LPPLS signals contributes useful early warning signals and that the systematic indicators exhibit significant predictive ability around the real critical time when the burst/rally occurs. We also found that the quantile regression method improves on the L2-based calibration method by providing richer and more stable scenarios. Quantile regression focuses in particular on estimating multiple super-exponential rates of change in the quantiles of the distributions of log-price conditional at t_end with different time scales dt. It thus presents many new possibilities for the statistical analysis and interpretation of observational data. With the implementation of the systematic indicators, the hybrid form of ensemble forecasting provides a new benchmark for early warning signals of financial crises. From a broader scientific and societal perspective, our article supports a reorientation toward ensemble forecasts based on extracting multi-dimensional information from the noisy signal at multiple scales.
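To make the quantile-regression principle concrete, the following toy sketch shows the pinball (check) loss whose minimization defines quantile estimates: fitting a constant by minimizing this loss recovers an empirical q-quantile of the data. This is an illustrative stand-in for the general mechanism, not the paper's LPPLS calibration, which replaces the constant by the full LPPLS formula:

```python
def pinball_loss(y, f, q):
    """Average check (pinball) loss of forecasts f for observations y at level q."""
    errors = (yi - fi for yi, fi in zip(y, f))
    return sum(q * e if e >= 0 else (q - 1.0) * e for e in errors) / len(y)

def constant_quantile_fit(y, q, grid):
    """The constant minimizing the pinball loss is an empirical q-quantile of y.

    A brute-force grid search suffices here because the loss is convex in
    the constant forecast.
    """
    return min(grid, key=lambda c: pinball_loss(y, [c] * len(y), q))
```

For example, with y = 1, 2, ..., 9 and a grid in steps of 0.5, the fit at q = 0.5 returns the median 5.0, while the fit at q = 0.2 returns 2.0, the empirical 20% quantile. Replacing the constant by a parametric trend turns this into a regression that tracks how different quantiles of the conditional distribution evolve.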