Abstract
Financial stock return correlations have been analyzed through the lens of random matrix theory to differentiate the underlying signal from spurious correlations. The continuous spectrum of the eigenvalue distribution derived from the stock return correlation matrix typically aligns with a rescaled Marchenko-Pastur distribution, indicating no detectable signal. In this study, we introduce a stochastic field theory model to establish a detection threshold for signals present in the limit where the eigenvalues are within the continuous spectrum, which itself closely resembles that of a random matrix where standard methods such as principal component analysis fail to infer a signal. We then apply our method to Standard & Poor’s 500 financial stocks’ return correlations, detecting the presence of a signal in the largest eigenvalues within the continuous spectrum.
Citation: Achitouv I, Lahoche V, Samary DO (2025) Signal inference in financial stock return correlations through phase-ordering kinetics in the quenched regime. PLoS One 20(10): e0334436. https://doi.org/10.1371/journal.pone.0334436
Editor: Pablo Martin Rodriguez, Federal University of Pernambuco: Universidade Federal de Pernambuco, BRAZIL
Received: October 14, 2024; Accepted: September 27, 2025; Published: October 31, 2025
Copyright: © 2025 Achitouv et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data underlying the results presented in the study are publicly available from the GitHub repository (https://github.com/Eleo22/RN-Finance).
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
Statistical field theory can be seen as a clever example of statistical inference [1], aiming to reproduce large-scale correlations in the collective behavior of a significant number of strongly interacting degrees of freedom. This perspective is motivated by information theory, a famous example being the field theory that effectively describes the behavior of the D-dimensional Ising model near the ferromagnetic transition [2–4]. In this context, the effective field represents the average of a large number of discrete spins at a scale where the precise interactions between them are completely blurred. More generally, equilibrium statistical physics can also be viewed as an instance of statistical inference, based on the maximum entropy distribution [5]. The core problem of statistical physics essentially involves extracting relevant features from a large set of particles that interact with each other. For example, the theories of ideal gases and the Navier-Stokes equations simplify the complexities of a gas or fluid composed of a vast number of particles into straightforward relationships between a small set of macroscopic parameters, such as pressure, temperature, density, or entropy. This overarching objective is analogous to data analysis, where the framework of large data sets presents a concrete example of a problem closely related to statistical physics. Standard methods for addressing this issue, such as principal component analysis (PCA), are specifically designed to extract the degrees of freedom that dominate the correlation spectrum of a data set (see, for instance, [6,7] and references therein). However, PCA requires a clear separation between the “relevant” degrees of freedom and those that can be disregarded (the “noise”). This condition fails in cases of nearly continuous spectra, where PCA cannot establish a clear boundary between the degrees of freedom (see [8,9] and Fig 1). In this figure, we present two typical empirical spectra encountered in data analysis. On the left, the signal is distinct from the bulk, allowing standard PCA to isolate it effectively. On the right, however, the continuity of the spectra renders PCA ineffective at distinguishing between degrees of freedom. Here, “continuity” means that the spacing between eigenvalues within a connected component is typically of order 1/N, where N is the size of the empirical correlation matrix (ECM), which is positive definite.
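The gap criterion above can be illustrated with a small numerical experiment (the dimensions and spike strength below are illustrative choices, not taken from the paper): a pure-noise Wishart matrix keeps all its eigenvalues inside the Marchenko-Pastur bulk, while a single strong common factor produces an isolated eigenvalue that PCA can pick up.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 800  # illustrative dimensions, not those of the paper

# Pure-noise data: ECM eigenvalues fill the Marchenko-Pastur bulk.
X = rng.standard_normal((N, T))
noise_eigs = np.linalg.eigvalsh(X @ X.T / T)

# Spiked data: one strong common factor creates an isolated eigenvalue.
factor = rng.standard_normal(T)
Y = X + 3.0 * np.outer(np.ones(N), factor)
spiked_eigs = np.linalg.eigvalsh(Y @ Y.T / T)

q = N / T
lam_plus = (1 + np.sqrt(q)) ** 2  # upper edge of the MP bulk

print(noise_eigs.max() / lam_plus)   # ~1: no eigenvalue escapes the bulk
print(spiked_eigs.max() / lam_plus)  # >>1: PCA can isolate the spike
```

In the spiked case the largest eigenvalue sits far above the bulk edge, so the cut-off Λ is unambiguous; in the pure-noise case no such separation exists.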
Let us now discuss the general questions that arise when signal detection is applied to financial stocks. Signal detection in financial markets refers to the process of identifying meaningful patterns or anomalies in market data that can inform trading decisions. These signals often originate from price movements, volume fluctuations, volatility, news sentiment, or macroeconomic indicators. The goal is to distinguish between “true” signals that indicate future price action and “noise” that may result from random fluctuations. Signal detection theory, originally developed in psychophysics, has been adapted to financial modeling to address uncertainty in noisy environments [10]. In finance, this theory is used to model the trade-off between detecting a real trading opportunity (true positive) and avoiding false signals (false positives). One of the most common signal detection methods in financial markets is technical analysis, which includes indicators such as Moving Averages, the Relative Strength Index, and Bollinger Bands [11]. These indicators help traders spot trends, momentum changes, or overbought/oversold conditions. Machine learning models, including Support Vector Machines, Random Forests, and deep learning, have increasingly been used to detect complex nonlinear patterns in high-dimensional financial data [12]. Statistical models, such as Autoregressive Integrated Moving Average, Generalized Autoregressive Conditional Heteroskedasticity, and Kalman filters, are employed to extract signals from time series data by modeling dependencies and volatility [13,14]. Natural Language Processing techniques are used to detect sentiment from financial news or social media, providing early signals about market direction [15]. These textual signals are often used in combination with quantitative models for improved accuracy. Key challenges in signal detection include the risk of overfitting, data snooping bias, and non-stationarity in financial time series.
Fig 1. For nearly continuous spectra (right), the position of the cut-off Λ is more difficult to determine.
Recently, a series of papers have approached the problem of spectral tails through statistical inference, based on an unconventional local Euclidean field theory [9,16–20]. In [9], the authors proposed an effective field theory model capable of addressing the full spectrum. This model resembles an equilibrium non-local field theory, characterized by a specific O(N) invariance. In this article, we consider a non-equilibrium version of this field theory and explore the link between the presence of a signal and the ability of the out-of-equilibrium field to reach equilibrium over a long time. For this analysis, we focus on financial data, specifically the correlation of returns in the Standard & Poor’s (S&P) 500 index. Indeed, it has been shown that the distribution of the eigenvalues is approximately captured by a random matrix spectrum, except for a few spikes corresponding to the “market” mode or collective modes (e.g., [21–23]).
The manuscript is organized as follows: In Sect 2, we define the model inspired by [24] and provide the theoretical frameworks used throughout the paper. In Sect 3, we construct the formal solution of the Langevin equation in the quenched regime for a real vector of size N, where disorder is represented by a Wigner matrix. We specifically investigate the self-consistent evolution equation for the effective potential arising from the self-averaging of the square length. In Sect 4, we apply the previous formalism to study financial markets, focusing on the S&P 500 while varying the signal-to-noise ratio by perturbing correlations with an appropriate Brownian motion. Finally, conclusions and perspectives are summarized in Sect 5. Appendices A and B provide additional material. The data and codes used in this work can be accessed at https://github.com/Eleo22/RN-Finance.
2 The model
The essence of this section is based on the work presented in [9], where we constructed a maximum entropy estimate (the least structured one) for the empirical probability distribution of the microscopic degrees of freedom underlying a given ECM spectrum. This inferred distribution resembles a discrete version of the standard Ising model and is described by the partition function:
where:
The bare kinetic operator is such that the (nonperturbative) inference condition holds:
where Cij is the (i,j) entry of the ECM. At zero order in perturbation theory the two operators coincide, but quantum corrections arise and break this property at higher orders. The formal relation between them is given by the so-called Dyson equation:
where the matrix is the self energy. We assume that C is closed enough to a positive Gaussian noise, or more precisely that eigenvectors are non-localized enough to remains close to the Marchenko-Pastur (MP) law (see Appendix A). In the perturbative regime, it is suitable to assume that
inherits from these properties. Then, denoting by
the eigenvector of
such that:
where . We expect that the distribution for components
is close enough to the Porter-Thomas distribution, see Appendix A. It is suitable to work in the eigenspace rather than in the “real space”, and we introduce the field
such that the classical action S becomes:
where the overlap tensor is:
Because of the delocalized structure of the eigenvectors, the relevant values of the overlap tensor are those where the indices are equal in pairs, as can easily be checked numerically, for instance with a Gaussian random matrix [9]. Furthermore, the case where all the indices are equal provides sub-leading quantum corrections and can be removed without ambiguity (such configurations cannot be perturbatively generated from the pairwise combinations in the large N limit; see [9] again). Hence, we keep only configurations where the indices are equal in pairs:

where the squared length of the field appears. This observation generalizes to higher interactions, which are expressed in terms of O(N) invariants. Note that in this approximation, the self-energy is the solution of a closed equation in the large N limit, which can be solved exactly. The solution is diagonal (i.e. independent of the “momentum” in the diagonal basis). Then, in this limit, the eigenvectors of the bare kinetic operator are also eigenvectors of C−1, and the quantum corrections are all contained in the effective mass (see [9] for more details); this is indeed a well known property of O(N) models [25]. Denoting the eigenvalues of C, and by x+ and x− respectively the largest and smallest eigenvalues of the nearly continuous component of its spectrum, the effective mass is defined as:
In this work, in line with the analysis presented in [20], we intend to adopt a different perspective, questioning the relationship between the presence of a signal and the stability of the maximum entropy distribution, rather than conjecturing it. To this end, and following [20], we will consider an out-of-equilibrium process described by a Langevin-like equation that models the motion of a classical particle in an N-dimensional random energy landscape. Formally, in the eigenspace, this equation reads:
where the field component is the projection along the corresponding eigenvector, and the noise term is a Gaussian white noise with zero mean and 2-point correlation function:
Equations of this kind are considered, for instance, for p-spin models [26,27], with the difference that the relevant spectrum here is given by the inverse eigenvalues of C. Hence, with a suitable choice of potential, the equilibrium probability distribution for q is [3]:
This distribution matches the O(N) equilibrium theory considered above. Note that it is reasonable to assume that the spectrum starts at 0, corresponding to the “mass” h0; we denote the corresponding empirical distribution accordingly. Furthermore, we assume that the equilibrium statement regarding the large N limit holds, and that this distribution is also given by the (suitably shifted) spectrum of the ECM.
In the next section, we investigate analytical solutions of this Langevin-like equation using random matrix theory, and especially statements about large N Wishart matrices.
3 Analysis in the quenched regime
In this section, we construct the analytic solution of equation (10) by following the general method outlined in [24], drawing inspiration from Bray’s solution for phase-ordering kinetics [28] and the Cugliandolo-Dean solution for the p = 2 spherical spin glass [26]. The central assumption for deriving this solution is the self-averaging property of a(t), which decouples from the individual modes in the equation of motion (10) (the quenched regime). We expect this hypothesis to be sufficiently realistic in the continuous limit N → ∞, which is the limit of primary interest to us. The formal solution is then given by:
where:
Note that for large N, the eigenvalues λ are assumed to be distributed according to the Wigner semicircle law with appropriate variance, i.e.:

Also, the solution depends only on the numerical value of the corresponding eigenvalue. In the continuous regime, the squared average a(t) can be expressed in terms of the empirical distribution:
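The semicircle assumption can be verified numerically; the sketch below (size and unit variance are illustrative) checks that a suitably normalized symmetric Gaussian matrix has its spectrum confined to the semicircle support [-2, 2].

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000  # illustrative size

# Symmetric Gaussian (GOE-like) matrix, normalized so that the entry
# variance is 1/N and the eigenvalue density converges to the Wigner
# semicircle supported on [-2, 2].
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)
eigs = np.linalg.eigvalsh(H)

print(eigs.min(), eigs.max())  # spectral edges close to -2 and +2
```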
Using the solution (14), the previous equation leads to a formally closed equation for a(t):

where G(t) := e^{2g(t)}, and where F(t) is the convolution product of the function G(t) with H(t), namely:
Remark. In the general case, the spectrum of the correlation matrix exhibits a quasi-continuous bulk and a series of connected components (sometimes reduced to a single eigenvalue, as is the case for the largest eigenvalue). Our study focuses on the bulk, i.e. the continuous part of the spectrum. We therefore impose a cut-off at a certain eigenvalue in the correlation spectrum, thus retaining only a finite number of eigenvalues. In practice, the previous formulas then become:
For a purely uncorrelated distribution, well described by the Marchenko-Pastur (MP) distribution (see Appendix A), the function H(t) can be computed exactly. For a suitable choice of variance, we find:
Following [24], one obtains a more tractable equation from the observation that the long-time physics must be dominated by configurations such that:

This simply means that trajectories must be trapped by the minimum of the potential; for h0<0, this is the non-vanishing minimum, and from (18) we get:
This equation can be formally solved using the Laplace transform, which is defined for sufficiently well-behaved functions f(t) as:
We arrive at:

where the transformed kernel can be computed analytically for the MP law and is a decreasing function of p. For q = 1, we get:

This quantity is minimal for p = 0, and because a(t) is positive definite, we must have (as pointed out in [24], this estimation is pessimistic):

which yields the corresponding threshold for the MP law. Then, from the standard results about the asymptotic expression of inverse Laplace transforms near the origin (see Appendix B), we obtain the late-time behavior of a(t). Furthermore, the late-time 2-point correlation K(t), defined as:

decays more slowly than an exponential for the MP law below the critical temperature. This means that the memory of the initial condition is long-lived, i.e. it has an infinite exponential lifetime. Finally, let us notice that the method breaks down in the high-temperature regime, and the origin of this failure can be traced to the behavior of G(t), which diverges exponentially in that regime. In other words, above Tc the system is expected to relax toward equilibrium according to an exponential law, and the memory of the initial condition has a finite lifetime.
Let us remark that the same result may be obtained from the statement that the correlator G(t) admits a low-temperature expansion:

assumed to have a finite radius of convergence, which we identify below with the critical temperature. The functions G(n)(t) can be constructed recursively from the closed equation (18). It is straightforward to check that the functions {G(n)(t)} satisfy the following recurrence relations:

Hence, the Laplace transform of G(t) reads:

Because the transformed kernel is a decreasing function of p, the radius of convergence R is fixed by setting p = 0, and the series can be formally resummed for temperatures below this threshold.
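The Laplace-transform manipulations used in this section can be checked numerically. The sketch below (truncation horizon and test function are arbitrary choices, not tied to the paper's kernels) implements the definition with a simple trapezoidal rule and verifies it on an exponential, whose transform is known in closed form.

```python
import numpy as np

def laplace(f, p, t_max=60.0, n=200_001):
    """Numerical Laplace transform L[f](p) = int_0^inf f(t) e^{-p t} dt,
    truncated at t_max (valid when the integrand decays fast enough)."""
    t = np.linspace(0.0, t_max, n)
    y = f(t) * np.exp(-p * t)
    dt = t[1] - t[0]
    return dt * (y.sum() - 0.5 * (y[0] + y[-1]))  # trapezoidal rule

# Check against the exact result L[e^{-a t}](p) = 1/(p + a).
a = 0.5
for p in (0.5, 1.0, 3.0):
    print(p, laplace(lambda t: np.exp(-a * t), p), 1.0 / (p + a))
```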
4 Signal detection threshold for financial markets
Before examining how this topic relates to our context, let us first briefly review signal detection as described in reference [9]. PCA works well when the covariance matrix spectrum has a few dominant eigenvalues (“spikes”) that clearly separate signal from noise. This is evident when a small number of eigenvalues capture most of the variance, seen as a gap in the cumulative fraction. In many real datasets, however, the spectrum is nearly continuous: there are many relevant features spread over a wide range of eigenvalues without clear gaps. This makes it hard for PCA to cleanly separate signal from noise, as noisy and relevant directions mix strongly. This difficulty is similar to what happens in critical phenomena in statistical mechanics, where scales do not decouple cleanly. The renormalization group (RG) is a powerful method for addressing this by coarse-graining degrees of freedom hierarchically. Standard random matrix-based noise models (e.g., the Marčenko–Pastur law) help, but have limitations: they need detailed noise modeling, they must handle data sparsity, and they cannot fully separate strongly mixed relevant and noisy directions in nearly continuous spectra.
We then use scalar field theory as an analogy. There, the RG flow shows how interactions become relevant or irrelevant based on the shape of the energy spectrum and the dimension of the space [8]. Similarly, for data with a nearly continuous spectrum, one can describe correlations through an effective field theory that captures interactions among degrees of freedom. Instead of focusing on identifying isolated signals in the spectrum, the idea is to model the full structure within a field theory framework, treating the density of eigenvalues as analogous to a particle energy spectrum. This provides a universal description of interactions in the data, potentially overcoming PCA’s limitations for continuous spectra. Note that this idea connects to previous work where local interactions in momentum space were modeled. Reference [9] extends the approach by considering non-local effects and includes more numerical results and applications. As a result, it is possible to understand signal detection through significant changes in the universal properties of noise models, in particular in the number of relevant couplings by which asymptotic states in the infrared (IR) are distinguished. This is reminiscent of the physics of critical phenomena, and makes it possible to consider signal detection as a phase transition breaking the native symmetry of models based on a principle of maximum entropy. Moreover, the RG allows a natural understanding of the existence of a detection threshold, due to the existence of a compact subset of physically acceptable initial conditions included in the symmetric phase.
In the following analysis, we consider stocks from the S&P 500 index over the period from January 1, 2019, to January 1, 2024, using data downloaded from Yahoo Finance. We exclude stocks that were not present for the entire time range, leaving us with 485 stocks and 1,258 days of closing prices for each stock.
Instead of examining the system’s behavior under a single spectrum, we aim to compare different regimes corresponding to varying signal-to-noise ratios. Our general approach will involve constructing an interpolation between two extreme regimes: one that is highly correlated and another that is completely uncorrelated (spurious noise, which can be well-described by random matrix theory).
4.1 A model to vary the signal-to-noise ratio in the correlation of stock returns
We consider the Geometric Brownian Motion (GBM) model, which is a widely used stochastic process in finance for modeling asset prices. It assumes that stock prices follow a log-normal distribution. In its integral form, GBM describes the evolution of stock prices over time and is given by:
where S0 is the initial stock price, μ is the drift coefficient representing the expected return of the stock, σ is the volatility coefficient representing the standard deviation of the stock’s returns, and dWt is a Wiener process (or Brownian motion) (e.g., [29]), representing the random component of stock price changes.
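For illustration, the integral form of GBM can be simulated exactly over the same number of trading days as the sample described above; the drift and volatility values below are arbitrary stand-ins for the per-stock estimates computed from historical data.

```python
import numpy as np

def gbm_path(s0, mu, sigma, n_days, rng, dt=1 / 252):
    """Simulate a GBM path via its integral (exact) form:
    S_t = S_0 * exp((mu - sigma^2/2) t + sigma W_t)."""
    dW = rng.standard_normal(n_days) * np.sqrt(dt)  # Wiener increments
    W = np.cumsum(dW)
    t = dt * np.arange(1, n_days + 1)
    return s0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)

rng = np.random.default_rng(42)
# Illustrative parameters: mu and sigma would be estimated per stock.
path = gbm_path(s0=100.0, mu=0.05, sigma=0.2, n_days=1258, rng=rng)
log_returns = np.diff(np.log(path))
print(log_returns.std() * np.sqrt(252))  # annualized volatility, ~0.2
```

Using the same `rng` seed for every stock reproduces the fully correlated limit mentioned below, since all paths then share identical Wiener increments.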
In this model, there is no correlation between the generated stock prices, which is not the case in actual financial stocks. For each S&P 500 stock, we construct a simulated walk with varying degrees of ‘noise’ or ‘temperature’ as:
where St is the actual stock price. The GBM stock price is computed using S0, μ, and σ, which are estimated from the historical data. In contrast, we generate fully correlated walks by using the same random seed in equation 33, which we refer to as the fully correlated GBM, and apply the same weighting scheme:
In the left panel of Fig 2, we show the correlation matrix of the stock price log-returns:

where the returns are standardized by the standard deviation of stock i, computed over the period under consideration. The different figures represent various values of the noise parameter. The right panels show the distribution of the eigenvalues of these correlation matrices, where we display two fits: the Marčenko-Pastur (MP) distribution and the “rescaled” MP distribution, which subtracts the contribution of the larger eigenvalues [21]. We also mark the position of the cut-off value (dashed vertical red line), which defines the threshold of the continuous spectrum (eigenvalues to its left are within the continuous spectrum). In the pure-noise limit, we observe that the MP distribution is recovered, indicating a purely uncorrelated matrix.
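The construction of the log-return correlation matrix and its comparison with the MP bulk can be sketched as follows. Since the actual price panel is not reproduced here, independent GBM walks stand in for the fully uncorrelated end of the interpolation; the dimensions match those quoted for the S&P sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_days = 485, 1258  # dimensions quoted for the S&P 500 sample

# Stand-in price panel: independent GBM walks (the fully uncorrelated
# limit); real S&P 500 prices would be loaded here instead.
dt, mu, sigma = 1 / 252, 0.05, 0.2
W = np.cumsum(rng.standard_normal((n_stocks, n_days)), axis=1) * np.sqrt(dt)
t = dt * np.arange(1, n_days + 1)
prices = 100.0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)

# Correlation matrix of daily log-returns.
returns = np.diff(np.log(prices), axis=1)
C = np.corrcoef(returns)
eigs = np.linalg.eigvalsh(C)

# Marchenko-Pastur bulk edges for a correlation matrix (unit variance).
q = n_stocks / returns.shape[1]
lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
inside = np.mean((eigs > lam_minus) & (eigs < lam_plus))
print(inside)  # close to 1: almost all eigenvalues fall in the MP bulk
```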
4.2 Validation of the quenched average hypothesis
We begin by computing a(t) numerically (Eq 12) to test whether, at low temperatures, the trajectories averaged over all eigenvalues of the continuous spectrum in Eq 10 relax to the local minimum of the potential, a0. This confirms that the system self-averages at large times, which supports the quenched-average hypothesis. At high temperatures, on the other hand, we expect that trajectories are not confined to the local minimum, as they do not self-average; instead they keep oscillating around the minimum of the local potential, with a confinement proportional to the depth of the potential. The left panels of Fig 3 show the behavior of a(t) below the critical temperature, as calculated from Eq 28, for the two cases corresponding to the top and lower panels. For different values of β (signal) we compute a(t) for five different realizations of the noise in Eq 10. In contrast, the right panels illustrate the high-temperature regime, where the system’s energy exceeds the potential well, causing trajectories to oscillate around a higher equilibrium point. Note that for negative β of sufficiently large magnitude, the trajectory diverges in the low-temperature regime, indicating that the self-averaging assumption breaks down. In both cases, we recover the expected behavior, with variations in the local minimum potential (a0).
4.3 Inferring signal in the continuous spectrum
We now turn to inferring any signal from the continuous spectrum by considering the correlation:

The brackets represent the ensemble average over the Brownian noise realizations. We focus specifically on the case where t0 = 0; due to the chosen initial conditions, the correlation function essentially reduces to the average of the trajectories. Fig 4 shows the short-term behavior of some “eigen-trajectories” for modes near the edge of the continuous spectrum (corresponding to the larger eigenvalues of the correlation matrix) and for different degrees of noise: one extreme value corresponds to the most correlated trajectories, the other to purely uncorrelated trajectories, and the intermediate value to the true behavior of the financial stocks.
In the low-temperature regime (left panels), we observe a qualitative difference in behavior for positive versus negative values of β, as well as for small versus large eigenvalues: close to the edge of the spectrum, the trajectories decay more slowly than a power law for positive values of β, and exponentially for sufficiently large negative values of β. Further from the edge (i.e., for small eigenvalues of the ECM of the returns), the trajectories always decay more slowly than a power law. Note that at high temperatures (right figure), the system’s behavior remains essentially the same as at low temperature, except for additional fluctuations.
To quantify the different regimes of signal-to-noise ratios, we show in Fig 5 the behavior of the exponents α and γ, obtained by fitting the correlation functions, for different eigenvalues (labelled by μ) and various values of β. The blue and red curves correspond to the two cases considered, respectively. We observe that the fitted values of α and γ are both consistent across the different values of μ, but with values that differ by orders of magnitude depending on μ. This result also holds in the high-temperature regime, though numerical instabilities are more pronounced than in the low-temperature regime. Note that the discontinuity of the function and its derivative at the cut-off is expected from the model; see equations (35) and (34).
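A minimal sketch of such an exponent fit, assuming a decay of the form K(t) = A t^(-α) e^(-γt) (this functional form is an assumption for illustration, not necessarily the paper's exact fitting form), is linear in log-space:

```python
import numpy as np

def fit_decay(t, K):
    """Least-squares fit of log K = log A - alpha*log t - gamma*t."""
    X = np.column_stack([np.ones_like(t), -np.log(t), -t])
    coef, *_ = np.linalg.lstsq(X, np.log(K), rcond=None)
    log_A, alpha, gamma = coef
    return alpha, gamma

# Synthetic correlation function with known exponents alpha=0.5, gamma=0.3.
t = np.linspace(1.0, 20.0, 200)
K = 2.0 * t**-0.5 * np.exp(-0.3 * t)
alpha, gamma = fit_decay(t, K)
print(alpha, gamma)  # recovers 0.5 and 0.3
```

A pure power-law decay corresponds to γ close to 0, while a dominant exponential decay gives a large γ, which is how the two regimes discussed above can be told apart.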
These observations indicate that the underlying kinetics for the original data spectrum is qualitatively different from the kinetics of a purely Gaussian signal. These conclusions appear to contradict the observations made in [21,30] and suggest that the degrees of freedom near the cut-off imposed by the continuity criterion on the bulk are informative.
Another indicator supporting this conclusion comes from comparing the concavity of the correlation function for different values of μ at short time intervals (the second derivative of the correlation function with respect to time). In Fig 6, we show this behavior (averaged over 1000 realizations) for large and small eigenvalues (left and right panels, respectively) in the low-temperature regime, where relaxation toward equilibrium is not expected for a purely random correlation matrix.
Fig 6 (bottom): behavior of the second derivative in both cases.
Once again, we demonstrate that for large eigenvalues, the corresponding correlation functions have a second derivative greater than 1, indicating the presence of a signal. In contrast, small eigenvalues do not exhibit this behavior.
5 Conclusion
In this paper, we applied random matrix theory and large N methods [3] to construct and analyze formal solutions for a stochastic system governed by the correlation matrix of the financial stock returns. Our model is presented as a non-equilibrium version of a statistical inference model, capturing the correlation structure in the spectrum discussed in [9,20]. The underlying field theory is non-local due to the delocalized nature of the eigenvectors in a regime well described by random matrix theory.
Following [28], the equation of motion (Eq 10) can be formally solved in the quenched regime, which is justified a priori as long as the correlation matrix is well-described by random matrix theory (i.e., when N is large enough and eigenvectors are delocalized). In this regime, the O(N) invariant self-averages, and the initial condition for all μ is equivalent to choosing the coordinates qi(0) randomly. The solution constructed in this way, and numerically validated, reveals the existence of a critical temperature below which the system never reaches equilibrium (i.e., infinite correlation time), consistent with spin glass dynamics theory [24,26].
Our objective was to study the impact of localized degrees of freedom in the spectrum on the system’s temporal behavior. To achieve this, we used real data from the S&P 500, which we corrupted with Gaussian noise via a Wiener process and interpolated between a perfectly correlated regime and a completely random one.
Our main result shows that the evolution of the components associated with the largest eigenvalues of the correlation spectrum exhibits different behavior than that of the smaller ones, depending on the level of correlation in the data. For a purely Gaussian correlation matrix, the relaxation time is very large, in agreement with the analytical predictions (see equation (14)). However, as the level of correlation increases, the correlation time decreases by one or two orders of magnitude. The behavior of the components corresponding to the largest eigenvalues in the original correlation matrix differs from that of a purely Gaussian matrix, pointing toward a signal detection. Indeed, we find that the underlying kinetics at the tail of the spectrum significantly diverges from what is expected for large Wishart random matrices.
Appendix A: Marchenko-Pastur theorem
In this section we recall the standard statement in random matrix theory known as the Marchenko-Pastur (MP) theorem [29]:
Theorem. Let X be an N × T random matrix with i.i.d. entries of variance σ². As N, T → ∞ keeping the ratio q = N/T fixed, the empirical eigenvalue distribution of the corresponding random Wishart matrix Z := XXT/T converges weakly toward the MP distribution:

where λ± = σ²(1 ± √q)² are the edges of the support.
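A minimal numerical check of the theorem (the sizes are illustrative): the empirical spectral distribution of a Gaussian Wishart matrix matches the MP prediction.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T = 400, 1600  # q = N/T = 0.25, unit-variance entries
X = rng.standard_normal((N, T))
eigs = np.linalg.eigvalsh(X @ X.T / T)

q = N / T
lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

def mp_density(x, q, s2=1.0):
    """Marchenko-Pastur density, supported on [lam_minus, lam_plus]."""
    lm, lp = s2 * (1 - np.sqrt(q)) ** 2, s2 * (1 + np.sqrt(q)) ** 2
    return np.sqrt(np.maximum((lp - x) * (x - lm), 0.0)) / (2 * np.pi * q * s2 * x)

# Compare the empirical CDF at the bulk midpoint with the MP prediction
# (trapezoidal integration of the density).
mid = 0.5 * (lam_minus + lam_plus)
emp = np.mean(eigs < mid)
xs = np.linspace(lam_minus, mid, 20001)
dens = mp_density(xs, q)
theo = np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(xs))
print(emp, theo)  # the two fractions agree closely
```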
Fig 7 provides a graphical illustration of the MP theorem. Any (normalized) eigenvector of the random matrix for some eigenvalue λ is delocalized, i.e. uniformly distributed on the (N−1)-dimensional sphere of radius 1, and the distribution of its components can be constructed as the maximum entropy distribution compatible with the normalization constraint, the so-called Porter-Thomas distribution:
Fig 7. The blue curve is the MP law for q = 2.
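The Porter-Thomas statement can be checked numerically: the components of vectors drawn uniformly on the unit sphere, once rescaled by N, behave like a chi-squared variable with one degree of freedom (the sample sizes below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(11)
N, n_samples = 1000, 500  # illustrative sizes

# Uniform unit vectors on the sphere: normalize i.i.d. Gaussian vectors.
V = rng.standard_normal((n_samples, N))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Porter-Thomas: for large N a single component u_i is approximately
# Gaussian with variance 1/N, so y = N u_i^2 is chi-squared with 1 dof.
u = V.ravel()
y = N * u**2
print(u.var() * N)        # ~1: component variance is 1/N
print(y.mean(), y.var())  # chi^2_1 has mean 1 and variance 2
```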
Appendix B: Inverse Laplace transform
We provide here a useful theorem about the inverse Laplace transform, whose proof can be found in [31]:
Theorem. Let f(t) be a locally integrable function on (0, ∞) admitting an asymptotic expansion as t → ∞ with exponents rm < 0. If the Mellin transform of this function is defined, then the Laplace transform of f(t) admits a corresponding asymptotic expansion near the origin, whose coefficients are given by the Mellin transform of the function f(t).
References
- 1. Jaynes ET. Information theory and statistical mechanics. Phys Rev. 1957;106(4):620–30.
- 2. Balog I, Rançon A, Delamotte B. Critical probability distributions of the order parameter from the functional renormalization group. Phys Rev Lett. 2022;129(21):210602.
- 3. Zinn-Justin J. Quantum field theory and critical phenomena. Oxford University Press; 2021.
- 4. Zinn-Justin J. From random walks to random matrices. Oxford University Press; 2019.
- 5. Jaynes ET. Information theory and statistical mechanics. Phys Rev. 1957;106(4):620–30.
- 6. Shlens J. A tutorial on principal component analysis. arXiv preprint 2014.
- 7. Veraart J, Novikov DS, Christiaens D, Ades-Aron B, Sijbers J, Fieremans E. Denoising of diffusion MRI using random matrix theory. Neuroimage. 2016;142:394–406. pmid:27523449
- 8. Bradde S, Bialek W. PCA meets RG. J Stat Phys. 2017;167:462–75.
- 9. Lahoche V, Samary DO, Tamaazousti M. Functional renormalization group approach for signal detection. arXiv preprint 2022. arXiv:2201.04250
- 10. Green DM, Swets JA. Signal detection theory and psychophysics. Wiley; 1966.
- 11. Murphy JJ. Technical analysis of the financial markets. 1999.
- 12. Atsalakis GS, Valavanis KP. Surveying stock market forecasting techniques – Part II: soft computing methods. Expert Systems with Applications. 2009;36(3):5932–41.
- 13. Engle RF. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom Inflation. Econometrica. 1982;50(4):987.
- 14. Hamilton JD. Time series analysis. Princeton University Press; 1994.
- 15. Tetlock PC. Giving content to investor sentiment: the role of media in the stock market. The Journal of Finance. 2007;62(3):1139–68.
- 16. Lahoche V, Samary DO, Tamaazousti M. Generalized scale behavior and renormalization group for data analysis. J Stat Mech. 2022;2022(3):033101.
- 17. Lahoche V, Ousmane Samary D, Tamaazousti M. Field theoretical approach for signal detection in nearly continuous positive spectra I: matricial data. Entropy (Basel). 2021;23(9):1132. pmid:34573756
- 18. Lahoche V, Ouerfelli M, Samary DO, Tamaazousti M. Field theoretical approach for signal detection in nearly continuous positive spectra II: tensorial data. Entropy (Basel). 2021;23(7):795. pmid:34201501
- 19. Lahoche V, Ousmane Samary D, Tamaazousti M. Signal detection in nearly continuous spectra and Z2-symmetry breaking. Symmetry. 2022;14(3):486.
- 20. Erbin H, Finotello R, Kpera BW, Lahoche V, Ousmane Samary D. Functional renormalization group for signal detection and stochastic ergodicity breaking. arXiv preprint 2023. https://arxiv.org/abs/2310.07499
- 21. Laloux L, Cizeau P, Bouchaud J-P, Potters M. Noise dressing of financial correlation matrices. Phys Rev Lett. 1999;83(7):1467–70.
- 22. Plerou V, Gopikrishnan P, Rosenow B, Nunes Amaral LA, Stanley HE. Universal and nonuniversal properties of cross correlations in financial time series. Phys Rev Lett. 1999;83(7):1471–4.
- 23. Achitouv I. Inferring financial stock returns correlation from complex network analysis. 2024.
- 24. Lahoche V, Ousmane Samary D. Low-temperature dynamics for confined p = 2 soft spin in the quenched regime. Eur Phys J Plus. 2023;138(5).
- 25. Moshe M, Zinn-Justin J. Quantum field theory in the large N limit: a review. Physics Reports. 2003;385(3–6):69–228.
- 26. Cugliandolo LF, Dean DS. Full dynamical solution for a spherical spin-glass model. J Phys A: Math Gen. 1995;28(15):4213–34.
- 27. Castellani T, Cavagna A. Spin-glass theory for pedestrians. Journal of Statistical Mechanics: Theory and Experiment. 2005;2005(05): P05012.
- 28. Bray AJ. Theory of phase-ordering kinetics. Advances in Physics. 2002;51(2):481–587.
- 29. Potters M, Bouchaud J-P. A first course in random matrix theory: for physicists, engineers and data scientists. Cambridge University Press; 2020.
- 30. Laloux L, Cizeau P, Potters M, Bouchaud J-P. Random matrix theory and financial correlations. Int J Theor Appl Finan. 2000;03(03):391–7.
- 31. Handelsman RA, Lew JS. Asymptotic expansion of laplace transforms near the origin. SIAM J Math Anal. 1970;1(1):118–30.