Abstract
The present study addresses the problem of sequential least square multidimensional linear regression, particularly in the case of a data stream, using a stochastic approximation process. To avoid the phenomenon of numerical explosion which can be encountered and to reduce the computing time in order to take into account a maximum of arriving data, we propose using a process with online standardized data instead of raw data and the use of several observations per step or all observations until the current step. Herein, we define and study the almost sure convergence of three processes with online standardized data: a classical process with a variable step-size and use of a varying number of observations per step, an averaged process with a constant step-size and use of a varying number of observations per step, and a process with a variable or constant step-size and use of all observations until the current step. Their convergence is obtained under more general assumptions than classical ones. These processes are compared to classical processes on 11 datasets for a fixed total number of observations used and thereafter for a fixed processing time. Analyses indicate that the third-defined process typically yields the best results.
Citation: Duarte K, Monnez J-M, Albuisson E (2018) Sequential linear regression with online standardized data. PLoS ONE 13(1): e0191186. https://doi.org/10.1371/journal.pone.0191186
Editor: Chenping Hou, National University of Defense Technology, CHINA
Received: April 1, 2017; Accepted: December 31, 2017; Published: January 18, 2018
Copyright: © 2018 Duarte et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All datasets used in our experiments except those derived from the EPHESUS study are available online, and links to download these data appear in Table 2 of our article. Due to legal restrictions, data from the EPHESUS study are only available upon request. Interested researchers may request access to data upon approval from the EPHESUS Executive Steering Committee of the study. This committee can be reached through Pr Faiez Zannad (f.zannad@chu-nancy.fr), who is a member of this board.
Funding: This work is supported by a public grant overseen by the French National Research Agency (ANR) as part of the second “Investissements d’Avenir” programme (reference: ANR-15-RHU-0004). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
In the present analysis, A′ denotes the transposed matrix of A while the abbreviation “a.s.” signifies almost surely.
Let R = (R1,…,Rp)′ and S = (S1,…,Sq)′ be random vectors in ℝ^p and ℝ^q respectively. In the least square multidimensional linear regression of S with respect to R, the (p, q) matrix θ and the (q, 1) matrix η are estimated such that E[‖S − θ′R − η‖2] is minimal.
Denote the covariance matrices B = E[(R − E[R])(R − E[R])′] and F = E[(R − E[R])(S − E[S])′].
If we assume B is positive definite, i.e. there is no affine relation between the components of R, then θ = B−1 F and η = E[S] − θ′ E[R].
Note that, R1 denoting the random vector in ℝ^{p+1} such that R1 = (R′ 1)′, θ1 the (p + 1, q) matrix such that θ1′ = (θ′ η), B1 = E[R1 R1′] and F1 = E[R1 S′], we obtain θ1 = (B1)−1 F1.
In order to estimate θ (or θ1), a stochastic approximation process (Xn) in ℝ^{p×q} (or ℝ^{(p+1)×q}) is recursively defined such that
Xn+1 = Xn − an (Bn Xn − Fn),
where (an) is a sequence of positive real numbers, possibly constant, called step-sizes (or gains). Matrices Bn and Fn have the same dimensions as B and F, respectively. The convergence of (Xn) towards θ is studied under appropriate definitions of, and assumptions on, Bn and Fn.
Suppose that ((R1n, Sn), n ≥ 1) is an i.i.d. sample of (R1, S). In the case where q = 1, Bn = R1n (R1n)′ and Fn = R1n Sn, several studies have been devoted to this stochastic gradient process (see for example Monnez [1], Ljung [2] and references therein). In order to accelerate general stochastic approximation procedures, Polyak [3] and Polyak and Juditsky [4] introduced the averaging technique. In the case of linear regression, Györfi and Walk [5] studied an averaged stochastic approximation process with a constant step-size. With the same type of process, Bach and Moulines [6] proved that the optimal convergence rate is achieved without a strong convexity assumption on the loss function.
However, this type of process may be subject to the risk of numerical explosion when components of R or S have large variances and may take very high values. For the datasets used as test sets by Bach and Moulines [6], all sample points whose norm of R is more than fivefold the average norm were removed. Moreover, generally only one observation of (R, S) is introduced at each step of the process. This may not be convenient for a large amount of data generated, for example, by a data stream.
Two modifications of this type of process are thus proposed in this article.
The first change, in order to avoid numerical explosion, is the use of standardized, i.e. zero-mean and unit-variance, components of R and S. In practice, the expectation and the variance of the components are usually unknown and will be estimated online.
The parameter θ can be computed from the standardized components as follows. Define Γ (respectively Γ1) as the diagonal matrix of order p (respectively q) whose diagonal elements are the inverses of the standard deviations of the components of R (respectively S).
Let Sc = Γ1(S − E[S]) and Rc = Γ(R − E[R]). The least square linear regression of Sc with respect to Rc is achieved by estimating the (p, q) matrix θc such that E[‖Sc − θc′ Rc‖2] is minimal. Then θc = Γ−1(B−1 F)Γ1 ⇔ θ = B−1 F = Γθc(Γ1)−1.
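As a quick numerical check of this relation (our own illustration with synthetic data; dimensions, scales and variable names are arbitrary), a regression on the standardized variables recovers the raw-scale coefficients through θ = Γθc(Γ1)−1:

```python
import numpy as np

# synthetic data: dimensions, scales and names are arbitrary (illustration only)
rng = np.random.default_rng(0)
p, q, n = 3, 2, 10_000
R = rng.normal(size=(n, p)) * np.array([1.0, 5.0, 0.2])    # raw regressors, mixed scales
theta_true = rng.normal(size=(p, q))
S = R @ theta_true + 1.0 + 0.01 * rng.normal(size=(n, q))  # S = θ'R + η + small noise

Gamma = np.diag(1.0 / R.std(axis=0))     # Γ: inverse standard deviations of R
Gamma1 = np.diag(1.0 / S.std(axis=0))    # Γ1: inverse standard deviations of S
Rc = (R - R.mean(axis=0)) @ Gamma        # standardized regressors
Sc = (S - S.mean(axis=0)) @ Gamma1       # standardized responses

theta_c = np.linalg.lstsq(Rc, Sc, rcond=None)[0]  # θc: regression on standardized data
theta = Gamma @ theta_c @ np.linalg.inv(Gamma1)   # back-transform: θ = Γ θc (Γ1)^{-1}
```

Here `theta` agrees with the coefficients of the raw-scale regression, while the iterations of a process working on `Rc`, `Sc` only ever see well-conditioned standardized variables.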
The second change is to use, at each step of the process, several observations of (R, S) or an estimation of B and F computed recursively from all observations until the current step without storing them.
More precisely, the convergence of three processes with online standardized data is studied in sections 2, 3, 4 respectively.
First, in section 2, a process with a variable step-size an and use of several online standardized observations at each step is studied; note that the number of observations at each step may vary with n.
Secondly, in section 3, an averaged process with a constant step-size and use of a varying number of online standardized observations at each step is studied.
Thirdly, in section 4, a process with a constant or variable step-size and use of all online standardized observations until the current step to estimate B and F is studied.
These three processes are tested on several datasets when q = 1, S being a continuous or binary variable, and compared to existing processes in section 5. Note that when S is a binary variable, linear regression is equivalent to linear discriminant analysis. It appears that the third-defined process most often yields the best results for the same number of observations used or for the same computing time.
These processes belong to the family of stochastic gradient processes and are adapted to data streams. Batch gradient and stochastic gradient methods are presented and compared in [7] and reviewed in [8], including noise reduction methods such as dynamic sample-size methods, stochastic variance reduced gradient (also studied in [9]), second-order methods, ADAGRAD [10] and others. This work makes the following contributions to the variance reduction methods:
- In [9], the authors proposed a modification of the classical stochastic gradient algorithm that directly reduces the variance of the gradient estimate in order to obtain faster convergence. In this article, it is proposed to reduce this variance by an online standardization of the data.
- Gradient clipping [11] is another method to avoid numerical explosion. The idea is to limit the norm of the gradient to a maximum value called the threshold. This threshold must be chosen, and a bad choice can affect the computing speed. Moreover, the norm of the gradient must then be compared to this threshold at each step. In our approach, the limitation of the gradient is obtained implicitly by online standardization of the data.
- If the expectations and variances of the components of R and S were known, these variables could be standardized directly and the convergence of the processes obtained using existing theorems. But these moments are unknown in the case of a data stream and are estimated online in this study. Thus the assumptions of the theorems of almost sure (a.s.) convergence of the processes studied in sections 2 and 3, and the corresponding proofs, are more general than the classical ones in the linear regression case [1–5].
- The process defined in section 4 is not a classical batch method. Indeed, in such methods (gradient descent), the whole dataset is known a priori and is used at each step of the process. In the present study, new data are supposed to arrive at each step, as in a data stream, and are added to the preceding set of data, thus reducing the variance by averaging. This process can be considered a dynamic batch method.
- A suitable choice of step-size is often crucial for obtaining good performance of a stochastic gradient process. If the step-size is too small, the convergence will be slower. Conversely, if the step-size is too large, a numerical explosion may occur during the first iterations. Following [6], a very simple choice of the step-size is proposed for the methods with a constant step-size.
- Another objective is to reduce computing time in order to take into account a maximum of data in the case of a data stream. The experiments show that using all observations until the current step without storing them, with several observations introduced at each step, generally yields the best convergence speed. Moreover, this can reduce the influence of outliers.
As a whole, the major contributions of this work are to reduce the gradient variance by online standardization of the data or by use of a “dynamic” batch process, to avoid numerical explosions, and to reduce computing time, thereby better adapting the stochastic approximation processes used to the case of a data stream.
2 Convergence of a process with a variable step-size
Let (Bn, n ≥ 1) and (Fn, n ≥ 1) be two sequences of random matrices in ℝ^{p×p} and ℝ^{p×q} respectively. In this section, the convergence of the process (Xn, n ≥ 1) in ℝ^{p×q} recursively defined by
Xn+1 = Xn − an (Bn Xn − Fn)
and its application to sequential linear regression are studied.
2.1 Theorem
Let X1 be a random variable in ℝ^{p×q} independent of the sequence of random variables ((Bn, Fn), n ≥ 1) in ℝ^{p×p} × ℝ^{p×q}.
Denote Tn the σ-field generated by X1 and (B1, F1),…,(Bn−1, Fn−1). X1, X2,…,Xn are Tn-measurable.
Let (an) be a sequence of positive numbers.
Make the following assumptions:
(H1a) There exists a positive definite symmetrical matrix B such that a.s.
1) ∑n an ‖E[Bn|Tn] − B‖ < ∞
2) ∑n an2 E[‖Bn − B‖2|Tn] < ∞.
(H2a) There exists a matrix F such that a.s.
1) ∑n an ‖E[Fn|Tn] − F‖ < ∞
2) ∑n an2 E[‖Fn − F‖2|Tn] < ∞.
(H3a) an → 0 and ∑n an = ∞.
Theorem 1 Suppose H1a, H2a and H3a hold. Then Xn converges to θ = B−1 F a.s.
We first state the Robbins-Siegmund lemma [12] used in the proof.
Lemma 2 Let (Ω, A, P) be a probability space and (Tn) a non-decreasing sequence of sub-σ-fields of A. Suppose for all n, zn, αn, βn and γn are four integrable non-negative Tn-measurable random variables defined on (Ω, A, P) such that:
E[zn+1|Tn] ≤ zn(1 + αn) + βn − γn.
Then, in the set {∑n αn < ∞, ∑n βn < ∞}, (zn) converges to a finite random variable and ∑n γn < ∞ a.s.
Proof of Theorem 1. The Frobenius norm ‖A‖ for a matrix A is used. Recall that, if ‖A‖2 denotes the spectral norm of A, ‖AB‖ ≤ ‖A‖2‖B‖.
Denote Zn = (Bn − B)Xn − (Fn − F) = (Bn − B)(Xn − θ) + (Bn − B)θ − (Fn − F). Then:
Xn+1 − θ = (I − an B)(Xn − θ) − an Zn.
Denote λ the smallest eigenvalue of B. As an → 0, we have for n sufficiently large
Then, taking the conditional expectation with respect to Tn yields almost surely:
Applying the Robbins-Siegmund lemma under assumptions H1a, H2a and H3a implies that there exists a non-negative random variable T such that a.s. ‖Xn − θ‖2 → T and ∑n an ‖Xn − θ‖2 < ∞.
As ∑n an = ∞, T = 0 a.s. ∎
A particular case with the following assumptions is now studied.
(H1a’) There exist a positive definite symmetrical matrix B and a positive real number b such that a.s.
1) for all n, E[Bn|Tn] = B
2) E[‖Bn − B‖2|Tn] < b.
(H2a’) There exist a matrix F and a positive real number d such that a.s.
1) for all n, E[Fn|Tn] = F
2) E[‖Fn − F‖2|Tn] < d.
(H3a’) Denoting λ the smallest eigenvalue of B,
or
.
Theorem 3 Suppose H1a’, H2a’ and H3a’ hold. Then Xn converges to θ almost surely and in quadratic mean. Moreover .
Proof of Theorem 3. In the proof of theorem 1, take βn = 0, δn = 0, bn < b, dn < d; then a.s.:
Taking the mathematical expectation yields:
As , t = 0. Therefore, there exist
and f > 0 such that for n > N:
Applying a lemma of Schmetterer [13] for with
yields:
Applying a lemma of Venter [14] for with
yields:
2.2 Application to linear regression with online standardized data
Let (R1, S1),…,(Rn, Sn),… be an i.i.d. sample of a random vector (R, S) in ℝ^p × ℝ^q. Let Γ (respectively Γ1) be the diagonal matrix of order p (respectively q) of the inverses of the standard deviations of the components of R (respectively S).
Define the correlation matrices B = Γ E[(R − E[R])(R − E[R])′] Γ and F = Γ E[(R − E[R])(S − E[S])′] Γ1.
Suppose that B−1 exists. Let θ = B−1 F.
Denote R̄n (respectively S̄n) the mean of the n-sample (R1, R2,…,Rn) of R (respectively (S1, S2,…,Sn) of S).
Denote (σnj)2 the variance of the n-sample (R1j,…,Rnj) of the jth component Rj of R, and (σ1nk)2 the variance of the n-sample (S1k,…,Snk) of the kth component Sk of S.
Denote Γn (respectively Γ1n) the diagonal matrix of order p (respectively q) whose element (j, j) (respectively (k, k)) is the inverse of σnj (respectively σ1nk).
Let (mn, n ≥ 1) be a sequence of positive integers. Denote for n ≥ 1, Mn = m1 + ⋯ + mn, M0 = 0 and In = {Mn−1 + 1,…,Mn}.
Define recursively the process (Xn, n ≥ 1) in ℝ^{p×q} by Xn+1 = Xn − an (Bn Xn − Fn), with
Bn = ΓMn−1 [(1/mn) ∑j∈In (Rj − R̄Mn−1)(Rj − R̄Mn−1)′] ΓMn−1,
Fn = ΓMn−1 [(1/mn) ∑j∈In (Rj − R̄Mn−1)(Sj − S̄Mn−1)′] Γ1Mn−1.
Corollary 4 Suppose there is no affine relation between the components of R and the moments of order 4 of (R, S) exist. Suppose moreover that assumption H3a” holds:
(H3a”)
Then Xn converges to θ a.s.
This process was tested on several datasets and some results are given in section 5 (process S11 for mn = 1 and S12 for mn = 10).
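A minimal self-contained sketch of such a process for q = 1 follows (our own illustrative code, not the authors' implementation: a synthetic stream plays the role of the data stream, mn = m is fixed, and each mini-batch is standardized with the means and standard deviations estimated from all preceding observations):

```python
import numpy as np

def sgd_online_standardized(stream, p, a=lambda n: 1.0 / (20 + n), m=10, n_steps=1000):
    """Variable-step-size stochastic approximation on online-standardized data.

    stream(k) must return k fresh observations (R, S) as arrays of shapes
    (k, p) and (k,).  The iterate estimates the coefficient vector of the
    regression of the standardized S on the standardized R.
    """
    R, S = stream(50)                          # small preliminary sample for the moments
    count = 50
    mu_R, mu_S = R.mean(axis=0), S.mean()
    sq_R, sq_S = (R ** 2).mean(axis=0), (S ** 2).mean()
    X = np.zeros(p)
    for n in range(1, n_steps + 1):
        sd_R = np.sqrt(np.maximum(sq_R - mu_R ** 2, 1e-12))
        sd_S = np.sqrt(max(sq_S - mu_S ** 2, 1e-12))
        R, S = stream(m)                       # m_n = m new observations at step n
        Rc = (R - mu_R) / sd_R                 # standardized with the estimates
        Sc = (S - mu_S) / sd_S                 #   built from *previous* observations
        X = X - a(n) * Rc.T @ (Rc @ X - Sc) / m   # X_{n+1} = X_n - a_n (B_n X_n - F_n)
        w = m / (count + m)                    # update the running moments
        mu_R, mu_S = (1 - w) * mu_R + w * R.mean(axis=0), (1 - w) * mu_S + w * S.mean()
        sq_R = (1 - w) * sq_R + w * (R ** 2).mean(axis=0)
        sq_S = (1 - w) * sq_S + w * (S ** 2).mean()
        count += m
    return X

# synthetic stream: S = theta' R + noise, with very different variances per component
rng = np.random.default_rng(1)
sigma, theta = np.array([2.0, 1.0, 5.0]), np.array([1.0, -2.0, 0.5])

def stream(k):
    R = rng.normal(size=(k, 3)) * sigma
    return R, R @ theta + rng.normal(size=k)

X = sgd_online_standardized(stream, 3)
theta_c = sigma * theta / np.sqrt(np.sum((sigma * theta) ** 2) + 1.0)  # standardized-scale target
```

On this stream the iterate approaches `theta_c`, the coefficient vector of the regression of the standardized S on the standardized R, i.e. the vector θc of section 1.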
The following lemma is first proved.
Lemma 5 Suppose the moments of order 4 of R exist and an > 0, . Then
and
a.s.
Proof of Lemma 5. The usual Euclidean norm for vectors and the spectral norm for matrices are used in the proof.
Step 1:
It follows that a.s.
Likewise a.s.
Denote the centered moment of order 4 of Rj. We have:
As a.s., j = 1,…,p, this implies:
Proof of Corollary 4.
Step 1: prove that assumption H1a1 of theorem 1 is verified.
As ΓMn−1 and are Tn-measurable and
, j ∈ In, is independent of Tn, with
:
As ΓMn−1 and converge respectively to Γ and 0 a.s. and by lemma 5,
and
a.s., it follows that
a.s.
Step 2: prove that assumption H1a2 of theorem 1 is verified.
As ΓMn−1 and converge respectively to Γ and 0 a.s., and
, it follows that
a.s.
Step 3: the proofs of the verification of assumptions H2a1 and H2a2 of theorem 1 are similar to the previous ones, Bn and B being respectively replaced by
3 Convergence of an averaged process with a constant step-size
In this section, the process (Xn, n ≥ 1) with a constant step-size a and the averaged process (Yn, n ≥ 1) in ℝ^{p×q} are recursively defined by
Xn+1 = Xn − a (Bn Xn − Fn), Yn = (1/n) (X1 + ⋯ + Xn).
The a.s. convergence of (Yn, n ≥ 1) and its application to sequential linear regression are studied.
3.1 Lemma
Lemma 6 Let (un), (vn) and (an) be three real sequences, with un > 0 and an > 0 for all n, and let λ be a positive real number such that, for n ≥ 1,
un+1 ≤ (1 − an λ) un + an λ vn.
Suppose:
- 1) vn → 0
- 2) an → 0 and ∑n an = ∞, or an = a constant with 0 < 1 − aλ < 1.
Under assumptions 1 and 2, un → 0.
Proof of Lemma 6. In the case an depending on n, as an → 0, we can suppose without loss of generality that 1 − an λ > 0 for n ≥ 1. We have:
Now, for n1 ≤ n2 ≤ n and 0 < ci < 1 with ci = ai λ for all i, we have:
Let ϵ > 0. There exists N such that for i > N, . Then for n ≥ N, applying the previous inequality with ci = ai λ, n1 = 1, n2 = N, yields:
In the case an depending on n, ln(1 − ai λ) ∼ −ai λ as ai → 0(i → ∞); then, as ,
.
In the case an = a, as 0 < 1 − aλ < 1.
Thus there exists N1 such that un+1 < ϵ for n > N1 ∎
3.2 Theorem
Make the following assumptions
(H1b) There exist a positive definite symmetrical matrix B in ℝ^{p×p} and a positive real number b such that a.s.
1) limn → ∞(E[Bn|Tn] − B) = 0
2)
3) supn E[‖Bn−B‖2|Tn] ≤ b.
(H2b) There exist a matrix F in ℝ^{p×q} and a positive real number d such that a.s.
1) limn→∞(E[Fn|Tn] − F) = 0
2) supn E [‖Fn − F‖2|Tn] ≤ d.
(H3b) λ and λmax being respectively the smallest and the largest eigenvalue of B, 0 < a λmax < 1.
Theorem 7 Suppose H1b, H2b and H3b hold. Then Yn converges to θ = B−1 F a.s.
Remark 1 Györfi and Walk [5] proved that Yn converges to θ a.s. and in quadratic mean under the assumptions E[Bn|Tn] = B, E[Fn|Tn] = F, H1b2 and H2b2. Theorem 7 is an extension of their a.s. convergence result when E[Bn|Tn] → B and E[Fn|Tn] → F a.s.
Remark 2 Define B = E[R1 R1′], F = E[R1 S′]. If ((R1n, Sn), n ≥ 1) is an i.i.d. sample of (R1, S) whose moments of order 4 exist, assumptions H1b and H2b are verified for Bn = (1/mn) ∑j∈In R1j (R1j)′ and Fn = (1/mn) ∑j∈In R1j Sj′, as E[Bn|Tn] = B and E[Fn|Tn] = F.
Proof of Theorem 7.
Step 1: give a sufficient condition to have Yn ⟶ θ a.s.
We have (cf. proof of theorem 1):
Take now the Frobenius norm of :
Under H3b, all the eigenvalues of I − aB are positive and the spectral norm of I − aB is equal to 1 − aλ. Then:
By lemma 6, it suffices to prove a.s. to conclude
a.s.
Step 2: prove that assumptions H1b and H2b imply respectively and
a.s.
The proof is only given for (Bn), the other one being similar.
Assumption H1b3 implies supn E[‖Bn − B‖2] < ∞. It follows that, for each element Bnkl and Bkl of Bn and B respectively,
. Therefore:
As a.s. by H1b1, we have for each (k, l)
Then a.s.
Step 3: prove now that a.s.
Denote βn = ‖E[Bn|Tn] − B‖ and γn = ‖E[Fn|Tn] − F‖. βn → 0 and γn → 0 a.s. under H1b1 and H2b1. Then: ∀δ > 0, ∀ε > 0, ∃N(δ, ε): ∀n ≥ N(δ, ε),
Let ε be fixed. Denote N0 = N(δ, ε) and, for n > N0,
Remark that Gn is Tn-measurable and, IG denoting the indicator of G,
As the spectral norm ‖I − aB‖ = 1 − aλ, taking the conditional expectation with respect to Tn yields a.s.
As , taking mathematical expectation yields:
As by the choice of δ, this implies
.
This implies by the Kronecker lemma:
In G, IGj = 1 for all j, therefore a.s. Then:
. This is true for every ε > 0. Thus:
Therefore by step 2 and step 1, we conclude that and
a.s. ∎
3.3 Application to linear regression with online standardized data
Denote U = (R − E[R])(R − E[R])′, B = ΓE[U]Γ the correlation matrix of R, λ and λmax respectively the smallest and the largest eigenvalue of B, b1 = E[‖ΓUΓ − B‖2], F = ΓE[(R − E[R])(S − E[S])′]Γ1.
Corollary 8 Suppose there is no affine relation between the components of R and the moments of order 4 of (R,S) exist. Suppose H3b1 holds:
(H3b1) .
Then Yn converges to θ = B−1F a.s.
This process was tested on several datasets and some results are given in section 5 (process S21 for mn = 1 and S22 for mn = 10).
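The averaging scheme can be sketched in the same way (again our own illustrative code for q = 1, not the authors' implementation; the constant step-size a = 1/p is the simple choice discussed for standardized data in section 5, since the correlation matrix of R has trace p):

```python
import numpy as np

def averaged_constant_step(stream, p, a, m=10, n_steps=2000):
    """Constant-step-size process on online-standardized data, with averaging:
    Y_n is the mean of the iterates X_1, ..., X_n (Polyak averaging)."""
    R, S = stream(100)                         # preliminary moment estimates
    count = 100
    mu_R, mu_S = R.mean(axis=0), S.mean()
    sq_R, sq_S = (R ** 2).mean(axis=0), (S ** 2).mean()
    X, Y = np.zeros(p), np.zeros(p)
    for n in range(1, n_steps + 1):
        sd_R = np.sqrt(np.maximum(sq_R - mu_R ** 2, 1e-12))
        sd_S = np.sqrt(max(sq_S - mu_S ** 2, 1e-12))
        R, S = stream(m)
        Rc, Sc = (R - mu_R) / sd_R, (S - mu_S) / sd_S
        X = X - a * Rc.T @ (Rc @ X - Sc) / m   # X_{n+1} = X_n - a (B_n X_n - F_n)
        Y = Y + (X - Y) / n                    # running average of the iterates
        w = m / (count + m)                    # update the running moments
        mu_R, mu_S = (1 - w) * mu_R + w * R.mean(axis=0), (1 - w) * mu_S + w * S.mean()
        sq_R = (1 - w) * sq_R + w * (R ** 2).mean(axis=0)
        sq_S = (1 - w) * sq_S + w * (S ** 2).mean()
        count += m
    return Y

# same synthetic stream as before (illustrative values)
rng = np.random.default_rng(2)
sigma, theta = np.array([2.0, 1.0, 5.0]), np.array([1.0, -2.0, 0.5])

def stream(k):
    R = rng.normal(size=(k, 3)) * sigma
    return R, R @ theta + rng.normal(size=k)

Y = averaged_constant_step(stream, 3, a=1.0 / 3)
theta_c = sigma * theta / np.sqrt(np.sum((sigma * theta) ** 2) + 1.0)
```

Averaging damps the stationary fluctuations that the constant-step iterate X_n keeps around θc, so Y_n settles close to the target.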
Proof of Corollary 8.
Step 1: introduction.
Using the decomposition of E[Bn|Tn] − B established in the proof of corollary 4, as R̄Mn−1 − E[R] ⟶ 0 and ΓMn−1 ⟶ Γ a.s., it is obvious that E[Bn|Tn] − B ⟶ 0 a.s. Likewise E[Fn|Tn] − F ⟶ 0 a.s. Thus assumptions H1b1 and H2b1 are verified.
Suppose that Yn does not converge to θ almost surely.
Then there exists a set of probability ε1 > 0 in which Yn does not converge to θ.
Denote , j = 1,…,p.
As R̄Mn−1 − E[R] ⟶ 0, σjMn−1 − σj ⟶ 0, j = 1,…,p, and ΓMn−1 − Γ ⟶ 0 almost surely, there exists a set G of probability greater than 1 − ε1/2 in which these sequences of random variables converge uniformly.
Step 2: prove that .
By step 2 of the proof of lemma 5, we have for n > N:
As in G, converges uniformly to σj for j = 1,…,p, there exists c > 0 such that
Then there exists d > 0 such that
Therefore .
Step 3: prove that assumption H1b2 is verified in G.
Using the decomposition of E[Bn|Tn] − B given in step 1 of the proof of corollary 4, with Rc = R − E[R] and yields a.s.:
As in G, ΓMn−1 − Γ and converge uniformly to 0, E[Bn|Tn] − B converges uniformly to 0. Moreover there exists c1 > 0 such that
By the proof of lemma 5: ; then
.
By step 2: .
Then: .
As E[Bn|Tn] − B converges uniformly to 0 on G, we obtain:
Thus assumption H1b2 is verified in G.
Step 4: prove that assumption H1b3 is verified in G.
Denote Rc = R − E[R], ,
. Consider the decomposition:
As random variables , j ∈ In, are independent of Tn, as ΓMn−1 and
are Tn-measurable and converge uniformly respectively to Γ and 0 on G, E[‖αn‖2 IG|Tn] converges uniformly to 0. Then, for δ > 0, there exists N1 such that for n > N1, E[‖αn‖2 IG|Tn] ≤ δ a.s.
Moreover, denoting U = RcRc′ and , we have, as the random variables Uj form an i.i.d. sample of U:
Thus assumption H1b3 is verified in G.
As and
almost surely, it can be proved likewise that there exist a set H of probability greater than 1 − ε1/2
and d > 0 such that E[‖Fn − F‖2 IH|Tn] ≤ d a.s. Thus assumption H2b2 is verified in H.
Step 5: conclusion.
As ,
.
Thus assumption H3b is verified.
Applying theorem 7 implies that Yn converges to θ almost surely in H ∩ G.
Therefore P(Yn ⟶ θ) ≥ P(H ∩ G) > 1 − ε1.
This is in contradiction with P(Yn ⟶ θ) ≤ 1 − ε1. Thus Yn converges to θ a.s. ∎
4 Convergence of a process with a variable or constant step-size and use of all observations until the current step
In this section, the convergence of the process (Xn, n ≥ 1) in ℝ^{p×q} recursively defined by
Xn+1 = Xn − an (Bn Xn − Fn)
and its application to sequential linear regression are studied.
4.1 Theorem
Make the following assumptions
(H1c) There exists a positive definite symmetrical matrix B such that Bn ⟶ B a.s.
(H2c) There exists a matrix F such that Fn ⟶ F a.s.
(H3c) λmax denoting the largest eigenvalue of B, an → 0 and ∑n an = ∞, or an = a constant with 0 < a λmax < 1.
Theorem 9 Suppose H1c, H2c and H3c hold. Then Xn converges to B−1F a.s.
Proof of Theorem 9.
Denote θ = B−1F and Zn = (Bn − B)θ − (Fn − F). Then:
Xn+1 − θ = (I − an Bn)(Xn − θ) − an Zn.
Let ω be fixed belonging to the intersection of the convergence sets {Bn ⟶ B} and {Fn ⟶ F}. The writing of ω is omitted in the following.
Denote ‖A‖ the spectral norm of a matrix A and λ the smallest eigenvalue of B.
In the case an depending on n, as an ⟶ 0, we can suppose without loss of generality an λmax < 1 for all n. Then all the eigenvalues of I − anB are positive and ‖I − anB‖ = 1 − anλ.
Let 0 < ε < λ. As Bn − B ⟶ 0, we obtain for n sufficiently large:
As Zn ⟶ 0, applying lemma 6 yields ‖Xn − θ‖ ⟶ 0.
Therefore Xn ⟶ B−1F a.s. ∎
4.2 Application to linear regression with online standardized data
Let (mn, n ≥ 1) be a sequence of positive integers. Denote for n ≥ 1, Mn = m1 + ⋯ + mn, M0 = 0 and In = {Mn−1 + 1,…,Mn}.
As ((Rn, Sn), n ≥ 1) is an i.i.d. sample of (R, S), assumptions H1c and H2c are obviously verified with B = ΓE[(R − E[R])(R − E[R])′]Γ and F = ΓE[(R − E[R])(S − E[S])′]Γ1 for
Bn = ΓMn [(1/Mn) ∑j=1..Mn (Rj − R̄Mn)(Rj − R̄Mn)′] ΓMn, Fn = ΓMn [(1/Mn) ∑j=1..Mn (Rj − R̄Mn)(Sj − S̄Mn)′] Γ1Mn.
Then:
Corollary 10 Suppose there is no affine relation between the components of R and the moments of order 4 of (R, S) exist. Suppose H3c holds. Then Xn converges to B−1F a.s.
Remark 3 B is the correlation matrix of R of dimension p. Then λmax < Trace(B) = p. In the case of a constant step-size a, it suffices to take a = 1/p to verify H3c.
Remark 4 In the definition of Bn and Fn, the Rj and the Sj are not directly pseudo-centered with respect to R̄Mn and S̄Mn respectively. Another equivalent definition of Bn and Fn can be used. It consists of replacing Rj by Rj − m, R̄Mn by R̄Mn − m, Sj by Sj − m1, S̄Mn by S̄Mn − m1, m and m1 being respectively an estimation of E[R] and E[S] computed in a preliminary phase with a small number of observations. For example, at step n, (Rj − m) − (R̄Mn − m) is computed instead of Rj − R̄Mn. This limits the risk of numerical explosion.
This process was tested on several datasets and some results are given in section 5 (with a variable step-size: process S13 for mn = 1 and S14 for mn = 10; with a constant step-size: process S31 for mn = 1 and S32 for mn = 10).
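A sketch of this “dynamic batch” variant for q = 1 (our own illustrative code, not the authors' implementation): the means, variances and covariances of all observations seen so far are pooled recursively, so Bn and Fn are correlation-scale estimates based on the whole past without storing it:

```python
import numpy as np

def dynamic_batch_process(stream, p, a, m=10, n_steps=500, warmup=100):
    """Section-4-style sketch: B_n and F_n are built from ALL observations seen
    so far, via recursively pooled means/covariances (no data is stored)."""
    R, S = stream(warmup)
    count = warmup
    mu_R, mu_S = R.mean(axis=0), S.mean()
    C_RR = np.cov(R, rowvar=False, bias=True)         # covariance of R so far
    C_RS = (R - mu_R).T @ (S - mu_S) / count          # cross-covariance with S
    v_S = S.var()
    X = np.zeros(p)
    for _ in range(n_steps):
        R, S = stream(m)                              # m new observations
        mB, sB = R.mean(axis=0), S.mean()
        w = m / (count + m)
        dR, dS = mB - mu_R, sB - mu_S
        # pooled-covariance update: C = (1-w) C_old + w C_batch + w(1-w) d d'
        C_RR = (1 - w) * C_RR + w * np.cov(R, rowvar=False, bias=True) \
               + w * (1 - w) * np.outer(dR, dR)
        C_RS = (1 - w) * C_RS + w * ((R - mB).T @ (S - sB) / m) \
               + w * (1 - w) * dR * dS
        v_S = (1 - w) * v_S + w * S.var() + w * (1 - w) * dS ** 2
        mu_R, mu_S = mu_R + w * dR, mu_S + w * dS
        count += m
        sd = np.sqrt(np.diag(C_RR))                   # current std deviations
        Bn = C_RR / np.outer(sd, sd)                  # correlation matrix estimate
        Fn = C_RS / (sd * np.sqrt(v_S))
        X = X - a * (Bn @ X - Fn)                     # X_{n+1} = X_n - a (B_n X_n - F_n)
    return X

# same synthetic stream as before (illustrative values); constant step of order 1/p
rng = np.random.default_rng(3)
sigma, theta = np.array([2.0, 1.0, 5.0]), np.array([1.0, -2.0, 0.5])

def stream(k):
    R = rng.normal(size=(k, 3)) * sigma
    return R, R @ theta + rng.normal(size=k)

X = dynamic_batch_process(stream, 3, a=1.0 / 3)
theta_c = sigma * theta / np.sqrt(np.sum((sigma * theta) ** 2) + 1.0)
```

Because Bn and Fn average over all past observations, their variance shrinks as the stream progresses, which is the noise-reduction effect the text attributes to this process.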
5 Experiments
The three previously-defined processes of stochastic approximation with online standardized data were compared with the classical stochastic approximation and averaged stochastic approximation (or averaged stochastic gradient descent) processes with constant step-size (denoted ASGD) studied in [5] and [6]. A description of the methods along with abbreviations and parameters used is given in Table 1.
With the variable S set at dimension 1, 11 datasets were considered, some of which are available in free access on the Internet, while others were derived from the EPHESUS study [15]: 6 in regression (continuous dependent variable) and 5 in linear discriminant analysis (binary dependent variable). All datasets used in our experiments are presented in detail in Table 2, along with their download links. An a priori selection of variables was performed on each dataset using a stepwise procedure based on Fisher’s test with p-to-enter and p-to-remove fixed at 5 percent.
Let D = {(ri, si), i = 1, 2,…,N} be the set of data in ℝ^p × ℝ. Assuming that it represents the set of realizations of a random vector (R, S) uniformly distributed in D, minimizing E[(S − θ′ R − η)2] is equivalent to minimizing (1/N) ∑i=1..N (si − θ′ri − η)2. One element of D (or several according to the process) is randomly drawn at each step to iterate the process.
To compare the methods, two different studies were performed: one by setting the total number of observations used, the other by setting the computing time.
The choice of step-size, the initialization for each method and the convergence criterion used are respectively presented and commented below.
Choice of step-size
In all methods of stochastic approximation, a suitable choice of step-size is often crucial for obtaining good performance of the process. If the step-size is too small, the convergence rate will be slower. Conversely, if the step-size is too large, a numerical explosion phenomenon may occur during the first iterations.
For the processes with a variable step-size (processes C1 to C4 and S11 to S14), we chose to use an of the following type:
The constant was fixed, as suggested by Xu [16] in the case of stochastic approximation in linear regression, and b = 1. The results obtained for the choice
are presented although the latter does not correspond to the best choice for a classical method.
For the ASGD method (A1, A2), two different constant step-sizes a as used in [6] were tested: and
, T2 denoting the trace of E[RR′]. Note that this choice of constant step-size assumes knowing a priori the dataset and is not suitable for a data stream.
For the methods with standardization and a constant step-size a (S21, S22, S31, S32), a = 1/p was chosen: the matrix E[RR′] is then the correlation matrix of R, whose trace is equal to p, so that this choice corresponds to that of [6].
Initialization of processes
All processes (Xn) were initialized by the null vector. For the processes with standardization, a small number of observations (n = 1000) were taken into account in order to calculate an initial estimate of the means and standard deviations.
Convergence criterion
The “theoretical vector” θ1 is defined as that obtained by the least square method in D, i.e. the minimizer of the empirical least square criterion. Let θ̂1n denote the estimator of θ1 obtained by stochastic approximation after n iterations.
In the case of a process (Xn) with standardized data, which yields an estimation of the vector denoted θc in section 1, since θ = Γθc(Γ1)−1 and η = E[S] − θ′ E[R], we can define:
To judge the convergence of the method, the cosine of the angle formed by the exact θ1 and its estimate was used as criterion.
Other criteria, such as or
, f being the loss function, were also tested, although the results are not presented in this article.
5.1 Study for a fixed total number of observations used
For each multiple of N observations used by the algorithm (N being the size of D), up to a maximum of 100N observations, the criterion value associated with each method and each dataset was recorded. The results obtained after using 10N observations are provided in Table 3.
As can be seen in Table 3, a numerical explosion occurred in most datasets using the classical methods with raw data and a variable step-size (C1 to C4). As noted in Table 2, these datasets had a high T2 = Tr(E[RR′]). The corresponding methods S11 to S14, using the same variable step-size but with online standardized data, quickly converged in most cases. However, classical methods with raw data can yield good results for a suitable choice of step-size, as demonstrated by the results obtained for the POLY dataset in Fig 1. The numerical explosion can arise from a step-size that is too high when n is small. This phenomenon can be avoided if the step-size is reduced, although if the latter is too small, the convergence rate will be slowed. Hence, the right balance must be found between step-size and convergence rate. Furthermore, the choice of this step-size generally depends on the dataset, which is not known a priori in the case of a data stream. In conclusion, methods with standardized data appear to be more robust to the choice of step-size.
The ASGD method (A1 with constant step-size and A2 with
) did not yield good results except for the RINGNORM and TWONORM datasets which were obtained by simulation (note that all methods functioned very well for these two datasets). Of note, A1 exploded for the QUANTUM dataset containing 1068 observations (2.1%) whose L2 norm was fivefold greater than the average norm (Table 2). The corresponding method S21 with online standardized data yielded several numerical explosions with the
step-size, however these explosions disappeared when using a smaller step-size (see Fig 1). Of note, it is assumed in corollary 8 that
; in the case of
, only
is certain.
Finally, for methods S31 and S32 with standardized data, the use of all observations until the current step and the very simple choice of the constant step-size uniformly yielded good results.
Thereafter, for each fixed number of observations used and for each dataset, the 14 methods were ranked from the best (the highest cosine) to the worst (the lowest cosine), each being assigned a rank from 1 to 14 respectively, after which the mean rank over all 11 datasets was calculated for each method. A total of 100 mean-rank values were calculated for a number of observations used varying from N to 100N. The graph depicting the change in mean rank as a function of the number of observations used and the boxplot of the mean rank are shown in Fig 2.
Overall, for these 11 datasets, a method with standardized data, a constant step-size and use of all observations until the current step (S31, S32) represented the best method when the total number of observations used was fixed.
5.2 Study for a fixed processing time
For every second up to a maximum of 2 minutes, the criterion value associated with each method and each dataset was recorded. The results obtained after a processing time of 1 minute are provided in Table 4.
The same conclusions can be drawn as those described in section 5.1 for the classical methods and the ASGD method. The methods with online standardized data typically fared better.
As in the previous study in section 5.1, the 14 methods were ranked from the best to the worst on the basis of the mean rank for a fixed processing time. The graph depicting the change in mean rank based on the processing time varying from 1 second to 2 minutes as well as the boxplot of the mean rank are shown in Fig 3.
As can be seen, these methods with online standardized data using more than one observation per step yielded the best results (S32, S22). One explanation may be that the total number of observations used in a fixed processing time is higher when several observations are used per step rather than one observation per step. This can be verified in Table 5 in which the total number of observations used per second for each method and for each dataset during a processing time of 2 minutes is given. Of note, the number of observations used per second in a process with standardized data and one observation per step (S11, S13, S21, S31) was found to be generally lower than in a process with raw data and one observation per step (C1, C3, A1, A2), since a method with standardization requires the recursive estimation of means and variances at each step.
Of note, for the ADULT dataset with a large number of parameters selected (95), the only method yielding sufficiently adequate results after a processing time of one minute was S32, and methods S31 and S32 when 10N observations were used.
6 Conclusion
In the present study, three processes with online standardized data were defined and their a.s. convergence proven.
A stochastic approximation method with standardized data appears to be advantageous compared to a method with raw data. First, it is easier to choose the step-size: for processes S31 and S32, for example, the definition of a constant step-size only requires knowing the number of parameters p. Secondly, the standardization usually avoids the numerical explosions often observed, in the examples given, with a classical method.
The use of all observations until the current step can reduce the influence of outliers and increase the convergence rate of a process. Moreover, this approach is particularly adapted to the case of a data stream.
Finally, among all processes tested on 11 different datasets (linear regression or linear discriminant analysis), the best was a method using standardization, a constant step-size equal to 1/p and all observations until the current step; the use of several new observations at each step further improved the convergence rate.
References
- 1. Monnez JM. Le processus d’approximation stochastique de Robbins-Monro: résultats théoriques; estimation séquentielle d’une espérance conditionnelle. Statistique et Analyse des Données. 1979;4(2):11–29.
- 2. Ljung L. Analysis of stochastic gradient algorithms for linear regression problems. IEEE Transactions on Information Theory. 1984;30(2):151–160.
- 3. Polyak BT. New method of stochastic approximation type. Automation and remote control. 1990;51(7):937–946.
- 4. Polyak BT, Juditsky AB. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization. 1992;30(4):838–855.
- 5. Györfi L, Walk H. On the averaged stochastic approximation for linear regression. SIAM Journal on Control and Optimization. 1996;34(1):31–61.
- 6. Bach F, Moulines E. Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n). Advances in Neural Information Processing Systems. 2013;773–781.
- 7. Bottou L, Le Cun Y. On-line learning for very large data sets. Applied Stochastic Models in Business and Industry. 2005;21(2):137–151.
- 8. Bottou L, Curtis FE, Nocedal J. Optimization Methods for Large-Scale Machine Learning. arXiv:1606.04838v2. 2017.
- 9. Johnson R, Zhang T. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction. Advances in Neural Information Processing Systems. 2013;315–323.
- 10. Duchi J, Hazan E, Singer Y. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research. 2011;12:2121–2159.
- 11. Pascanu R, Mikolov T, Bengio Y. Understanding the exploding gradient problem. arXiv:1211.5063v1. 2012.
- 12. Robbins H, Siegmund D. A convergence theorem for nonnegative almost supermartingales and some applications. Optimizing Methods in Statistics, Rustagi JS (ed.), Academic Press, New York. 1971;233–257.
- 13. Schmetterer L. Multidimensional stochastic approximation. Multivariate Analysis II, Proc. 2nd Int. Symp., Dayton, Ohio, Academic Press. 1969;443–460.
- 14. Venter JH. On Dvoretzky stochastic approximation theorems. The Annals of Mathematical Statistics. 1966;37:1534–1544.
- 15. Pitt B., Remme W., Zannad F. et al. Eplerenone, a selective aldosterone blocker, in patients with left ventricular dysfunction after myocardial infarction. New England Journal of Medicine. 2003;348(14):1309–1321. pmid:12668699
- 16. Xu W. Towards optimal one pass large scale learning with averaged stochastic gradient descent. arXiv:1107.2490v2. 2011.