
Financial time series forecasting using twin support vector regression

  • Deepak Gupta,

    Roles Conceptualization, Data curation, Investigation, Writing – original draft

    Affiliation Department of Electronics and Computer Engineering, National Institute of Technology, Arunachal Pradesh, India

  • Mahardhika Pratama ,

    Roles Funding acquisition, Validation, Writing – review & editing

    mpratama@ntu.edu.sg

    Affiliation School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore

  • Zhenyuan Ma,

    Roles Writing – review & editing

    Affiliation School of Mathematics and System Sciences, Guangdong Polytechnic Normal University, Guangzhou, China

  • Jun Li,

    Roles Supervision, Writing – review & editing

    Affiliation Centre for Artificial Intelligence, School of Software, Faculty of Engineering and Technology, University of Technology Sydney, Sydney, Australia

  • Mukesh Prasad

    Roles Formal analysis, Validation, Visualization, Writing – review & editing

    Affiliation Centre for Artificial Intelligence, School of Software, Faculty of Engineering and Technology, University of Technology Sydney, Sydney, Australia

Abstract

Financial time series forecasting is a crucial measure for improving and making more robust financial decisions throughout the world. Noisy data and non-stationarity are two key challenges in financial time series prediction. This paper proposes twin support vector regression for financial time series prediction to deal with noisy and nonstationary data. Various financial time series datasets across a wide range of sectors, such as information technology, banking, and oil and petroleum, together with stock market index datasets, are used for the numerical experiments. Further, to test the accuracy of the time series predictions, the root mean squared error and the standard deviation are computed, which clearly indicate the usefulness and applicability of the proposed method. Twin support vector regression is also computationally faster than standard support vector regression on the 44 datasets considered.

Introduction

For the last two decades, support vector machines (SVMs) have been a computationally powerful kernel-based tool in machine learning for various classification problems, such as pattern recognition, as well as regression and function approximation problems [1]. SVM has advantages over other methods such as artificial neural networks (ANNs), which focus on minimizing the empirical risk in the training phase, whereas SVM is built on the structural risk minimization principle [1], which minimizes an upper bound on the generalization error. Another advantage of SVM is that it forms a convex optimization problem, a single large quadratic programming problem (QPP), which yields a unique global solution. SVM has been applied in many fields to solve various well-known real-world problems, ranging from image classification [2], remote sensing image classification [3], text categorization [4], biomedicine [5, 6] and time series prediction [7, 8] to business prediction [9], which clearly justifies its popularity.

To obtain an optimal regressor function for a given set of training data, support vector regression (SVR) was introduced by Vapnik [1], in which the training data points are handled either in the input space or in a higher dimensional feature space via a kernel mapping. SVR has the advantage of better generalization performance than other regression methods. However, standard SVM has the drawback of optimizing a computationally expensive cost function, with a training cost of O(m^3) for large-scale datasets, where m is the number of training samples. Due to this high training cost, it is not easy to find the optimal parameters from a large set of candidate parameters. To address this issue, different variants of SVM have been proposed, such as chunking and decomposition methods [10, 11], the exact SVM training algorithm SMO [12], approximate SVM training algorithms [13–15] and LS-SVM [16, 17].

Mangasarian and Wild [18] suggested a new method for binary classification, the generalized eigenvalue proximal support vector machine (GEPSVM), based on two nonparallel hyperplanes. To find the nonparallel hyperplanes, GEPSVM solves two eigenvalue problems whose size depends on the input space dimension. GEPSVM outperforms the standard SVM in terms of computational speed and accuracy. In the spirit of GEPSVM, the twin support vector machine (TWSVM) was proposed [19] for binary classification problems; it constructs two nonparallel planes such that each plane is closer to the data points of one of the two classes and as far as possible from the data points of the other class. In TWSVM, two QPPs of smaller size are solved to obtain the two nonparallel hyperplanes instead of one QPP of large size. This strategy gives TWSVM good generalization ability, making it better than GEPSVM and approximately four times faster than the standard SVM. The main difference between GEPSVM and TWSVM is that GEPSVM solves two generalized eigenvalue problems to obtain the hyperplanes, whereas TWSVM solves two related SVM-type problems. Peng [20] proposed a twin support vector regression (TSVR) technique based on TWSVM, in which the unknown regressor function is generated by constructing nonparallel ε-insensitive up- and down-bound functions. In this case, a pair of smaller-sized QPPs is solved, unlike the single large QPP solved in the case of SVR. For the financial forecasting problem itself, various machine learning methods have been applied, such as artificial neural networks [21], statistical learning [22], fuzzy logic [23–26], neural networks [27–29], evolutionary algorithms [30] and hidden Markov models [31]. Fama and French [32] argued that high expected returns driven by future price increases must be offset by a lower current price, so that time-varying expected returns generate temporary components in prices. Lewellen [33] proposed an approach for testing the predictive power of aggregate financial ratios using predictive regressions that accounts for small-sample biases. Goh et al. [34] examined the relationship between U.S. and Chinese economic variables and investigated which country's economic variables better predict the Chinese stock market. In 2017, Shen et al. [35] presented a novel method for predicting Chinese stock returns for different asset values using the Baidu index. Similarly, Li et al. (2018) [36] found that idiosyncratic volatility increases significantly after internet stock message boards are established.

The prediction of stock market indices has been a focus of interest since the day the stock market came into existence. Researchers have several goals and motivations for trying to predict stock market prices. One of the motivations could be to make life easier and more luxurious. Many investment professionals, along with researchers, are trying to find a superior system that will yield high returns in terms of financial gain. Considerable work has been performed to predict the behavior of the stock market. Financial time series prediction involves various parameters: (a) the price of the last trade performed during the day, (b) the total number of commodities traded during the day, and (c) the lowest and highest traded prices [37]. Because of these parameters and the nonlinearity and uncertainty involved in financial time series forecasting, this paper proposes TSVR to address these situations. To determine the effectiveness of TSVR on financial time series datasets, this paper first discusses the formulation of TSVR and then reports numerical experiments on various financial datasets. The experimental results of TSVR are compared with the standard SVR formulation in terms of average RMSE and training time.

The remainder of this paper is organized as follows: Sections 2 and 3 discuss the formulations of SVR and TSVR, respectively. Section 4 presents the experimental results of TSVR on different financial time series datasets and the comparison with SVR. Finally, conclusions are drawn in Section 5.

Support vector regression

This section describes the standard formulation of support vector regression (SVR). Assume that a set of training samples is {(x_i, y_i)}, i = 1,2,…,m, where x_i = (x_i1, x_i2,…,x_in)^t ∈ R^n is the input example, y_i ∈ R is the target value, and m is the number of training samples. Let the matrix D ∈ R^(m×n) denote the input examples, where x_i^t is its i-th row, and let y = (y_1,…,y_m)^t be the vector of observed values. The main goal of SVR is to approximate the regression function f(·) in the form

f(x) = x^t w + b, (1)

where the unknowns are the vector w and the scalar b.

Vapnik [1] suggested the formulation of SVR by introducing the ε-insensitive loss function and determining the unknown variables w and b by solving the following QPP:

min_(w,b,ξ1,ξ2) (1/2)‖w‖^2 + C Σ_(i=1)^m (ξ_1i + ξ_2i)

subject to: y_i − (x_i^t w + b) ≤ ε + ξ_1i, (x_i^t w + b) − y_i ≤ ε + ξ_2i and ξ_1i, ξ_2i ≥ 0 for i = 1,…,m, (2)

where ξ1 = (ξ_11,…,ξ_1m)^t and ξ2 = (ξ_21,…,ξ_2m)^t are the slack variables in vector form, and C > 0 and ε > 0 denote the input parameters.

Here, the solution of the above problem is obtained by introducing Lagrange multipliers and solving the dual QPP:

max_(λ1,λ2) −(1/2) Σ_(i,j) (λ_1i − λ_2i)(λ_1j − λ_2j) x_i^t x_j − ε Σ_i (λ_1i + λ_2i) + Σ_i y_i (λ_1i − λ_2i)

subject to: Σ_i (λ_1i − λ_2i) = 0 and 0 ≤ λ_1i, λ_2i ≤ C for i = 1,…,m, (3)

where the Lagrange multipliers λ1 = (λ_11,…,λ_1m)^t and λ2 = (λ_21,…,λ_2m)^t in R^m give the solution of the quadratic problem. The training points with nonzero Lagrange multipliers in Eq (3), known as support vectors, determine the regression function, which is defined for any x ∈ R^n as

f(x) = Σ_i (λ_1i − λ_2i) x_i^t x + b. (4)

For a nonlinear regressor, the input data are mapped to a higher dimensional feature space using a kernel function k(·,·), taken here to be the Gaussian kernel k(x_i, x_j) = exp(−μ‖x_i − x_j‖^2) for i, j = 1,2,…,m, where μ > 0 is a parameter. The nonlinear case is obtained as

max_(λ1,λ2) −(1/2) Σ_(i,j) (λ_1i − λ_2i)(λ_1j − λ_2j) k(x_i, x_j) − ε Σ_i (λ_1i + λ_2i) + Σ_i y_i (λ_1i − λ_2i)

subject to: Σ_i (λ_1i − λ_2i) = 0 and 0 ≤ λ_1i, λ_2i ≤ C for i = 1,…,m. (5)

The nonlinear prediction function f(·) is obtained by finding λ1 and λ2 from the solution of the problem in Eq (5); for any x ∈ R^n,

f(x) = Σ_i (λ_1i − λ_2i) k(x_i, x) + b.
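For concreteness, the following is a minimal sketch of how the dual QPP in Eq (5) can be solved numerically with a Gaussian kernel. It is written in Python with the CVXOPT QP solver rather than the MATLAB/MOSEK setup used in this paper, and the function names, default parameter values, numerical tolerances and the small ridge added to the kernel block are illustrative assumptions, not part of the original formulation.

```python
# Illustrative sketch only (not code from the paper): solving the nonlinear
# epsilon-SVR dual of Eq (5) with a Gaussian kernel via the CVXOPT QP solver.
import numpy as np
from cvxopt import matrix, solvers

solvers.options['show_progress'] = False

def gaussian_kernel(X1, X2, mu):
    # k(x_i, x_j) = exp(-mu * ||x_i - x_j||^2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-mu * d2)

def svr_dual_fit(X, y, C=1.0, eps=0.1, mu=1.0):
    m = len(y)
    K = gaussian_kernel(X, X, mu)
    # Stack the dual variables as alpha = [lambda1; lambda2].
    P = np.block([[K, -K], [-K, K]]) + 1e-8 * np.eye(2 * m)  # small ridge for PSD
    q = np.hstack([eps - y, eps + y])
    # Box constraints 0 <= lambda1, lambda2 <= C ...
    G = np.vstack([-np.eye(2 * m), np.eye(2 * m)])
    h = np.hstack([np.zeros(2 * m), C * np.ones(2 * m)])
    # ... and the equality constraint sum(lambda1 - lambda2) = 0.
    A = np.hstack([np.ones(m), -np.ones(m)]).reshape(1, -1)
    sol = solvers.qp(matrix(P), matrix(q), matrix(G), matrix(h),
                     matrix(A), matrix(np.zeros(1)))
    alpha = np.array(sol['x']).ravel()
    lam1, lam2 = alpha[:m], alpha[m:]
    coef = lam1 - lam2
    # Recover b from a point with 0 < lambda1 < C (KKT: y_i - f(x_i) = eps);
    # falls back to index 0 if no such point exists.
    i = int(np.argmax((lam1 > 1e-6) & (lam1 < C - 1e-6)))
    b = y[i] - eps - K[i] @ coef
    return lambda Xnew: gaussian_kernel(Xnew, X, mu) @ coef + b
```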

Twin support vector regression

To further improve the generalization performance and training time of SVR, a new approach, termed TSVR, was introduced by Peng [20]. TSVR constructs a pair of nonparallel functions such that one determines the ε-insensitive down-bound function f1(x) = x^t w1 + b1 and the other determines the ε-insensitive up-bound function f2(x) = x^t w2 + b2, from which the end regression function is identified. TSVR solves a pair of smaller QPPs, each with m constraints, instead of solving a single large QPP with 2m constraints.

The formulation of TSVR determines the regression function through the following pair of constrained QPPs:

min_(w1,b1,ξ) (1/2)‖y − eε1 − (Dw1 + eb1)‖^2 + C1 e^t ξ
subject to: y − (Dw1 + eb1) ≥ eε1 − ξ, ξ ≥ 0, (6)

min_(w2,b2,η) (1/2)‖y + eε2 − (Dw2 + eb2)‖^2 + C2 e^t η
subject to: (Dw2 + eb2) − y ≥ eε2 − η, η ≥ 0, (7)

where C1, C2 > 0 and ε1, ε2 ≥ 0 denote input parameters, e is the vector of ones of appropriate dimension, and ξ = (ξ_1,…,ξ_m)^t and η = (η_1,…,η_m)^t denote the vectors of slack variables.

To find the solution of the above primal QPPs in Eqs (6) and (7), we convert them into dual form by introducing the Lagrange multipliers λ1 = (λ_11,…,λ_1m)^t, ν1 = (ν_11,…,ν_1m)^t and λ2 = (λ_21,…,λ_2m)^t, ν2 = (ν_21,…,ν_2m)^t. The Lagrangian functions of Eqs (6) and (7) are given by Eqs (8) and (9), respectively. (8) (9)

By applying the KKT conditions to the Lagrangian function in Eq (8), we obtain: (10) (11) (12) (13) (14) (15) Since ν1 ≥ 0, we have (16) Similarly, for the Lagrangian function in Eq (9), we obtain (17) (18) (19) (20) (21) (22) Since ν2 ≥ 0, we have (23)

Combining Eq (10) with Eq (11) and Eq (17) with Eq (18), we obtain (24) (25) Let us define the augmented matrix S = [D e] together with the vectors f = y − eε1, h = y + eε2 and u1 = (w1^t, b1)^t, u2 = (w2^t, b2)^t, (26) and then we have S^t S u1 = S^t (f − λ1), i.e., (27) and S^t S u2 = S^t (h + λ2), i.e., (28). Here, note that S^t S is positive semidefinite; to overcome the situation in which its inverse does not exist, a regularization term σI is introduced so that (S^t S + σI) becomes positive definite, where σ is a very small positive number, such as σ = 1e-7. Thus, we have

u1 = (S^t S + σI)^(-1) S^t (f − λ1), (29)
u2 = (S^t S + σI)^(-1) S^t (h + λ2). (30)

Substituting Eq (29) into the primal Lagrangian function Eq (8) and using Eqs (13) to (16), the dual problem of Eq (6) is obtained as (31). Similarly, substituting Eq (30) into the primal Lagrangian function Eq (9) and using Eqs (20) to (23), the dual problem of Eq (7) is obtained as (32). The vectors λ1 and λ2 are calculated by solving the dual QPPs in Eqs (31) and (32). Finally, for any data point x ∈ R^n, the end regressor f(·) is given by:

f(x) = (1/2)(f1(x) + f2(x)) = (1/2) x^t (w1 + w2) + (1/2)(b1 + b2). (33)

To extend TSVR to the nonlinear case, TSVR finds the regression function by solving the primal problems (34) and (35), where K(D, D^t) is the kernel matrix of order m whose (i, j)-th element is given by K(D, D^t)_ij = k(x_i, x_j) ∈ R, and k(x_i, x_j) is a nonlinear kernel function. For a vector x ∈ R^n, the row vector K(x^t, D^t) is defined in a similar manner. The dual formulations of the QPPs in Eqs (34) and (35) are given by (36) and (37), respectively, where T = [K(D, D^t) e]. After solving Eqs (36) and (37), we obtain the values of u1 and u2 as (38) and (39). Finally, for any data sample x ∈ R^n, the end regression function f(·) is given by:

f(x) = (1/2)(f1(x) + f2(x)) = (1/2) K(x^t, D^t)(w1 + w2) + (1/2)(b1 + b2). (40)
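As a minimal illustration of the linear TSVR procedure described above (a sketch in Python with CVXOPT, not the authors' MATLAB/MOSEK implementation), the following code forms S = [D e], solves the two box-constrained dual QPPs, recovers u1 and u2 as in Eqs (29) and (30), and averages the two bound functions as in Eq (33); the function names and default parameter values are assumptions made for illustration.

```python
# Illustrative sketch only (not the authors' implementation): linear TSVR in the
# spirit of Eqs (6)-(33).
import numpy as np
from cvxopt import matrix, solvers

solvers.options['show_progress'] = False

def tsvr_fit(D, y, C1=1.0, C2=1.0, eps1=0.1, eps2=0.1, sigma=1e-7):
    m, n = D.shape
    e = np.ones(m)
    S = np.hstack([D, e[:, None]])                       # S = [D  e]
    StS_inv = np.linalg.inv(S.T @ S + sigma * np.eye(n + 1))
    H = S @ StS_inv @ S.T                                # S (S'S + sigma I)^(-1) S'
    f, h = y - eps1 * e, y + eps2 * e                    # shifted targets
    G = np.vstack([-np.eye(m), np.eye(m)])               # encodes 0 <= lambda <= C e

    def box_qp(q, C):
        sol = solvers.qp(matrix(H + sigma * np.eye(m)), matrix(q),
                         matrix(G), matrix(np.hstack([np.zeros(m), C * e])))
        return np.array(sol['x']).ravel()

    lam1 = box_qp(f - H @ f, C1)                         # dual of the down-bound QPP
    lam2 = box_qp(H @ h - h, C2)                         # dual of the up-bound QPP
    u1 = StS_inv @ S.T @ (f - lam1)                      # [w1; b1], cf. Eq (29)
    u2 = StS_inv @ S.T @ (h + lam2)                      # [w2; b2], cf. Eq (30)
    w, b = 0.5 * (u1[:n] + u2[:n]), 0.5 * (u1[n] + u2[n])
    return lambda X: X @ w + b                           # f(x) = (f1(x) + f2(x)) / 2
```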

Numerical experiments

In this section, various numerical experiments are conducted to test the generalization performance and the computational efficiency of the TSVR on standard datasets and compared with SVR. This paper considered 44 benchmark datasets and divided them into two groups. The first group has a combination of 24 individual company stocks, and the second group has 20 stock market index datasets from the Yahoo financial website, i.e., http://finance.yahoo.com [38]. Individual company stock datasets are AT&T Inc. (T), Infosys Limited (INFY), Apple, Inc. (AAPL), Facebook, Inc. (FB), Cisco Systems, Inc. (CSCO), Alphabet, Inc. (Goog), Citigroup, Inc. (C), HSBC Holding Plc (HSBC), ICICI Bank, Ltd. (IBN), Royal Bank of Canada (RY), Royal Bank of Scotland (RBS), State Bank of India (SBIN.NS), Punjab National Bank (PNB.NS), International Business Machines Corporation (IBM), Microsoft Corporation (MSFT), Tata Consultancy Services Limited (TCS.BO), Oracle Corporation (ORCL), Bharat Petroleum Corporation Limited (BPCL.NS), Oil India Limited (OIL.NS), Oil and Natural Gas Corporation (ONGC.NS), Royal Dutch Shell Plc (RDS-B), Exxon Mobil Corporation (XOM), Sinopec Shanghai Petrochemical Company Limited (SHI), Hindustan Petroleum Corporation Limited (HINDPETRO.NS) and the stock market index datasets are S&P BSE SENSEX (BSESN), NIFTY 50 (NSEI), CAC 40 (FCHI), ESTX 50 PR.EUR (STOXX50E), KOSPI Composite (KS11), IBEX 35 (IBEX), Nikkei 225 (N225), AEX (AEX), DAX PERFORMANCE (GDAXI), IBOVESPA (BVSP), S&P/TSX Composite (GSPTSE), IPC MEXICO (MXX), SMI PR (SSMI), Dow Jones Industrial Average (DJI), HANG SENG INDEX (HSI), TSEC weighted index (TWII), NASDAQ Composite (IXIC), BEL 20 (BFX), Austrian Traded Index in EUR (ATX), Jakarta Composite Index (JKSE). The details of these datasets are listed in Table 1 and Table 2, respectively.

Table 1. Individual stock financial details with their stock exchanges, types and listing abbreviations.

https://doi.org/10.1371/journal.pone.0211402.t001

Table 2. Financial stock market index details with their stock exchanges, types and listing abbreviations.

https://doi.org/10.1371/journal.pone.0211402.t002

All computations are carried out on a PC with the Windows 7 OS (32 bit), a 3.10 GHz Intel Core i5-2400 processor and 4 GB of RAM, under the MATLAB R2012b environment. This paper uses the MOSEK optimization toolbox, available from http://www.mosek.com [39], to solve the quadratic programming problems in the SVR and TSVR formulations.

All the datasets are normalized so that each feature value lies in [0, 1]:

d̄_ij = (d_ij − d_j^min) / (d_j^max − d_j^min),

where d̄_ij is the normalized value corresponding to d_ij, and d_j^max and d_j^min denote the maximum and minimum values of the j-th feature, respectively. To measure the prediction performance, this paper considers the root mean square error (RMSE), given by

RMSE = √( (1/P) Σ_(i=1)^P (y_i − ŷ_i)^2 ),

where P denotes the total number of test samples and ŷ_i is the predicted value corresponding to the observed value y_i. To construct a nonlinear regressor, we use the Gaussian kernel k(x, z) = exp(−μ‖x − z‖^2), where x, z ∈ R^n and μ > 0. The optimal parameter values of C = C1 = C2 are selected from the set {10^−5,…,10^5} and μ from the set {2^−5,…,2^5} using 10-fold cross validation on the training data. Using the optimal values, the whole dataset is divided into 10 equal parts at random, out of which one part is used for testing and the remaining parts for training to obtain the test accuracy. Finally, to measure the prediction performance, the average RMSE over the test folds is considered.
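A minimal sketch of this evaluation protocol is given below (Python, not the paper's MATLAB code): min-max normalization, RMSE, and a 10-fold cross-validated grid search over the C and μ sets quoted above. The scikit-learn KFold helper and the fit_factory convention are assumptions introduced for illustration.

```python
# Illustrative sketch only: min-max normalization, RMSE and a 10-fold
# cross-validated grid search over C in {10^-5,...,10^5} and mu in {2^-5,...,2^5}.
# fit_factory(C, mu) is assumed to return a function (X_train, y_train) -> predict,
# e.g. one built around the tsvr_fit or svr_dual_fit sketches shown earlier.
import numpy as np
from itertools import product
from sklearn.model_selection import KFold

def minmax(D):
    lo, hi = D.min(axis=0), D.max(axis=0)
    return (D - lo) / np.where(hi > lo, hi - lo, 1.0)    # each feature in [0, 1]

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def cv_rmse(fit, X, y, folds=10, seed=0):
    scores = []
    for tr, te in KFold(folds, shuffle=True, random_state=seed).split(X):
        predict = fit(X[tr], y[tr])
        scores.append(rmse(y[te], predict(X[te])))
    return np.mean(scores), np.std(scores)               # average RMSE and its std

def grid_search(X, y, fit_factory,
                Cs=[10.0 ** k for k in range(-5, 6)],
                mus=[2.0 ** k for k in range(-5, 6)]):
    # Return the (C, mu) pair with the lowest average 10-fold RMSE.
    return min(((cv_rmse(fit_factory(C, mu), X, y), (C, mu))
                for C, mu in product(Cs, mus)))[1]
```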

Individual company stock datasets

The individual company stock datasets SBIN.NS, PNB.NS, BPCL.NS, OIL.NS, TCS.BO, HINDPETRO.NS and ONGC.NS consist of 735 closing prices, while T, INFY, AAPL, FB, CSCO, Goog, C, HSBC, IBN, RY, RBS, IBM, MSFT, ORCL, RDS-B, XOM and SHI have a total of 751 closing prices, covering the period from 01-01-2015 to 31-12-2017. The current value is predicted from the previous five closing prices.
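As an illustration of how such a dataset can be arranged (this helper is not taken from the paper), a sliding-window construction that predicts the current close from the previous five closing prices might look as follows.

```python
# Illustrative sketch only: arrange a closing-price series into the five-lag
# regression dataset used here (predict today's close from the previous five).
import numpy as np

def make_lagged_dataset(close, lags=5):
    close = np.asarray(close, dtype=float)
    # Row i holds [close[i], ..., close[i+lags-1]]; the target is close[i+lags].
    X = np.column_stack([close[k:len(close) - lags + k] for k in range(lags)])
    y = close[lags:]
    return X, y

prices = np.random.rand(751)         # placeholder for e.g. 751 AAPL closing prices
X, y = make_lagged_dataset(prices)   # X.shape == (746, 5), y.shape == (746,)
```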

Linear case.

In the linear case, Table 3 shows the average RMSE with standard deviation for the optimal parameter values and the training time in seconds. Fig 1 shows the absolute prediction error of SVR and TSVR for the linear kernel on the SHI dataset. Fig 2 shows the actual and predicted values of SVR and TSVR for the linear kernel on the SHI dataset. To verify the performance of both algorithms statistically on the 24 individual stock datasets, we perform a simple and safe nonparametric test, the Friedman test with its corresponding post hoc test [40]. For this purpose, the average ranks over the 24 datasets for the linear case are tabulated in Table 4. The Friedman statistic [40] can be computed under the null hypothesis from the average ranks shown in Table 4.

Fig 1. Prediction error plots using a linear kernel on the SHI dataset.

https://doi.org/10.1371/journal.pone.0211402.g001

Fig 2. Predicted and actual values using a linear kernel on the SHI dataset.

https://doi.org/10.1371/journal.pone.0211402.g002

Table 3. Performance comparison of TSVR with SVR on individual companies’ stock datasets using a linear kernel.

RMSE is used for comparison. Time denotes the training time in seconds.

https://doi.org/10.1371/journal.pone.0211402.t003

Table 4. Average ranks of TSVR with SVR on individual companies’ stocks using a linear and Gaussian kernel.

https://doi.org/10.1371/journal.pone.0211402.t004

The statistic FF is distributed according to the F-distribution with (1, 23) degrees of freedom, which has the critical value 4.2793 at the significance level α = 0.05. Here, FF = 0.6572 is lower than the critical value (0.6572 < 4.2793), so there is no significant difference between the two algorithms for the linear case.

Nonlinear case.

In the nonlinear case, Table 5 shows the average RMSE with standard deviation for the optimal parameter values and the training time in seconds. From Table 5, we can conclude that TSVR gives better results on 19 of the 24 datasets in terms of average test RMSE, which indicates the better prediction performance of TSVR in comparison to SVR. Table 5 also shows the superiority of TSVR over SVR in terms of computational time.

Table 5. Performance comparison of TSVR with SVR on individual companies’ stock datasets using a Gaussian kernel.

RMSE is used for comparison. Time denotes the training time in seconds.

https://doi.org/10.1371/journal.pone.0211402.t005

Similar to the linear case, the Friedman statistic for the individual stock datasets with the Gaussian kernel can be computed under the null hypothesis (which states that both algorithms perform similarly) from the average ranks in Table 4:

χ²_F = (12 × 24) / (2 × 3) × [(1.208333² + 1.791667²) − (2 × 3²) / 4] ≈ 8.1667,
FF = (24 − 1) × 8.1667 / (24 × (2 − 1) − 8.1667) ≈ 11.8632,

where FF is distributed according to the F-distribution with (1, 1×23) = (1, 23) degrees of freedom. Here, 4.2793 is the critical value of F(1, 23) at the significance level α = 0.05. Since FF = 11.8632 > 4.2793, we reject the null hypothesis. Furthermore, we performed pairwise comparisons of the reported methods using the Nemenyi post hoc test and verified the significance of the difference between their average ranks by computing the critical difference (CD) at p = 0.10. The difference between the average ranks should be at least CD = q_α √(k(k+1)/(6N)) = 1.645 √((2 × 3)/(6 × 24)) ≈ 0.3358.

Since the difference between the average ranks of TSVR and SVR (1.791667 − 1.208333 = 0.583334) is greater than 0.3358, we conclude that TSVR is significantly better than SVR on the individual stock datasets. For the nonlinear case, the absolute prediction errors of SVR and TSVR are shown in Figs 3 and 4 for the FB and RY datasets, respectively. Additionally, the actual and predicted values of SVR and TSVR are plotted in Figs 5 and 6 for the FB and RY datasets, respectively. It can easily be observed that the predictions of TSVR are in closer agreement with the observed values than those of SVR.
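The two test statistics quoted above can be reproduced with the following small sketch, which implements Demšar's formulas [40]; the sketch and the assumed Nemenyi constant q_0.10 = 1.645 for two methods are illustrative rather than taken from the paper.

```python
# Illustrative sketch only: Friedman F-statistic and Nemenyi critical difference
# following Demsar [40] for k methods compared on N datasets.
import math

def friedman_F(avg_ranks, N):
    k = len(avg_ranks)
    chi2 = 12.0 * N / (k * (k + 1)) * (sum(r * r for r in avg_ranks)
                                       - k * (k + 1) ** 2 / 4.0)
    return (N - 1) * chi2 / (N * (k - 1) - chi2)

def nemenyi_cd(k, N, q_alpha=1.645):      # q_0.10 for two methods (assumed constant)
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * N))

# Gaussian kernel, 24 individual-stock datasets (average ranks from Table 4):
print(friedman_F([1.208333, 1.791667], 24))   # ~11.86 > F(1, 23) = 4.2793
print(nemenyi_cd(2, 24))                      # ~0.3358
```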

Fig 3. Prediction error plots using a Gaussian kernel on the FB dataset.

https://doi.org/10.1371/journal.pone.0211402.g003

Fig 4. Prediction error plots using a Gaussian kernel on the RY dataset.

https://doi.org/10.1371/journal.pone.0211402.g004

Fig 5. Predicted and actual values using a Gaussian kernel on the FB dataset.

https://doi.org/10.1371/journal.pone.0211402.g005

Fig 6. Predicted and actual values using a Gaussian kernel on the RY dataset.

https://doi.org/10.1371/journal.pone.0211402.g006

Stock market index datasets

The stock market index datasets BSESN and HSI consist of 733 closing prices, DJI and IXIC have 751 closing prices, FCHI and IBEX consist of 763 closing prices, JKSE and TWII consist of 724 closing prices, MXX and SSMI have 750 closing prices, AEX consists of 763 closing prices, ATX consists of 737 closing prices, BFX consists of 762 closing prices, BVSP consists of 738 closing prices, and GDAXI, GSPTSE, KS11, N225, NSEI and STOXX50E consist of 755, 748, 728, 732, 731 and 745 closing prices, respectively, covering the period from 01-01-2015 to 31-12-2017. The current value is predicted by using the previous five closing prices.

Linear case.

For the linear kernel, Table 6 shows the average RMSE with standard deviation for the optimal parameter values and the training time in seconds. We can conclude that TSVR gives better results on 13 of the 20 datasets in terms of average test RMSE. Additionally, the training time of TSVR is lower than that of SVR. The nonparametric Friedman test with its corresponding post hoc test is performed on the average ranks over the 20 stock market index datasets from Table 7. The Friedman statistic [40] can be computed under the null hypothesis for the linear case; the statistic FF is distributed according to the F-distribution with (1, 19) degrees of freedom, which has the critical value 4.3807 at the significance level α = 0.05. Here, FF is less than the critical value, so there is no significant difference between these two algorithms for the linear case. Fig 7 shows the absolute prediction error plots of SVR and TSVR for the linear kernel on the BFX dataset. Fig 8 shows the actual and predicted values of SVR and TSVR for the linear kernel on the stock market index BFX dataset. One can easily conclude that the predictions of TSVR are in closer agreement with the target values than those of SVR.

Fig 7. Prediction error plots using a linear kernel on the BFX dataset.

https://doi.org/10.1371/journal.pone.0211402.g007

Fig 8. Predicted and actual values using a linear kernel on the BFX dataset.

https://doi.org/10.1371/journal.pone.0211402.g008

Table 6. Performance comparison of TSVR with SVR on stock market index datasets using a linear kernel.

RMSE is used for comparison. Time denotes the training time in seconds.

https://doi.org/10.1371/journal.pone.0211402.t006

Table 7. Average ranks of TSVR with SVR on stock market index datasets using a linear and Gaussian kernel.

https://doi.org/10.1371/journal.pone.0211402.t007

Nonlinear case.

For the nonlinear kernel, Table 8 shows the average RMSE with standard deviation for the optimal parameter values and the training time in seconds. We can conclude that TSVR gives better results on 19 of the 20 datasets in terms of average test RMSE. The training time of TSVR is less than that of SVR because it solves a pair of smaller-sized QPPs instead of the single large QPP solved by SVR. This shows the superiority of TSVR with respect to SVR.

Table 8. Performance comparison of TSVR with SVR on stock market index datasets using a Gaussian kernel.

RMSE is used for comparison. Time denotes the training time in seconds.

https://doi.org/10.1371/journal.pone.0211402.t008

In the nonlinear case for the stock market index datasets, the Friedman statistic can be computed under the null hypothesis from the average ranks in Table 7 as:

χ²_F = (12 × 20) / (2 × 3) × [(1.05² + 1.95²) − (2 × 3²) / 4] = 16.2,
FF = (20 − 1) × 16.2 / (20 × (2 − 1) − 16.2) = 81,

where FF is distributed according to the F-distribution with (1, 1×19) = (1, 19) degrees of freedom. Here, 4.3807 is the critical value of F(1, 19) at the significance level α = 0.05. Since FF = 81 > 4.3807, we reject the null hypothesis. Similar to the previous case, we perform pairwise comparisons of the reported methods using the Nemenyi post hoc test and verify the significance of the difference between their average ranks by computing the critical difference at p = 0.10; the difference between the average ranks should be at least CD = 1.645 √((2 × 3)/(6 × 20)) ≈ 0.3678.

Since the difference between the average ranks of TSVR and SVR (1.95 − 1.05 = 0.90) is greater than 0.3678, we conclude that TSVR is significantly better than SVR on the stock market index datasets. For the nonlinear case, the absolute prediction errors of SVR and TSVR are shown in Figs 9, 10 and 11 for the BVSP, DJI and IXIC datasets, respectively. The actual and predicted values of SVR and TSVR are plotted in Figs 12, 13 and 14 for the BVSP, DJI and IXIC datasets, respectively. It can easily be observed from these figures that the predictions of TSVR are in closer agreement with the desired output than those of SVR, which clearly demonstrates the applicability and usefulness of TSVR.

Fig 9. Prediction error plots using a Gaussian kernel on the BVSP dataset.

https://doi.org/10.1371/journal.pone.0211402.g009

Fig 10. Prediction error plots using a Gaussian kernel on the DJI dataset.

https://doi.org/10.1371/journal.pone.0211402.g010

Fig 11. Prediction error plots using a Gaussian kernel on the IXIC dataset.

https://doi.org/10.1371/journal.pone.0211402.g011

Fig 12. Predicted and actual values using a Gaussian kernel on the BVSP dataset.

https://doi.org/10.1371/journal.pone.0211402.g012

Fig 13. Predicted and actual values using a Gaussian kernel on the DJI dataset.

https://doi.org/10.1371/journal.pone.0211402.g013

Fig 14. Predicted and actual values using a Gaussian kernel on the IXIC dataset.

https://doi.org/10.1371/journal.pone.0211402.g014

Conclusion

In this paper, the support vector regression and twin support vector regression formulations are discussed in detail and applied to individual companies' stocks from the information technology, banking, and oil and petroleum sectors, as well as to stock market index datasets of different countries, to predict stock prices. In TSVR, a pair of smaller-sized QPPs is solved instead of the single large QPP solved in SVR, thus reducing the computational cost. To verify the effectiveness of TSVR, we performed numerical experiments with both linear and Gaussian kernels on financial time series datasets. In the experimental results, TSVR shows a better learning speed for both linear and Gaussian kernels, together with better generalization ability than SVR. In fact, the computation time of TSVR is approximately four times lower than that of standard SVR, which clearly indicates its usefulness and applicability. In future work, a new model that is able to handle noise and outliers in predicting the prices of stock indices can be explored.

References

  1. Vapnik VN. (2000). The nature of statistical learning theory, 2nd ed., Springer, New York.
  2. Osuna E, Freund R, Girosi F. (1997). Training support vector machines: An application to face detection, in Proceedings of Computer Vision and Pattern Recognition, 130–136.
  3. Huang C, Davis LS, Townshed JRG. (2002). An assessment of support vector machines for land cover classification. International Journal of Remote Sensing, 23, 725–749.
  4. Joachims T. (1998). Text categorization with support vector machines: learning with many relevant features, In: European Conference on Machine Learning No. 10, Chemnitz, Germany, 137–142.
  5. Brown MPS, Grundy WN, Lin D, Cristianini N, Sugnet CW, Furey TS, et al. (2000). Knowledge-based analysis of microarray gene expression data using support vector machine, Proceedings of the National Academy of Sciences of USA, 97(1), 262–267.
  6. Guyon I, Weston J, Barnhill S, Vapnik V. (2002). Gene selection for cancer classification using support vector machine, Machine Learning, 46, 389–422.
  7. Mukherjee S, Osuna E, Girosi F. (1997). Nonlinear prediction of chaotic time series using support vector machines, In: NNSP'97: Neural Networks for Signal Processing VII: in Proc. of IEEE Signal Processing Society Workshop, Amelia Island, FL, USA, 511–520.
  8. Muller KR, Smola AJ, Ratsch G, Schölkopf B, Kohlmorgen J. (1999). Using support vector machines for time series prediction, In: Schölkopf B, Burges CJC, Smola AJ (Eds.), Advances in Kernel Methods—Support Vector Learning, MIT Press, Cambridge, MA, 243–254.
  9. Lin F, Yeh C, Lee M. (2011). The use of manifold learning and support vector machines in the prediction of business failure, Knowledge-Based Systems, 24(1), 95–101.
  10. Boser BE, Guyon IM, Vapnik VN. (1992). A training algorithm for optimal margin classifiers. In Proceedings of the Annual Conference on Computational Learning Theory, Haussler D, Ed., ACM Press, Pittsburgh, PA, pp. 144–152.
  11. Joachims T. (1999). Making large-scale SVM learning practical. In Advances in Kernel Methods—Support Vector Learning, Schölkopf B, Burges CJC, Smola AJ, Eds., MIT Press, Cambridge, MA, pp. 169–184.
  12. Platt J. (1999). Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods—Support Vector Learning, Schölkopf B, Burges CJC, Smola AJ, Eds., MIT Press, Cambridge, MA, pp. 185–208.
  13. Achlioptas D, McSherry F, Schölkopf B. (2002). Sampling techniques for kernel methods, In Advances in Neural Information Processing Systems 14, Dietterich TG, Becker S, Ghahramani Z, Eds., MIT Press, Cambridge, MA.
  14. Fine S, Scheinberg K. (2001). Efficient SVM training using low-rank kernel representations, Journal of Machine Learning Research, 2, 243–264.
  15. Tsang IW, Kwok JT, Cheung PM. (2005). Very large SVM training using core vector machines. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, Barbados.
  16. Suykens JAK, Vandewalle J. (1999). Least squares support vector machine classifiers. Neural Processing Letters, 9(3), 293–300.
  17. Suykens JAK, Lukas L, Van DP, Moor BD, Vandewalle J. (1999). Least squares support vector machine classifiers: a large scale algorithm, European Conference on Circuit Theory and Design (ECCTD'99), Stresa, Italy, pp. 839–842.
  18. Mangasarian OL, Wild EW. (2006). Multisurface proximal support vector classification via generalized eigenvalues, IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(1), 69–74.
  19. Jayadeva, Khemchandani R, Chandra S. (2007). Twin support vector machines for pattern classification, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 29, 905–910.
  20. Peng X. (2010). TSVR: An efficient twin support vector machine for regression, Neural Networks, 23(3), 365–372.
  21. Saad EW, Prokhorov DV, Wunsch DC. (1998). Comparative study of stock trend prediction using time delay, recurrent and probabilistic neural networks, IEEE Transactions on Neural Networks, 9(6), 1456–1470. pmid:18255823
  22. Cao L, Tay FEH. (2001). Financial forecasting using support vector machines, Neural Computing and Applications, 10, 184–192.
  23. Kuo RJ, Chen CH, Hwang YC. (2001). An intelligent stock trading decision support system through integration of genetic algorithm based fuzzy neural network and artificial neural network, Fuzzy Sets and Systems, 118(1), 21–45.
  24. Prasad M, Li DL, Lin CT, Singh J, Prakash S. (2015). Designing Mamdani-type fuzzy reasoning for visualizing prediction problems based on collaborative fuzzy clustering. IAENG International Journal of Computer Science, 42(4).
  25. Singh J, Prasad M, Prasad OK, Er MJ, Saxena A, Lin CT. (2016). A novel fuzzy logic model for pseudo relevance feedback based query expansion. International Journal of Fuzzy Systems, 18(6), 980–989.
  26. Prasad M, Liu YT, Li DL, Lin CT, Shah RR, Kaiwartya OP. (2016). A new mechanism for data visualization with TSK-type preprocessed collaborative fuzzy rule based system. Journal of Artificial Intelligence and Soft Computing Research.
  27. Prasad M, Lin YY, Lin CT, Er MJ. (2015). A new data-driven neural fuzzy system with collaborative fuzzy clustering mechanism. Neurocomputing, 167, 558–568.
  28. Lin CT, Prasad M, Saxena A. (2015). An improved polynomial neural network classifier using real coded genetic algorithm. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 45(11), 1389–1401.
  29. Prasad M, Lin CT, Hong CT, Chang JY. (2017). Soft boosted self constructive neuro fuzzy inference network. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 47(3), 584–588.
  30. Kim KJ, Han I. (2000). Genetic algorithms approach to feature discretization in artificial neural networks for the prediction of stock price index, Expert Systems with Applications, 19, 125–132.
  31. Hassan MR, Nath B. (2005). Stock market forecasting using hidden Markov model: a new approach, Proc. 5th International Conference on Intelligent Systems Design and Applications, 192–196.
  32. Fama EF, French KR. (1988). Dividend yields and expected stock returns. Journal of Financial Economics, 22(1), 3–25.
  33. Lewellen J. (2004). Predicting returns with financial ratios. Journal of Financial Economics, 74(2), 209–235.
  34. Goh JC, Jiang F, Tu J, Wang Y. (2013). Can US economic variables predict the Chinese stock market? Pacific-Basin Finance Journal, 22, 69–87.
  35. Shen D, Zhang Y, Xiong X, Zhang W. (2017). Baidu index and predictability of Chinese stock returns. Financial Innovation, 3:4.
  36. Li X, Shen D, Zhang W. (2018). Do Chinese internet stock message boards convey firm-specific information? Pacific-Basin Finance Journal, 49, 1–14.
  37. Pissarenko D. (2002). Neural networks for financial time series prediction: Overview over recent research. BSc (Hons) Computer Studies Thesis, University of Derby in Austria. <http://citeseer.ist.psu.edu/pissarenko02neural.html>.
  38. http://finance.yahoo.com
  39. http://www.mosek.com
  40. Demšar J. (2006). Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7, 1–30.