
A novel decision ensemble framework: Attention-customized BiLSTM and XGBoost for speculative stock price forecasting

  • Riaz Ud Din,

    Roles Conceptualization, Data curation, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Artificial Intelligence Lab, Department of Computer Systems Engineering, University of Engineering and Applied Sciences (UEAS), Swat, Pakistan, Department of Computer Systems Engineering, University of Engineering and Technology (UET), Peshawar, Pakistan

  • Salman Ahmed,

    Roles Project administration, Supervision

    Affiliations Department of Computer Systems Engineering, University of Engineering and Technology (UET), Peshawar, Pakistan, Faculty of Computer Science and Engineering, Ghulam Ishaq Khan Institute of Engineering Science and Technology, Topi, Swabi, Pakistan

  • Saddam Hussain Khan ,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Supervision, Writing – review & editing

    saddamhkhan@ueas.edu.pk (SHK), balkhamees@ksu.edu.sa (BA)

    Affiliation Artificial Intelligence Lab, Department of Computer Systems Engineering, University of Engineering and Applied Sciences (UEAS), Swat, Pakistan

  • Abdullah Albanyan,

    Roles Formal analysis, Validation, Writing – review & editing

    Affiliation College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia

  • Julian Hoxha,

    Roles Investigation, Writing – review & editing

    Affiliation College of Engineering and Technology, American University of the Middle East, Egaila, Kuwait

  • Bader Alkhamees

    Roles Resources, Visualization, Writing – review & editing


    Affiliation Department of Information Systems, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia

Abstract

Forecasting speculative stock prices is essential for effective investment risk management and requires innovative algorithms. However, the speculative nature, volatility, and complex sequential dependencies within financial markets present inherent challenges that necessitate advanced techniques. In this regard, a novel framework, ACB-XDE (Attention-Customized BiLSTM-XGB Decision Ensemble), is proposed for predicting the daily closing price of speculative stock Bitcoin-USD (BTC-USD). The proposed ACB-XDE framework integrates the learning capabilities of a customized Bi-directional Long Short-Term Memory (BiLSTM) model with a novel attention mechanism and the XGBoost algorithm. The customized BiLSTM leverages its learning capabilities to capture complex sequential dependencies and speculative market trends. Meanwhile, the new attention mechanism dynamically assigns weights to influential features based on volatility patterns, thereby enhancing interpretability and optimizing effective cost measures and volatility forecasting. Moreover, XGBoost handles nonlinear relationships and contributes to the proposed ACB-XDE framework’s robustness. Furthermore, the error reciprocal method improves predictions by iteratively adjusting model weights based on the difference between theoretical expectations and actual errors in the individual attention-customized BiLSTM and XGBoost models. Finally, the predictions from both the XGBoost and attention-customized BiLSTM models are concatenated to create a varied prediction space, which is then fed into the ensemble regression framework to improve the generalization capabilities of the proposed ACB-XDE framework. Empirical validation of the proposed ACB-XDE framework involves its application to the volatile Bitcoin market, utilizing a dataset sourced from Yahoo Finance (Bitcoin-USD, 10/01/2014 to 01/08/2023). The proposed ACB-XDE framework outperforms state-of-the-art models with a MAPE of 0.37%, MAE of 84.40, and RMSE of 106.14. 
This represents improvements of approximately 27.45%, 53.32%, and 38.59% in MAPE, MAE, and RMSE respectively, over the best-performing attention-BiLSTM. The proposed ACB-XDE framework presents a technique for informed decision-making in dynamic financial landscapes and demonstrates effectiveness in handling the complexities of BTC-USD data.

1. Introduction

In today’s information-driven era, the stock market remains a global economic epicenter with far-reaching impacts on commerce, industry, and society. Traditional stock markets, characterized by established exchanges where shares of publicly traded companies are bought and sold, are integral to economic health and individual investment strategies. Predictive models have become essential tools in the financial industry to improve risk management strategies and financial decisions [1]. Speculative stocks like Bitcoin are complex and dynamic because they are inherently volatile and unpredictable. As of August 2024, more than 13,000 cryptocurrencies exist [2], with an estimated total market value of USD 2.32 trillion [3]; Bitcoin accounts for 54.8% of this, roughly USD 1.3 trillion. Given these market realities and trends, Bitcoin’s price fluctuates sharply with supply and demand, investors’ priorities, and the global economic situation [4].

Most investors depend on technical insights and market analysis to forecast and make trade-related decisions. Technical analysis focuses on historical price trends and data rather than market price dynamics; indicators such as moving averages are commonly used to forecast price movement. Fundamental analysis, on the other hand, is based on supply-demand factors and relies on company reports and balance sheets. Despite their intensive use, these methods have proven limited in their adaptability to the changing nature of both conventional and speculative markets [5]. Classical time series and regression analyses emerged as viable alternatives to address these shortfalls [6]. Different methods are used to analyze time series data and extract useful statistical information, e.g., the autoregressive integrated moving average (ARIMA) model, which is used to analyze and forecast time series [7]. Although ARIMA is useful for analyzing short- to medium-term prices, it struggles with the complex, non-linear patterns often prevalent in stock markets [8].

As a result, practical, learning-oriented approaches using machine learning (ML), such as the Support Vector Machine (SVM) [9], can effectively address the limitations of ARIMA models [10]. ML is computationally efficient enough to process and evaluate real-time stock market data, and it has displayed significant effectiveness in capturing the complex nature of financial markets, which are characterized by dynamic, multiple interactions among the different elements influencing stock prices. However, ML methods still face difficulty with large and complex datasets [9]. Deep learning (DL), a subset of ML, offers significant improvements over conventional ML techniques [11]. It acquires knowledge at multiple levels of description and interpretation, providing a more inclusive understanding of the data [9]. DL is capable of capturing complex interactions and forecasting price fluctuations in both conventional and speculative stocks [12]. Owing to this, DL has played an important role in a variety of fields, such as cancer diagnosis [13,14], detection of viral infection [15–17], cybersecurity [18,19], and intelligent transportation [20,21].

Recurrent Neural Networks (RNNs), especially Long Short-Term Memory (LSTM) networks associated with DL, offer strong feature-engineering capabilities. These models efficiently capture long-range dependencies and temporal trends in financial time series data, effectively addressing memory issues [22]. Moreover, combining LSTM with ML algorithms exploits correlated features and improves the accuracy of stock price trend prediction [23].

The hybrid decomposition-reconstruction model, which combines RNNs with gated recurrent units (GRUs), variational modal decomposition (VMD), and sample entropy (SE), has shown significant effectiveness [24–27]. The success of hybrid models is further exemplified by the EMD-BiLSTM model, which significantly enhances forecasting accuracy; moreover, integrating an attention mechanism into the BiLSTM model further improves accuracy from 58.50% to 71.26% [28,29]. These developments highlight the potential of innovative model architectures and integration strategies in advancing forecasting methodologies. Hybrid deep learning architectures such as CNN-BiLSTM models for financial derivatives exhibit the benefits of merging convolutional neural networks with BiLSTM: the CNN highlights salient features, while the BiLSTM focuses on sequential dependencies, enhancing accuracy across various metrics [30]. Recent research also demonstrates the BiLSTM model’s efficacy in non-financial forecasting applications, especially solar irradiation prediction, as it captures both short- and long-duration dependencies in changing data, yielding higher predictive accuracy [31].

In speculative stocks, research has focused on Bitcoin price prediction, highlighting the performance of the LSTM-BTC model and its uncertainties regarding future data and generalizability to other cryptocurrencies [32]. Comparative studies using random forest regression and LSTM provide valuable insights into improving Bitcoin price prediction methodologies [33]. The importance of sample dimensions in ML algorithms for accurate Bitcoin price prediction has been emphasized, with Logistic Regression and XGBoost achieving specific accuracies [34]. Bitcoin price prediction further improved with XGBoost-selected features compared to random forest (RF)-selected features [35]. Another study explores the optimization of DL models for various cryptocurrencies and particularly evaluates the performance of RNN variants such as LSTM, GRU, and BiLSTM for major cryptocurrencies [36]. While the BiLSTM has demonstrated its effectiveness in sequence modeling, it encounters difficulties when employed for stock price prediction: a significant issue lies in its uniform weight assignment to input features, neglecting the diverse levels of importance that impact stock prices.

Addressing the limitations of prior research, particularly those associated with daily trading volume and closing prices, this study proposes adopting ensemble techniques. The challenges inherent in daily trading volume and closing prices, encompassing volatility, non-linearity, external dependencies, noise, lack of clear patterns, and data quality issues, accentuate the necessity for resilient ensemble approaches to navigate the intricacies of financial market data adeptly.

This work proposes an innovative framework, ACB-XDE, to address the complexities of Bitcoin price forecasting. The framework combines an attention-customized BiLSTM with the XGBoost algorithm to predict the daily price of Bitcoin: the BiLSTM captures complex sequential dependencies and trends, while the XGBoost algorithm fine-tunes predictions to improve generalization and performance.

The proposed ACB-XDE framework is tailored to provide a user-friendly solution for investors and financial experts through simplification of the process of Bitcoin price forecasting. It enables an individual with limited expertise to benefit from its accurate and credible prediction by harnessing advanced DL techniques. The attention system of BiLSTM constantly assigns weight to the most significant features, such as daily closing prices and daily trading volume, making the model more reliable and interpretable for users. Moreover, the iterative refinement process in XGBoost also improves prediction accuracy for providing concise, timely, and useful information.

1.1. Our contributions are as follows

  1. The proposed ACB-XDE framework introduces an innovative approach for analyzing complex Bitcoin daily price trends by integrating an attention-customized BiLSTM with a novel attention block. This design enhances predictive performance through the refined capabilities of a modified XGBoost algorithm, offering superior trend prediction accuracy.
  2. The attention-customized BiLSTM captures complex sequential dependencies and trends in speculative market data and dynamically assigns weights to significant features, while the new attention block improves its learning capability by allocating greater weight to the daily closing price and daily trading volume.
  3. The error reciprocal method iteratively refines the attention-customized BiLSTM and XGBoost predictions, enhancing performance by overcoming discrepancies between theoretical expectations and actual errors.
  4. Finally, the predictions are combined through an ensemble learning approach, which integrates the individual predictions from the XGBoost and attention-customized BiLSTM models. This systematic integration enhances the diversity of predicted prices, thereby improving the generalization capabilities of the proposed ACB-XDE framework.
  5. Empirical validation of the proposed ACB-XDE framework on the BTC-USD dataset exhibits improved performance, effectively addressing the complexities and volatility of Bitcoin prices. Additionally, it optimizes cost measures and outperforms the most renowned recent techniques.

The remainder of this paper is organized as follows: the Proposed ACB-XDE Framework section explains the proposed framework; the Experimental Setup section details the experimental procedure; the Results section presents an analysis of the proposed framework and a comparison with state-of-the-art models; and the Conclusion section encapsulates the study’s conclusions and potential future directions.

2. Proposed ACB-XDE framework

This paper introduces a novel DL framework, ACB-XDE, to improve the prediction of Bitcoin daily prices, which are highly volatile and complex. The proposed framework combines four distinct techniques: an attention-customized BiLSTM with an additional new attention block, an XGBoost model, a strategic error reciprocal weighting scheme, and an ensemble approach. The new attention block improves the learning capability of the attention-customized BiLSTM by giving more weight to daily closing prices and volumes. This section explains the training processes for each model, how the error reciprocal weighting is calculated, and the ensemble prediction method. Moreover, the proposed ACB-XDE framework is tested on the speculative Bitcoin dataset and compared with the best current techniques, as shown in Fig 1. Additionally, scaling is applied as preprocessing to normalize the dataset.

2.1. Ensemble of attention-customized BiLSTM and XGBoost

The ACB-XDE framework is a prediction model that combines a deep learning technique, BiLSTM, with a machine learning ensemble technique, XGBoost. It uses a customized BiLSTM with a new attention mechanism to capture complex sequential patterns and spot market trends, efficiently modeling the time-based relationships in the data to improve trend detection. The addition of an XGBoost module improves the stability and generalizability of ACB-XDE by handling nonlinearities and avoiding overfitting. The error reciprocal method analyzes the predictions from the attention-customized BiLSTM and XGBoost and assigns greater weight to the model with the lower error rate. This weighted integration mitigates the limitations of the individual models and offers a robust framework for tackling the multifaceted challenges of the dynamic and speculative Bitcoin market. General and detailed overviews of the proposed ACB-XDE framework are visually represented in Figs 2 and 3, respectively. The framework capitalizes on the strengths of both components, providing a powerful and comprehensive foundation for improved Bitcoin price forecasting.

Fig 2. Brief Overview of the proposed Bitcoin prediction ACB-XDE framework.

https://doi.org/10.1371/journal.pone.0320089.g002

2.2. Weighting method

This paper employs the error reciprocal method for weight assignment to enhance the predictive accuracy of the proposed ACB-XDE framework. The error reciprocal method plays a pivotal role in elevating prediction performance: it penalizes a model with larger errors, i.e., significant deviations between predicted and actual values, by assigning a weight inversely proportional to those errors. Within the proposed ACB-XDE framework, the method therefore assigns greater weight to the model with smaller errors, reducing the overall prediction error. The weights are calculated from the error outcomes of the primary evaluation metric, MAPE, as expressed in the following formulas:

\hat{y}(t) = W_{bl}(t)\,\hat{y}_{bl}(t) + W_{xg}(t)\,\hat{y}_{xg}(t) \quad (1)

W_{bl}(t) = \frac{1/e_{bl}(t)}{1/e_{bl}(t) + 1/e_{xg}(t)} \quad (2)

W_{xg}(t) = \frac{1/e_{xg}(t)}{1/e_{bl}(t) + 1/e_{xg}(t)} \quad (3)

Here, \hat{y}(t) represents the final prediction. W_{bl}(t) and \hat{y}_{bl}(t) denote the weight and predicted value of the attention-customized BiLSTM, respectively, while W_{xg}(t) and \hat{y}_{xg}(t) represent the weight and predicted value of XGBoost. The weights are calculated from formulas (2) and (3), where the error values of the attention-customized BiLSTM and XGBoost are represented by e_{bl}(t) and e_{xg}(t), respectively.
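As a minimal illustration, the error-reciprocal weighting described above can be sketched in NumPy. The function names and scalar error inputs are illustrative, not taken from the paper's code:

```python
import numpy as np

def reciprocal_weights(e_bl, e_xg):
    # Formulas (2)-(3): weight each model by the reciprocal of its error,
    # normalized so the two weights sum to one.
    inv = np.array([1.0 / e_bl, 1.0 / e_xg])
    return inv / inv.sum()

def ensemble_forecast(pred_bl, pred_xg, e_bl, e_xg):
    # Formula (1): weighted combination of the two model predictions.
    w_bl, w_xg = reciprocal_weights(e_bl, e_xg)
    return w_bl * np.asarray(pred_bl) + w_xg * np.asarray(pred_xg)
```

With e_bl = 1 and e_xg = 3, the BiLSTM receives weight 0.75 and XGBoost 0.25, so the lower-error model dominates the blend.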

2.3. BiLSTM prediction model

The paper focuses on predicting Bitcoin (BTC) prices, proposing a novel ACB-XDE framework. The sample datasets are sourced from Yahoo Finance and encompass BTC-USD exchange rates. BiLSTM is selected because of its excellent capacity to identify patterns in sequential data and efficiently capture complex sequential dependencies, which perfectly matches the dynamic nature of the Bitcoin price [37].

In time-series financial forecasting, BiLSTM is widely used due to its effectiveness. It has also been effectively used in other non-financial domains, such as solar irradiation prediction. BiLSTM outperforms traditional approaches in the study of solar irradiation, considering both short- and long-term dependencies on structural data [31]. Additionally, integration of BiLSTM with convolutional layers and autoencoders has been shown to further enhance accuracy by extracting important hidden features from the input data and it also resolves gradient instability issues [30]. The proposed novel attention-customized BiLSTM in the ACB-XDE framework builds upon these principles by integrating novel components to enhance the model’s performance:

2.3.1. Embedding layer.

The embedding layer is a fundamental component responsible for transforming raw input features such as daily closing price, daily opening price, daily trading volume, daily high, daily low, and date into a continuous vector space. The input features start as one-dimensional arrays and get converted into dense, low-dimensional vectors by the embedding layer. This modification improves the model’s ability to predict stock prices by assisting it in comprehending patterns and temporal dependencies in the data. The parameters of the embedding layer are learned along with other neural network parameters during training using backpropagation. This allows the model to adjust dynamically to the input data and pull out important features for accurate forecasting. The output from the embedding layer then goes into the next LSTM layer.
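The embedding step can be pictured as a learned linear projection of the raw feature vector into a dense, low-dimensional space. The sketch below is an assumption about the layer's shape for illustration, not the authors' implementation; the projection matrix and bias would be learned by backpropagation along with the rest of the network:

```python
import numpy as np

def embed(features, W_e, b_e):
    # Project a raw feature vector (e.g., open, high, low, close, volume,
    # date ordinal) into a dense low-dimensional vector. W_e and b_e are
    # learned parameters (hypothetical shapes: 6 inputs -> 3 dimensions).
    return np.asarray(features, dtype=float) @ W_e + b_e
```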

2.3.2. LSTM layer.

Recurrent neural networks (RNNs) are effective tools for managing sequential data, but their effectiveness depends on the length of the time series. Over long sequences, repeated backpropagation can shrink gradients toward zero (the vanishing gradient problem), making learning difficult, or grow them exponentially (the exploding gradient problem), causing instability and preventing convergence [38]. LSTM models address the vanishing gradient problem through a gating system comprising input, output, and forget gates, as shown in Fig 4. These gates control the flow of information within the network: the input gate controls the infusion of new information, the forget gate decides what to discard from the cell state, and the output gate determines what information to pass on to the next layers. The interplay between the gates lets the LSTM layer effectively learn long-term dependencies within sequences. The hidden state, which connects the previous time step (Ht − 1) with the current input (Xt), provides the input for the LSTM gates. Subsequently, a fully connected layer computes the LSTM’s output.

I_g = \sigma(W_{Mxn} S_t + W_{nIg} P_{ht-1} + d_n) \quad (4)

F_g = \sigma(W_{xf} S_t + W_{nFg} P_{ht-1} + d_f) \quad (5)

C_m = \tanh(W_{Mxc} S_t + W_{nCm} P_{ht-1} + d_c) \quad (6)

C_{mt} = F_g \otimes C_{m(t-1)} + I_g \otimes C_m \quad (7)

O_g = \sigma(W_{og} [P_{ht-1}, S_t] + d_o) \quad (8)

‘Pht−1’ represents the hidden state from the preceding time step, while the input at a given time step ‘t’ is denoted ‘St’. The dimension of the hidden state is marked as ‘hs’. The terms ‘WMxn’ and ‘WnIg’ are the weight matrices of the input gate, ‘σ’ is the sigmoid function, and ‘dn’ is the offset term for the input gate. The weight matrices assigned to the forget gate are denoted ‘Wxf’ and ‘WnFg’, with the associated offset term ‘df’. The terms ‘WMxc’ and ‘WnCm’ are the weight matrices of the gated unit, with ‘dc’ the related offset term, and ‘Cm’ describes the candidate memory cells. Furthermore, ‘Cmt’ designates the cell state for the current time step, while ‘Cm(t−1)’ represents the cell state for the previous time step. Finally, ‘do’ is the offset term corresponding to the output-gate weight matrices ‘Wog’. The information flow in the hidden state is regulated by element-wise multiplication (⊗) and the activation function tanh, with a value range of [-1, 1]. The output gate Og handles the flow of information from the memory cell to the hidden state; the final enriched output Fo is given in Eq. (9), and Fig 5 illustrates the component connection topology of BiLSTM [30].

F_o = O_g \otimes \tanh(C_{mt}) \quad (9)
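The gate interplay of Eqs. (4)-(9) can be sketched as a single NumPy time step. The dictionary keys below are generic stand-ins for the paper's weight matrices (e.g., Wi/Ui/bi correspond to WMxn/WnIg/dn for the input gate); this is a pedagogical sketch, not the framework's code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM time step following Eqs. (4)-(9)."""
    i_g = sigmoid(p["Wi"] @ x_t + p["Ui"] @ h_prev + p["bi"])    # input gate, Eq. (4)
    f_g = sigmoid(p["Wf"] @ x_t + p["Uf"] @ h_prev + p["bf"])    # forget gate, Eq. (5)
    c_hat = np.tanh(p["Wc"] @ x_t + p["Uc"] @ h_prev + p["bc"])  # candidate memory, Eq. (6)
    c_t = f_g * c_prev + i_g * c_hat                             # cell state, Eq. (7)
    o_g = sigmoid(p["Wo"] @ x_t + p["Uo"] @ h_prev + p["bo"])    # output gate, Eq. (8)
    h_t = o_g * np.tanh(c_t)                                     # enriched output Fo, Eq. (9)
    return h_t, c_t
```

Because the output gate lies in (0, 1) and tanh in (−1, 1), each component of the hidden state is bounded strictly inside (−1, 1).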

The proposed ACB-XDE framework employs a specialized feature extraction method, leveraging the comprehensive information within the data to capture insights from both forward and backward perspectives. The outcomes of this two-way extraction are combined and summarized across the two directions. Strategically merging the two passes mitigates the inherent influence of input order on a single LSTM and enhances the comprehensiveness of the outputs. This methodology provides a robust foundation and effectively tackles challenges related to gradient instability and sequential feature extraction in speculative stock price prediction.

2.3.3. Attention mechanism of BiLSTM.

The new attention mechanism further refines the traditional BiLSTM to enhance the framework’s ability to capture critical market signals. The BTC-USD dataset often contains subtle features and patterns with varying degrees of importance. BiLSTM’s attention mechanism dynamically allocates weights to significant features and, as a result, improves interpretability and optimizes cost measures and volatility forecasts [39]. The foundational concept behind the attention mechanism is inspired by human attention dynamics: in human information processing, attention is selectively focused on key elements rather than uniformly distributed across all information. Integrating the attention mechanism into prediction models mirrors this cognitive approach and enables the assignment of distinct weights to data. This dynamic allocation mitigates the undue influence of certain input data on the output, amplifying the significance of pivotal information. Within the customized BiLSTM model, two pivotal attention strategies emerge. Notably, an attention gate replaces the conventional forget gate of traditional LSTM models, as illustrated in Fig 6. This gate attends exclusively to historical cell states, decoupled from the current input, leading to a notable reduction in overall training parameters. Additionally, the BiLSTM model applies attention weighting to the model’s output, allowing precise identification and utilization of the most crucial and influential information [29].
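The output attention-weighting strategy can be illustrated with a simple softmax pooling over the BiLSTM hidden states. This is a generic sketch of the idea, with an assumed scoring function, not the exact scoring used in the paper:

```python
import numpy as np

def attention_pool(H, w, b=0.0):
    """Score each hidden state in H (shape T x d), softmax-normalize the
    scores, and return the attention-weighted sum of the states."""
    scores = np.tanh(H @ w + b)            # one alignment score per time step
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha = alpha / alpha.sum()            # attention weights sum to 1
    return alpha @ H                       # weighted context vector, shape (d,)
```

When all time steps score equally, the pooling reduces to a simple average of the hidden states; informative steps otherwise receive larger weights.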

Fig 6. Traditional LSTM vs BiLSTM with an attention mechanism.

https://doi.org/10.1371/journal.pone.0320089.g006

2.4. New attention block

A newly employed attention mechanism enhances the learning capability of the proposed framework by allocating significant weight to the daily closing price and daily trading volume. These indicators are strongly correlated with market trends and investor sentiment, and they reflect critical aspects of market activity such as price movements and trading liquidity. The daily closing price encapsulates all intraday fluctuations, reflecting the asset’s final trading value for the day, while the daily trading volume indicates the number of units traded, highlighting market interest and liquidity. Emphasizing these indicators allows the model to capture direct signals of market trends and potential price movements. Empirical studies have shown that daily closing price and trading volume are reliable predictors of future price movements, so allocating larger weights to these features leverages their predictive power and leads to more accurate forecasts. Market data includes various features that might introduce noise into the model; the new attention mechanism reduces the impact of less significant data by focusing on the most relevant features, namely daily closing price and daily trading volume. This approach enhances the model’s clarity and predictive accuracy. Moreover, the dynamic nature of the attention mechanism allows it to adjust the weights of daily closing price and daily trading volume in response to changing market conditions, ensuring that the model remains robust and accurate even in volatile market environments. The detail of the newly employed attention block is illustrated in Fig 7. Wf denotes the full input feature set, and Wfeatures is the feature-weighting coefficient in the range [0, 1] (Equation (10)). The output XCA_out highlights the price pattern while suppressing irrelevant features. In Equations (11) and (12), σ1 and σ2 are the ReLU and Sigmoid activation functions, respectively, while bCA and bf are biases and Wv, Wp, and f are linear transformations.

X_{CA\_out} = W_{features} \otimes W_f \quad (10)

h = \sigma_1(W_v\, f(W_f) + b_{CA}) \quad (11)

W_{features} = \sigma_2(W_p\, h + b_f) \quad (12)
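One plausible reading of Equations (10)-(12) is a ReLU projection (σ1) followed by a sigmoid gate (σ2) whose output in [0, 1] re-weights the input features element-wise. The sketch below follows that reading; the matrix shapes and the two-layer composition are assumptions for illustration:

```python
import numpy as np

def attention_block(Wf, Wv, Wp, b_ca, b_f):
    """Re-weight the input features Wf: ReLU projection (sigma_1), sigmoid
    gate (sigma_2) producing weights in [0, 1], then element-wise scaling."""
    h = np.maximum(0.0, Wv @ Wf + b_ca)                  # Eq. (11), sigma_1 = ReLU
    w_features = 1.0 / (1.0 + np.exp(-(Wp @ h + b_f)))   # Eq. (12), sigma_2 = Sigmoid
    return w_features * Wf, w_features                   # Eq. (10), weighted output
```

Because the gate is a sigmoid, every feature weight lies strictly between 0 and 1, so no feature is amplified beyond its raw value.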

2.5. XGBoost forecasting model

In the proposed ACB-XDE framework, XGBoost provides a robust gradient-boosted decision tree implementation. XGBoost excels in handling diverse data types, managing nonlinear relationships, and preventing overfitting; it ensures model stability and generalizability, addresses over-reliance on sequential information, and enhances model interpretability. Furthermore, XGBoost’s ability to combine weak learners facilitates the capture of complex dynamic patterns, contributing to more accurate predictions even during significant market shifts [40]. XGBoost is a refined instantiation of the gradient-boosting decision tree paradigm, distinguished by its predictive speed and efficiency. It constructs decision trees iteratively, adding trees and progressively partitioning features, which helps the model capture complex patterns in the Bitcoin price data [41]. At each step, a new function, f(x), is created to fit the residual error of the previous prediction. Once all k trees are trained, each sample maps to one leaf node per tree, and every leaf node has a score; the final prediction for a specific sample is the sum of the scores from all contributing trees. The formal statement of XGBoost is as follows:

\hat{y}_i = \sum_{k=1}^{n} f_k(x_i) \quad (13)

Herein, \hat{y}_i indicates the forecast value and x_i the sample data, while n indicates the total number of trees and f_k the prediction function (leaf scores) of the k-th tree.

A new tree is added to the existing tree structure in each iteration cycle to model the residual disparity between the targets and the predictions of the previous trees. Equation (14) describes this iterative process, while Equations (15) and (16) state the objective function of XGBoost; \Omega is the regularization term in Equations (15) and (16).

\hat{y}_i^{(t)} = \hat{y}_i^{(t-1)} + f_t(x_i) \quad (14)

Obj^{(t)} = \sum_{i=1}^{p} l\left(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \Omega(f_t) \quad (15)

\Omega(f) = \gamma L + \frac{1}{2}\lambda \sum_{j=1}^{L} \omega_j^2 \quad (16)

Here, \hat{y}_i^{(t)} represents the model after t training rounds, \hat{y}_i^{(t-1)} signifies the retained model from the earlier round, and f_t is the recently introduced function. It is important to find the f_t that minimizes the chosen objective function. Equation (15) contains the regularization term \Omega, which penalizes the complexity of the tree; a lower value of \Omega correlates with improved generalization ability and decreased complexity. In Equation (16), L is the number of leaf nodes, \omega is the score awarded to a leaf node, \gamma controls the number of leaf nodes, and \lambda limits the scores of leaf nodes to avoid unnecessarily high values.

A second-order Taylor expansion around f_t = 0 is used to determine the f_t that minimizes the objective function. The resulting approximation of the objective function is expressed as:

Obj^{(t)} \approx \sum_{i=1}^{p} \left[ l\left(y_i, \hat{y}_i^{(t-1)}\right) + g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i) \right] + \Omega(f_t) \quad (17)

Here, g_i and h_i are the first and second derivatives of the loss with respect to the previous prediction, respectively. The constant terms l(y_i, \hat{y}_i^{(t-1)}) can be dropped, because the residual error of the original t−1 trees has no bearing on the optimization of the objective function. Further simplification of the objective function produces:

Obj^{(t)} = \sum_{i=1}^{p} \left[ g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i) \right] + \Omega(f_t) \quad (18)

Equation (18) aggregates the loss over all samples, simplifying the objective function. Then, samples belonging to the same leaf node are regrouped using Equation (19):

Obj^{(t)} = \sum_{j=1}^{L} \left[ \left(\sum_{i \in I_j} g_i\right) \omega_j + \frac{1}{2}\left(\sum_{i \in I_j} h_i + \lambda\right) \omega_j^2 \right] + \gamma L \quad (19)

where I_j denotes the set of samples assigned to leaf node j.

Hence, by reformulating the formulas above, the objective function becomes a univariate quadratic function in the leaf score \omega_j. The vertex formula then readily yields the optimal \omega_j^* and the corresponding value of the objective function:

\omega_j^* = -\frac{G_j}{H_j + \lambda} \quad (20)

Obj^* = -\frac{1}{2} \sum_{j=1}^{L} \frac{G_j^2}{H_j + \lambda} + \gamma L \quad (21)

v' = \frac{v - v_{min}}{v_{max} - v_{min}} \quad (22)

where G_j = \sum_{i \in I_j} g_i and H_j = \sum_{i \in I_j} h_i.
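The leaf-score formulas of Equations (20) and (21) are simple enough to verify numerically; the helper names below are illustrative, with G_j and H_j the per-leaf sums of first and second gradients:

```python
def leaf_weight(G_j, H_j, lam):
    # Eq. (20): optimal leaf score omega_j* = -G_j / (H_j + lambda)
    return -G_j / (H_j + lam)

def best_objective(leaves, lam, gamma):
    # Eq. (21): Obj* = -1/2 * sum_j G_j^2 / (H_j + lambda) + gamma * L,
    # where `leaves` is a list of (G_j, H_j) pairs and L = len(leaves).
    return -0.5 * sum(G * G / (H + lam) for G, H in leaves) + gamma * len(leaves)
```

For a single leaf with G = 2, H = 3, and lambda = 1, the optimal score is −0.5 and the objective value is −0.5·(4/4) + gamma·1.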

For effective computation and adherence to data input specifications, data normalization is a prerequisite. Cryptocurrency data is normalized using Equation (22), effectively confining the data within the [0, 1] range. In that formula, v and v' respectively denote the stock data value before and after normalization, while v_{min} and v_{max} represent the minimum and maximum values of the stock data before normalization.
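The min-max scaling described here can be written in a few lines of plain Python (the function name is illustrative):

```python
def min_max_normalize(values):
    # Scale a series to the [0, 1] range: (v - min) / (max - min).
    v_min, v_max = min(values), max(values)
    return [(v - v_min) / (v_max - v_min) for v in values]
```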

2.6. Models training and testing

Initially, the attention-customized BiLSTM and XGBoost models undergo training and testing individually, and then their respective predictions are combined into an ensemble model, as illustrated in Fig 8. The prediction method consists of four distinct stages, explained as follows:

Fig 8. Stock price prediction methodology in the proposed ACB-XDE framework.

https://doi.org/10.1371/journal.pone.0320089.g008

2.6.1. Stage 1: Data preprocessing.

The initial stage entails comprehensive data preprocessing to enhance generalization. Selecting key features, such as daily opening price, daily maximum and minimum price, daily trading volume, and daily closing price, is crucial at this stage. This meticulous selection ensures that the data is well-prepared for subsequent analysis. The preprocessing stage culminates in the normalization of the input data so that it is appropriately scaled for further analysis.

2.6.2. Stage 2: Models prediction.

In the second stage, the new attention mechanism and the customized BiLSTM model are utilized to predict Bitcoin prices. The new attention mechanism focuses on daily trading volume and daily closing price, thus capturing critical market signals. This strategic emphasis can indicate unsustainable trends, such as a declining price coupled with increasing or stagnant trading volume, suggesting a transient downturn. One of the most important indicators of market sustainability is the direct relationship between daily closing price and daily trading volume. The predictions from the new attention mechanism and the customized BiLSTM model are then combined to obtain a consolidated result. This ensemble approach leverages the strengths of each method, combining the market signal detection capabilities of the attention mechanism with the sequence learning capabilities of BiLSTM. Additionally, the XGBoost model is used independently for Bitcoin price prediction. XGBoost's gradient boosting framework effectively handles overfitting through regularization techniques such as L1 and L2 regularization.
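The paper's attention block is specific to its architecture, but the general idea of attention-weighted pooling over BiLSTM hidden states can be sketched in plain Python. The hidden states and scoring weights below are illustrative toys, not the trained model's values:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(hidden_states, score_w):
    """Score each timestep's hidden state, softmax the scores into
    attention weights, and return the weighted sum (context vector)."""
    scores = [sum(w * h for w, h in zip(score_w, hs)) for hs in hidden_states]
    alphas = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(a * hs[d] for a, hs in zip(alphas, hidden_states))
               for d in range(dim)]
    return context, alphas

# Toy BiLSTM outputs: 3 timesteps x 2 hidden units; score_w emphasizes the
# second unit (e.g., one hypothetically tracking close-price/volume signals).
H = [[0.1, 0.9], [0.4, 0.2], [0.8, 0.5]]
context, alphas = attention_pool(H, score_w=[0.0, 1.0])
print(alphas)
```

The attention weights sum to 1 and the timestep with the highest score receives the most weight, which is what lets the model selectively emphasize price/volume signals in the sequence.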

2.6.3. Stage 3: Assigning weights to models and ensemble models.

In stage 3, the error reciprocal approach is used to assign weights to the predictions made by the new attention-customized BiLSTM and XGBoost models based on their projected errors. This approach ensures that each model is provided with an appropriate weight and their results are ensembled to acquire the final Bitcoin price prediction. The results are compared with state-of-the-art models for assessing the performance of the proposed framework.
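The error reciprocal approach described above can be sketched as follows; the error values here are illustrative, not the paper's measured MAPEs:

```python
def error_reciprocal_weights(errors):
    """Weight w_k = (1/e_k) / sum_j(1/e_j): the smaller the error,
    the larger the weight assigned to that model."""
    inv = [1.0 / e for e in errors]
    s = sum(inv)
    return [i / s for i in inv]

def weighted_ensemble(preds_a, preds_b, errors):
    """Combine two models' predictions with error-reciprocal weights."""
    w_a, w_b = error_reciprocal_weights(errors)
    return [w_a * a + w_b * b for a, b in zip(preds_a, preds_b)]

# Illustrative MAPE-style errors for the two base models.
weights = error_reciprocal_weights([0.50, 0.40])
print(weights)
```

The weights always sum to 1, and the model with the lower error receives the higher weight, matching the assignment rule used in this stage.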

2.6.4. Stage 4: Evaluating prediction performance.

The performance efficiency of the ACB-XDE framework is assessed through a comparison of its prediction errors with six state-of-the-art models. The improvement in Bitcoin price prediction is observed in this step.

3. Experimental setup

This section describes the methodology, data collection, preprocessing, model training, and hardware configuration for the experimental setup used to assess the ACB-XDE framework.

3.1. Evaluation methodology

Python-based simulations are employed to conduct a comprehensive evaluation of various forecasting techniques. The proposed ACB-XDE framework is compared against established state-of-the-art models, including the attention-customized BiLSTM, XGBoost, LSTM, attention-LSTM, BiLSTM, and attention-BiLSTM.

3.2. Data acquisition and preprocessing

3.2.1. Data source.

A comprehensive assessment of data from various sources shows that Yahoo Finance offers the most current and reliable dataset for evaluating the ACB-XDE framework. The chosen dataset encompasses the following features: date, daily opening price, daily maximum price, daily minimum price, daily trading volume, and daily closing price. The structure of the data is illustrated in Fig 9.

The dataset is publicly available on Yahoo Finance and was accessed through the link given below:

https://finance.yahoo.com/quote/BTC-USD/history/?period1=1410912000&period2=1556053200&interval=1d&filter=history&frequency=1d

To further simplify access without restrictions, we have also made the dataset available on GitHub at the following link:

https://raw.githubusercontent.com/itsriaz/PricePrediction/refs/heads/main/Dataset1.csv

https://raw.githubusercontent.com/itsriaz/PricePrediction/refs/heads/main/Dataset2.csv

3.2.2. Data cleaning, handling missing values, and outlier detection.

The dataset obtained from Yahoo Finance undergoes extensive examination to verify its accuracy and completeness. The dataset is complete and does not require techniques such as forward filling to impute missing values. Key summary statistics, including count, mean, standard deviation, and percentiles (minimum, 25th, 50th, 75th, and maximum values), are calculated. These statistics provide insight into the data distribution before and after preprocessing, as shown in Table 1.

Additionally, Z-score outlier detection is used to differentiate between genuine price movements and potential data errors. A total of 21 outliers are detected in the close price based on Z-scores exceeding ±3, as shown in Table 2 and Fig 10. However, after inspection, these 21 outliers are confirmed to be genuine price movements and are therefore retained to preserve the dataset's integrity.
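The Z-score screening step can be sketched as below; the toy close-price series, with one extreme jump, is illustrative rather than real market data:

```python
import math

def zscore_outliers(values, threshold=3.0):
    """Flag indices whose Z-score |(v - mean) / std| exceeds the threshold."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [i for i, v in enumerate(values) if abs((v - mean) / std) > threshold]

# Toy close-price series: stable prices plus one extreme spike at the end.
closes = [100.0] * 30 + [101.0, 99.0] * 5 + [250.0]
print(zscore_outliers(closes))
```

As in the paper, flagged points are then inspected manually: a genuine market move (like a real price spike) is retained, while a data error would be corrected or removed.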

3.3. Data splitting and preprocessing

Two distinct datasets are utilized for this research work. Dataset 1 covers the period from October 1st, 2014 to November 7th, 2022, while Dataset 2 covers the period from November 8th, 2022 to August 1st, 2023. For smooth model training and balanced learning, the selected features are normalized using MinMaxScaler(), which scales values within the range [0, 1], as shown in Fig 11, and the target variable is set as the daily closing price.

3.4. Models training and evaluation

The attention-customized BiLSTM and XGBoost models are trained on 80% of the historical daily Bitcoin price data from Dataset 1. The remaining 20% is used for testing their performance. Subsequently, these pre-trained models are utilized for prediction on Dataset 2. The predicted values from both models are then fed into a pre-trained ensemble linear regression model to generate the final forecast. The model’s effectiveness is assessed through a comparative analysis between the final predictions and the actual values from Dataset 2. This analysis includes a thorough examination of the prediction errors.
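The split and stacking pipeline above can be sketched as follows. Note that for time series the split must be chronological (no shuffling), and the least-squares fit below is a stand-in for the pre-trained linear-regression ensemble; the base-model predictions are illustrative toys:

```python
def chronological_split(series, train_frac=0.8):
    """Split without shuffling: the first 80% trains the base models and
    the last 20% tests them (shuffling would leak future information)."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

def fit_linear_ensemble(p1, p2, y):
    """Least-squares weights for y ~ w1*p1 + w2*p2 (no intercept), solved
    via the 2x2 normal equations; stands in for the ensemble regressor."""
    a = sum(x * x for x in p1)
    b = sum(x * z for x, z in zip(p1, p2))
    c = sum(z * z for z in p2)
    d = sum(x * t for x, t in zip(p1, y))
    e = sum(z * t for z, t in zip(p2, y))
    det = a * c - b * b
    return (d * c - b * e) / det, (a * e - b * d) / det

closes = [float(i) for i in range(100)]   # stand-in daily close prices
train, test = chronological_split(closes)

# Toy base-model predictions on the training window (illustrative only).
p1 = [c + 1.0 for c in train]             # e.g., a BiLSTM-style forecast
p2 = [c * 0.5 for c in train]             # e.g., an XGBoost-style forecast
w1, w2 = fit_linear_ensemble(p1, p2, train)
print(len(train), len(test), w1, w2)
```

The fitted weights are then applied unchanged to the base models' predictions on the held-out data to produce the final forecast.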

3.5. Assessment criteria

The primary aim is to evaluate how well the ACB-XDE framework predicts outcomes. This evaluation uses three well-known statistical metrics. The main metric is the Mean Absolute Percentage Error (MAPE), which measures the average absolute percentage difference between predicted and actual values. MAPE is particularly useful in financial contexts as it shows prediction accuracy relative to the actual values. Additionally, auxiliary metrics such as the Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) are used in the evaluation process. RMSE highlights the spread of errors across the dataset by penalizing larger deviations more heavily than smaller ones. This metric is invaluable for identifying outliers or significant disparities between predicted and actual values. MAE offers a straightforward measure of prediction accuracy by calculating the average absolute difference between predicted and actual values. Its utility lies in providing an easily interpretable metric that considers all errors equally, irrespective of their magnitude or direction.

In Equations (23), (24), and (25), n is the number of Bitcoin data points, y_i is the real stock price, and ŷ_i is the predicted stock price. Using MAPE as the main evaluation metric, along with RMSE and MAE as auxiliary metrics, provides a thorough assessment of the ACB-XDE framework.

\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right| \quad (23)

\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}} \quad (24)

\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right| \quad (25)

\mathrm{CI} = \mathrm{MAPE} \pm Z \cdot \frac{\sigma}{\sqrt{n}} \quad (26)

These metrics together offer a well-rounded view of prediction accuracy, helping in robust evaluations and informed decisions on the framework's effectiveness. Furthermore, statistical analysis is also performed with 95% confidence intervals (CI) [42] based on MAPE as the main metric. This step accounts for variability in the predictions and therefore highlights the stability and consistency of the model's performance from trial to trial. Variations in MAPE for each model are calculated over 10 independent runs to ensure the reliability of the assessment of the predictive capabilities of the proposed ACB-XDE framework. In Equation (26), MAPE is the Mean Absolute Percentage Error, Z is the Z-score corresponding to the desired confidence level (for 95%, Z = 1.96), σ is the standard deviation of the MAPE values, and n is the number of samples (iterations or data points).
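The three evaluation metrics and the CI half-width are straightforward to compute; a self-contained sketch with toy actual/predicted values (not the paper's data) is:

```python
import math

def mape(actual, pred):
    """Mean Absolute Percentage Error, in percent (Equation 23)."""
    return 100.0 / len(actual) * sum(abs((a - p) / a)
                                     for a, p in zip(actual, pred))

def rmse(actual, pred):
    """Root Mean Squared Error (Equation 24)."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred))
                     / len(actual))

def mae(actual, pred):
    """Mean Absolute Error (Equation 25)."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def ci95_halfwidth(metric_values):
    """95% CI half-width, Z * sigma / sqrt(n) with Z = 1.96 (Equation 26)."""
    n = len(metric_values)
    mean = sum(metric_values) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in metric_values) / n)
    return 1.96 * sigma / math.sqrt(n)

actual = [100.0, 200.0, 400.0]
pred   = [110.0, 190.0, 380.0]
print(mape(actual, pred), rmse(actual, pred), mae(actual, pred))
```

Because RMSE squares the residuals, the single largest error dominates it, while MAE treats all errors equally, which is why the two are reported side by side.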

3.6. Hardware setup

The experimental hardware platform consists of an AMD Ryzen 7 4800H with Radeon Graphics (16 CPUs) running at 3GHz and 32GB RAM. The proposed ACB-XDE framework is implemented in Python, utilizing the py-XGBoost framework for XGBoost and the Keras DL framework for the attention-customized BiLSTM model.

3.7. Parametric configurations

This section delves into the configurations necessary for fine-tuning both the BiLSTM and XGBoost models. The choice of parameters significantly impacts the model’s predictive accuracy and generalization capabilities. Therefore, understanding the intricacies of model configuration is paramount for achieving optimal performance.


3.7.1. Attention-customized BiLSTM.

The performance of the attention-customized BiLSTM model is significantly influenced by factors such as the number of units, input feature dimension, and the number of layers. Detailed parameters configuration for this model is provided in Table 3.

3.7.2. XGBoost parameter configurations.

The evaluation of XGBoost is predominantly influenced by several key factors. These include the iterative decision tree process, the number of decision trees, the choice of a weak evaluator, the XGBoost objective function, the progress of model training, the control of model complexity, parameters of regular terms, and the sample size of random sampling with replacement. Table 4 lists all the XGBoost parameter configurations.
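The kinds of knobs listed above map directly onto the standard XGBoost parameter names. The values below are hypothetical placeholders for illustration, not the configuration from Table 4:

```python
# Illustrative XGBoost regression configuration (hypothetical values,
# not the paper's Table 4). Keys follow the standard xgboost parameter names.
params = {
    "objective": "reg:squarederror",  # regression objective function
    "n_estimators": 100,              # number of boosted decision trees
    "max_depth": 6,                   # controls model complexity
    "learning_rate": 0.1,             # shrinkage applied per boosting step
    "reg_alpha": 0.0,                 # L1 regularization term
    "reg_lambda": 1.0,                # L2 regularization term (lambda above)
    "subsample": 0.8,                 # fraction of rows sampled per tree
}

# Usage sketch (requires the xgboost package to be installed):
# import xgboost as xgb
# model = xgb.XGBRegressor(**params).fit(X_train, y_train)
```

Here `reg_lambda` is the λ appearing in the leaf-weight formula of Equation (21), and `subsample` corresponds to the random sampling of training rows mentioned above.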

4. Results

A series of experiments is conducted to evaluate the performance of the proposed ACB-XDE framework using two datasets: Dataset 1, covering the period from October 1st, 2014 to November 7th, 2022, and Dataset 2, extending from November 8th, 2022 to August 1st, 2023. Both datasets consist of the following Bitcoin features: daily opening price, daily closing price, daily trading volume, and daily high and low prices. Initially, the attention-customized BiLSTM and XGBoost models are individually trained and tested using Dataset 1, revealing the prediction errors for each model. In Figs 12 and 13, each model's predictive ability is presented and evaluated using key error measures: Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). After computing the errors, weights are assigned to the two prediction models using the error reciprocal method; a higher weight is assigned to the model demonstrating the smaller error. The weighted predictions are then combined using a weighted averaging approach. The primary evaluation index, MAPE, is utilized in this process, resulting in weights of 0.4252 for the attention-customized BiLSTM and 0.5748 for XGBoost. Consequently, the ensemble of the attention-customized BiLSTM and XGBoost prediction models is formed, and the ensemble ACB-XDE framework undergoes training and testing utilizing linear regression. All the important error metrics show improvement using the ACB-XDE framework: MAPE improved by 27.45% (from 0.51 to 0.37), MAE by 53.32% (from 137.72 to 84.40), and RMSE by 38.59% (from 144.73 to 106.14), as shown in Table 5.

Table 5. Error Analysis of the proposed framework and evaluation with the state-of-the-art model.

https://doi.org/10.1371/journal.pone.0320089.t005

Fig 12. Bitcoin price prediction using BiLSTM with new attention.

https://doi.org/10.1371/journal.pone.0320089.g012

Fig 14 illustrates the model’s performance in terms of metrics such as MSE, MAE, MAPE, and RMSE during both the training and validation phases, providing a comprehensive overview of the ensemble framework’s effectiveness. Furthermore, statistical analysis with a CI indicates that the ACB-XDE produces better results, showing a narrower CI range of MAPE (84.4 ± 5.24%).

Fig 14. The proposed ACB-XDE framework training and performance evaluation.

https://doi.org/10.1371/journal.pone.0320089.g014

The test results, outlined in Fig 15, further demonstrate the efficacy of the proposed ACB-XDE framework in real-world predictive scenarios. The second dataset (Dataset 2) is used to test the proposed ACB-XDE framework. It is fed to the pre-trained BiLSTM and XGBoost models, and their predictions are subsequently fed into the pre-trained ensemble model of linear regression to obtain the final prediction, as shown in Fig 16.

Fig 15. Training and Validating the proposed ACB-XDE framework.

https://doi.org/10.1371/journal.pone.0320089.g015

5. Discussion

The customized BiLSTM is employed to address the constraints of BTC-USD data, such as complex dependencies, high volatility, and irregular updates. It is chosen for its adeptness at handling the long-term dependencies and sequential learning inherent in time series data. However, a customized BiLSTM model introduces challenges such as susceptibility to overfitting, especially in the presence of noisy and limited financial data. The computational intensity and resources required for training BiLSTM add a further layer of complexity. Moreover, the interpretability of BiLSTM is compromised by its inherently black-box nature. In response to these limitations, the new attention mechanism and ensemble learning are introduced to improve interpretability by allowing selective focus on critical elements, such as daily closing price and daily trading volume, within the input sequence. Meanwhile, XGBoost is seamlessly integrated into the proposed ACB-XDE framework to address overfitting: it incorporates effective regularization techniques, adapts to non-stationary data, and brings a complementary strength to the ensemble. The predictions of the new attention-customized BiLSTM and XGBoost are then consolidated through an ensemble approach. This strategic integration capitalizes on the distinct architectures of both models, fostering a more resilient and accurate prediction model.

The efficacy of the proposed ACB-XDE framework is validated by employing LSTM, BiLSTM, attention-LSTM, attention-BiLSTM, attention-customized BiLSTM, XGBoost, and the proposed ACB-XDE framework for data prediction. Table 5 presents the error analysis of the proposed framework alongside state-of-the-art models, as shown in Fig 17. Additionally, Fig 17 is locally enlarged in Fig 18 to provide a clearer observation of the trend and proximity between each model's forecast results and the actual values. Similarly, Table 6 offers a comparative analysis of previous studies on Bitcoin across various distributions. This comparative analysis serves to underscore the performance of the proposed ACB-XDE framework in generating predictions. Additionally, the detailed performance gain and difference in price per day of the proposed ACB-XDE framework over the existing state-of-the-art models are shown in Figs 19 and 20, respectively. Moreover, Table 7 presents the 95% CI for the MAPE of the state-of-the-art models, including the proposed framework. The proposed ACB-XDE framework has the smallest interval range (84.4 ± 5.24%), which indicates that its predictions are better than those of LSTM, attention-LSTM, XGBoost, and the other models.

Table 6. Error analysis of the previous studies employed on bitcoin at various distributions.

https://doi.org/10.1371/journal.pone.0320089.t006

Fig 17. Comparison of the proposed ACB-XDE framework with state-of-the-art models.

https://doi.org/10.1371/journal.pone.0320089.g017

Fig 19. The proposed ACB-XDE framework and state-of-the-art models gain.

https://doi.org/10.1371/journal.pone.0320089.g019

Fig 20. Difference of the proposed framework and state-of-the-art models with actual price per day.

https://doi.org/10.1371/journal.pone.0320089.g020

5.1. Observations and trends from results

5.1.1. Trend comparison: Predicted values compared to actual values.

  a) Fig 17 presents a visual comparison between real values and predictions derived from various models, including LSTM, attention-LSTM, BiLSTM, attention-BiLSTM, new attention-BiLSTM, XGBoost, and ACB-XDE. In contrast to the BiLSTM model, which achieves lower accuracy, the proposed ACB-XDE framework achieves the highest accuracy.
  b) The ACB-XDE's curve closely aligns with the real value curve, showcasing superior fitting effects and a consistent trend.
  c) The ACB-XDE performs well in terms of accuracy and sensitivity to changes in proportionality. Fig 17 is locally enlarged in Fig 18 to provide a better understanding of patterns and the proximity between predicted and actual values.

5.1.2. Localized enlargement (July 6 to August 1, 2023).

Fig 18 highlights the enhanced consistency of the proposed ACB-XDE framework’s curve with the real value curve, surpassing benchmarks and other models. Figs 17 and 18 respectively demonstrate the better prediction accuracy of the proposed ACB-XDE framework compared to other models under consideration.

5.2. Comparison and analysis of errors

Table 5 outlines MAPE, MAE, and RMSE values for the six models. This analysis unveils valuable insights into their comparative performance.

  a) Model selection impact: The BiLSTM model surpasses LSTM with lower errors, emphasizing its pivotal role in this study.
  b) Attention mechanism influence: A thorough model comparison highlights the profound influence of the attention mechanism on prediction accuracy across the LSTM, attention-LSTM, BiLSTM, attention-BiLSTM, and attention-customized BiLSTM models.
  c) Benchmark model performance: XGBoost surpasses LSTM and BiLSTM, demonstrating superior prediction with the smallest error. Integrating XGBoost with attention-BiLSTM significantly enhances accuracy.
  d) Combined models superiority: Table 5 highlights the ACB-XDE's supremacy, with minimal MAPE, MAE, and RMSE values of 0.37, 84.40, and 106.14, respectively. Moreover, it has the narrowest CI range (84.4 ± 5.24%). This highlights the ensemble approach's effectiveness in minimizing overall prediction errors, surpassing individual models for better accuracy.

6. Conclusion

This paper introduces a novel ACB-XDE framework for predicting the daily price of Bitcoin. Bitcoin is chosen for this study because of its high complexity and volatility and its significant impact on financial markets, making its price prediction a robust test case for the proposed framework. The ACB-XDE framework combines an attention-customized BiLSTM, which incorporates a new attention block, with a modified XGBoost algorithm. These models leverage the capability to learn complex sequential dependencies and to recognize trends in speculative market behavior. The attention mechanism in the customized BiLSTM assigns weights dynamically and focuses on important features, with particular emphasis on daily closing prices and daily trading volumes. XGBoost helps prevent overfitting and makes the algorithm more efficient, improving overall predictive accuracy. The performance of the proposed ACB-XDE framework is tested against several state-of-the-art models, including the new attention-BiLSTM, XGBoost, LSTM, BiLSTM, attention-LSTM, and attention-BiLSTM. The ACB-XDE framework shows improved performance in handling complex sequential dependencies, high volatility, and dynamic patterns. Empirical validation on the BTC-USD dataset shows ACB-XDE's MAPE of 0.37%, MAE of 84.40, and RMSE of 106.14, outperforming the existing models. Compared to the new attention-BiLSTM, the ACB-XDE framework significantly reduced the error metrics: MAPE decreased from 0.51 to 0.37 (around 27.45% reduction), MAE from 137.72 to 84.40 (about 53.32% reduction), and RMSE from 144.73 to 106.14 (about 38.59% reduction). Statistical analysis using confidence intervals confirms the improved performance of ACB-XDE, which has the smallest CI range (84.4 ± 5.24%). These results highlight the improved predictive accuracy of the ACB-XDE framework.
In summary, the ACB-XDE framework effectively addresses the limitations of traditional and modern forecasting methods, showing better generalization capabilities. Future research directions include evaluating the framework across diverse datasets, expanding the array of evaluation indicators, and optimizing parameters and hyperparameters using methodologies like Bayesian optimization. Additionally, incorporating external influences, legal and regulatory aspects, and seasonality trends as input features will further refine the model. These strategic considerations aim to extend the application of the ACB-XDE framework to various fields, thereby broadening its impact and contributing to advancements in predictive modeling across multiple domains. Moreover, the proposed framework will be employed in multi-step-ahead forecasting.

Acknowledgments

The authors gratefully acknowledge the support of the Researchers Supporting Project number (RSP2025R493), King Saud University, Riyadh, Saudi Arabia. Additionally, the authors extend their thanks to the Artificial Intelligence Lab, Department of Computer Systems Engineering, University of Engineering and Applied Sciences (UEAS), Swat, for providing the necessary resources to carry out this research.

References

  1. Md A, Kapoor S, Chris C, Sivaraman A, Tee K, Sabireen H. Novel optimization approach for stock price forecasting using multi-layered sequential LSTM. Appl Soft Comput. 2023;134:109830.
  2. How Many Cryptocurrencies are There in 2024? [Internet]. [cited 2024 Jul 25]. Available from: https://explodingtopics.com/blog/number-of-cryptocurrencies
  3. Cryptocurrency Prices, Charts And Market Capitalizations | CoinMarketCap [Internet]. [cited 2024 Jul 25]. Available from: https://coinmarketcap.com/
  4. Stock Market Speculation [Internet]. Available from: https://study.com/academy/lesson/what-is-speculation-in-the-stock-market.html
  5. Almeida L, Vieira E. Technical analysis, fundamental analysis, and Ichimoku dynamics: a bibliometric analysis. Risks. 2023;11(8):142.
  6. Zhang J, Ye L, Lai Y. Stock price prediction using CNN-BiLSTM-attention model. Math. 2023;11(9):1985.
  7. Singh P, K R, Ruliana R, Pandey A, Gupta S, Saleh AA, et al. Comparison of ARIMA, SutteARIMA, and holt-winters, and NNAR models to predict food grain in India. Forecast. 2023;5(1):138–52.
  8. Shilpa BL, Shambhavi BR. Combined deep learning classifiers for stock market prediction: integrating stock price and news sentiments. Kybernetes. 2023;52(3):748–73.
  9. Zheng M. Studying stock prediction in the context of machine learning exemplified by analyzing SVM and LSTM models. Highlights Sci Eng Technol. 2024;85:1025–31.
  10. Vuong PH, Phu LH, Van Nguyen TH, Duy LN, Bao PT, Trinh TD. A bibliometric literature review of stock price forecasting: From statistical model to deep learning approach. Sci Prog. 2024;107(1):368504241236557. pmid:38490223
  11. Sheth D, Shah M. Predicting stock market using machine learning: best and accurate way to know future stock prices. Int J Syst Assur Eng Manag. 2023;14(1):1–18.
  12. Gülmez B. Stock price prediction with optimized deep LSTM network with artificial rabbits optimization algorithm. Expert Syst Appl. 2023;227:120346.
  13. Rauf Z, Sohail A, Khan SH, Khan A, Gwak J, Maqbool M. Attention-guided multi-scale deep object detection framework for lymphocyte analysis in IHC histological images. Microscopy (Oxf). 2023;72(1):27–42. pmid:36239597
  14. Zahoor MM, Khan SH. Brain Tumor MRI Classification using a Novel Deep Residual and Regional CNN. 2022 Nov 29 [cited 2023 Oct 6]. Available from: https://arxiv.org/abs/2211.16571v2
  15. Khan S, Iqbal R, Naz S. A recent survey of the advancements in deep learning techniques for monkeypox disease detection. Inst Univ Educ Física y Deport. 2023;9(2):43–56.
  16. Khan S, Alahmadi T, Alsahfi T, Alsadhan A, Mazroa A, Alkahtani H. COVID-19 infection analysis framework using novel boosted CNNs and radiological images. Sci Rep. 2023;13(1):21837.
  17. Khan SH, Shah NS, Nuzhat R, Majid A, Alquhayz H, Khan A. Malaria parasite classification framework using a novel channel squeezed and boosted CNN. Microscopy (Oxf). 2022;71(5):271–82. pmid:35640304
  18. Khan S, Alahmadi T, Ullah W, Iqbal J, Rahim A, Alkahtani H. A new deep boosted CNN and ensemble learning based IoT malware detection. Computers & Security. 2023;133:103385.
  19. Asam M, Khan SH, Jamal T, Zahoora U, Khan A. Malware Classification Using Deep Boosted Learning. 2021 Jul 8 [cited 2023 Oct 6]. Available from: http://arxiv.org/abs/2107.04008
  20. Qamar S, Khan S, Arshad M, Qamar M, Gwak J, Khan A. Autonomous drone swarm navigation and multitarget tracking with island policy-based optimization framework. IEEE Access. 2022;10(1):91073–91.
  21. Arshad M, Khan S, Qamar S, Khan M, Murtza I, Gwak J. Drone navigation using region and edge exploitation-based deep CNN. IEEE Access. 2022;10(1):95441–50.
  22. Khan AH, Shah A, Ali A, Shahid R, Zahid ZU, Sharif MU, et al. A performance comparison of machine learning models for stock market prediction with novel investment strategy. PLoS One. 2023;18(9):e0286362. pmid:37733720
  23. Zhao Y, Yang G. Deep learning-based integrated framework for stock price movement prediction. Applied Soft Computing. 2023;133:109921.
  24. Shah J, Vaidya D, Shah M. A comprehensive review on multiple hybrid deep learning approaches for stock prediction. Intelligent Syst Appl. 2022;16(1):200111.
  25. Guo Y, Guo J, Sun B, Bai J, Chen Y. A new decomposition ensemble model for stock price forecasting based on system clustering and particle swarm optimization. Applied Soft Computing. 2022;130(1):109726.
  26. Cui C, Wang P, Li Y, Zhang Y. McVCsB: A new hybrid deep learning network for stock index prediction. Expert Syst Appl. 2023;232(1):120902.
  27. Zhang S, Luo J, Wang S, Liu F. Oil price forecasting: A hybrid GRU neural network based on decomposition–reconstruction methods. Expert Syst Appl. 2023;218:119617.
  28. Xu T, He X. EMD-BiLSTM stock price trend forecasting model based on investor sentiment. Frontiers in Computational Intelligence Systems. 2023;4(3):139–43.
  29. Zhao S, Lin X, Weng X. Attention-BiLSTM stock price trend prediction model based on empirical mode decomposition and investor sentiment. J Comput Appl [Internet]. 2023 Jun 30 [cited 2023 Oct 24];43(S1):112. Available from: http://www.joca.cn/EN/10.11772/j.issn.1001-9081.2022060863
  30. Sharma A, Verma CK, Singh P. Enhancing option pricing accuracy in the Indian market: a CNN-BiLSTM approach. Comput Econ. 2024.
  31. Chiranjeevi M, Karlamangal S, Moger T, Jena D. Solar irradiation prediction hybrid framework using regularized convolutional BiLSTM-based autoencoder approach. IEEE Access. 2023;11:131362–75.
  32. Ateeq K, Al Zarooni A, Rehman A, Khan M. A mechanism for bitcoin price forecasting using deep learning. Int J Adv Comput Sci Appl. 2023;14(8):441–8.
  33. Chen J. Analysis of Bitcoin price prediction using machine learning. J Risk Financial Management. 2023;16(1):51.
  34. Ranjan S, Kayal P, Saraf M. Bitcoin price prediction: A machine learning sample dimension approach. Comput Econ. 2023;61(4):1617–36.
  35. Zhu Y, Ma J, Gu F, Wang J, Li Z, Zhang Y. Price prediction of bitcoin based on adaptive feature selection and model optimization. Mathematics. 2023;11(6):1335.
  36. Seabe P, Moutsinga C, Pindza E. Forecasting cryptocurrency prices using LSTM, GRU, and bi-directional LSTM: A deep learning approach. Fractal and Fractals. 2023;7(2):203.
  37. Patel R, Chauhan J, Tiwari NK, Upaddhyay V, Bajpai A. A Deep Learning Framework for Hourly Bitcoin Price Prediction Using Bi-LSTM and Sentiment Analysis of Twitter Data. SN Comput Sci. 2024;5(6).
  38. Kolemen E, Egrioglu E, Bas E, Turkmen M. A new deep recurrent hybrid artificial neural network of gated recurrent units and simple seasonal exponential smoothing. Granular Computing. 2024;9(1):7.
  39. Qin C, Qin D, Jiang Q, Zhu B. Forecasting carbon price with attention mechanism and bidirectional long short-term memory network. Energy. 2024;299:131410.
  40. Ranjan S, Kayal P, Saraf M. Bitcoin Price Prediction: A Machine Learning Sample Dimension Approach. Comput Econ. 2022;61(4):1617–36.
  41. Khosravi M, Ghazani M. Novel insights into the modeling financial time-series through machine learning methods: Evidence from the cryptocurrency market. Expert Systems with Applications. 2023;234:121012.
  42. Khan S, Iqbal J, Hassnain S, Owais M, Mostafa S, Hadjouni M. COVID-19 detection and analysis from lung CT images using novel channel boosted CNNs. Expert Syst Appl. 2023;229:120477.
  43. Liapis CM, Karanikola A, Kotsiantis S. Investigating Deep Stock Market Forecasting with Sentiment Analysis. Entropy (Basel). 2023;25(2):219. pmid:36832586
  44. Wirawan I, Widiyaningtyas T, Hasan M. Short term prediction on bitcoin price using ARIMA method. In: 2019 International Seminar on Application for Technology of Information and Communication (iSemantic). IEEE; 2019. p. 260–5. Available from: https://ieeexplore.ieee.org/document/8884257/
  45. Livieris IE, Kiriakidou N, Stavroyiannis S, Pintelas P. An advanced CNN-LSTM model for cryptocurrency forecasting. Electronics. 2021;10(3):287.
  46. Li Y, Dai W. Bitcoin price forecasting method based on CNN-LSTM hybrid neural network model. J Eng. 2020;2020(13):344–7.
  47. Wen N, Ling L. Evaluation of cryptocurrency price prediction using LSTM and CNNs models. JOIV Int J Informatics Vis. 2023;7(3–2):2016–24.