Abstract
The fabric tearing performance test is an important part of evaluating fabric durability. This paper addresses real-time prediction for the fabric tearing performance testing process by effectively extracting key features from experimental data and constructing a prediction model suited to that process. The study adopts BLTT-FT, a trend prediction model for the fabric tear performance testing process based on a “bidirectional long- and short-term attention mechanism”, which combines an improved Bi-directional Long Short-Term Memory (BiLSTM) structure, a Transformer encoding layer, and a Temporal Convolutional Network (TCN) layer. While considering sequence information globally, the model captures the bidirectional dependence of the time series, reduces model complexity through the TCN layer, and finally optimizes prediction accuracy via a fully connected layer and activation function, thereby achieving multi-step prediction. Analysis of variance (ANOVA) indicates that, across multiple datasets constructed from fabrics with different elasticity grades, the model shows extremely significant differences (p < 0.001) in Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE) at each prediction step. Furthermore, it maintains a low error level even over long prediction horizons: the average multi-step RMSE is 0.0881, the average multi-step MAE is 0.0609, the average multi-step MAPE is as low as 3.06%, and the average multi-step coefficient of determination (R2) is as high as 0.9572. Ablation experiments confirm that multi-module hierarchical modeling effectively addresses both the detail accuracy of single-step prediction and the long-range dependence of multi-step prediction. The results show that the proposed model performs well in real-time trend prediction on datasets constructed from fabrics with different elasticity grades. By predicting the dynamics of the fabric tearing performance testing process in real time, this study offers exploratory value for improving experimental efficiency and optimizing the experimental process.
Citation: Jiao Q, Zhang Y, Lu Y, He B, Zhu M, Wang K (2025) Trend analysis and prediction of fabric tear performance testing processes based on the BLTT-FT model. PLoS One 20(12): e0336501. https://doi.org/10.1371/journal.pone.0336501
Editor: Jinran Wu, The University of Queensland, AUSTRALIA
Received: July 22, 2024; Accepted: October 27, 2025; Published: December 1, 2025
Copyright: © 2025 Jiao et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data supporting the results of this study have been stored on the Github website at: https://github.com/yuyuyu123YUYUYU/DS.
Funding: This study was funded by the following grants: 1. Research and Application of Key AI Evaluation Technologies for Fabric Pilling Based on Machine Vision/Zhejiang Provincial Market Supervision Administration (Project No. ZD2025007). 2. Research and Establishment of Data Identification Model for Abnormal Behaviors in Inspection and Testing/Open Project of the National Market Regulation Technology Innovation Center (Digital Research and Application of Market Regulation) (Project No. 2024SF02WX0007). 3. Science and Technology Planned Project of the State Administration for Market Regulation (Project No. CY2023213). 4. “Chu Ying” Project (Core Project) of Zhejiang Administration for Market Supervision (2022MK057). 5. Natural Science Foundation of Zhejiang Province (Project No. LGG20F020008).
Competing interests: The authors have declared that no competing interests exist.
Introduction
With the development of modern industrial technology, elastic fabrics have been widely used in fields such as clothing and medical care by virtue of their excellent elasticity and comfort [1,2]. Fabric structures are categorized into four types: woven fabrics, braided fabrics, tertiary fabrics, and nonwovens. Compared with other fabrics, woven fabrics have good dimensional stability and the highest covering yarn stacking density in the warp and weft directions. Whenever a new fabric is produced, a series of tests, such as tearing performance, pilling, and tensile strength tests, is usually carried out; these play a vital role in assessing product quality [3]. The samples are tested and analyzed, and if a sample passes all tests and meets the requirements, the fabric is ready for mass production. The tearing performance of textiles is an important index for evaluating the performance of clothing [4]. In a tear, the yarns in the fabric break in sequence until the fabric is completely torn; this is closer to the sudden rupture that occurs in actual use and therefore reflects the toughness of textiles more effectively.
With the continuous progress of cutting-edge technologies such as sensors, artificial intelligence, big data, and the industrial Internet of Things, traditional manufacturing is accelerating its transformation toward smart manufacturing [5]. Researchers worldwide have extensively explored various industrial experimental processes, with particular focus on innovations in information acquisition; on this basis, experimental monitoring technology can be divided into two main categories: direct monitoring and indirect monitoring [6]. Traditional mechanical testing methods are mainly used to measure and evaluate the tear strength of elastic fabrics, including pendulum, single-tongue, or trapezoidal specimen tear tests. These methods obtain tear strength data by applying force at a fixed rate until the fabric tears [7]. Although such direct monitoring methods have played a role in the past, they are limited in their ability to track and analyze data online in real time. In today's research context and the Industry 4.0 era, with the deep integration of sensor technology, artificial intelligence, big data analysis, and Internet of Things (IoT) technology, the traditional mode of experimental monitoring and analysis is gradually shifting toward intelligence and predictability. Indirect monitoring continuously captures multivariate physical signals during the experimental process through an integrated high-precision sensor network [8] and uses machine learning and deep learning algorithms to uncover the hidden laws behind these signals and their complex correlation with the experimental process, a strategy that lays a solid data foundation for intelligent prediction [9]. In some existing studies, sensors have been deployed in the test environment to indirectly acquire test data and analyze the trend of the experimental process, enabling tracing and prediction of the experiment [10]. Kuntoğlu M et al. [11] systematically analyzed the correlation between sensor data and tool wear and developed a multi-sensor tool condition monitoring system that effectively identifies tool wear by monitoring the energy used in the cutting process, thus enabling real-time monitoring of cutting. Pazikadin AR et al. [12] used an artificial neural network to predict solar power generation from data measured by a solar irradiance sensor.
When this type of testing is performed in a fabric performance testing laboratory, it is susceptible to factors such as equipment operation, personnel handling, and the test environment, and it lacks non-intrusive system awareness. In a preliminary stage, we proposed a multi-source data-driven state perception and classification study for fabric tearing performance detection [13]. The designed non-intrusive system [14] addressed the problem that experimental results may be affected by the operator's personal experience and subjective judgment, ensured the consistency and accuracy of each state-transition judgment, and carried out indirect monitoring of electricity consumption. The value of electrical power as an indicator of equipment operating status is evident at several levels [15], especially in industrial automation, equipment monitoring, and fault diagnosis.
In textile tearing performance testing equipment, the electric power parameter reflects the real-time status and potential problems of equipment operation. Electrical power reflects the energy consumption of the equipment during operation, with different kinds of work corresponding to different energy consumption patterns. Monitoring changes in the electric power parameters in real time during the experiment provides the experimenter with immediate decision support [16], helping to optimize the experimental process and improve experimental efficiency and the reliability of results. In a previous comparison of electrical power parameters and mechanical characteristics, we found that their fluctuation trends were consistent, supporting the idea that mechanical variations can be represented with less intrusive power-aware data; however, predictive studies building on this basis are still lacking. This study is dedicated to bridging this research gap and seeking a more effective state prediction method for further optimization of the experimental process.
The research in this paper includes the following aspects:
(1) In this paper, we propose an experimental process trend prediction model (BLTT-FT) based on the “Bidirectional Long and Short-Term Attention Mechanism” for fabric tearing performance testing. An innovative prediction framework is formed by combining the Transformer, a Temporal Convolutional Network (TCN), and an improved BiLSTM.
(2) The proposed model combines a BiLSTM structure composed of improved LSTMs, a Transformer encoding layer, and a TCN layer. It utilizes the improved Bi-LSTM to capture the bidirectional dependence of sequences, employs the self-attention mechanism of the Transformer encoding layer to consider sequence information globally, and then adopts the TCN layer to optimize the processing of variable-length sequences so as to reduce model complexity. Finally, the prediction accuracy is optimized through the fully connected layer and activation function.
(3) The proposed model is used for multi-step prediction of power changes. Its effectiveness and accuracy are verified through comparative analysis with other models, and the synergy of the model components is remarkable: when handling tearing experimental data from fabrics with different elasticity levels, the model demonstrates excellent prediction performance and strong generalization ability. The model also integrates with the previously developed situational awareness system and makes full use of the electric power parameter to predict the trend of the equipment during fabric tearing experiments.
In summary, the aim of this study is to develop a BLTT-FT model based on electrical power sequences for real-time prediction of the fabric tearing experimental process. By carefully analyzing the mapping relationship between the signal characteristics of the electric power sequence and the whole experimental process, and building on the historical electric power data collected in the earlier stage, the existing functions of the situational awareness system are improved and an overall prediction capability for the fabric tearing experimental process is added, so as to improve the efficiency and accuracy of the experiments, simplify the experimental process to a certain extent, and optimize the whole experimental workflow.
Related work
Application of artificial intelligence in fabric performance prediction
In recent years, more and more researchers have begun to adopt data-driven prediction methods to establish prediction models based on the static attributes of woven fabrics (e.g., fiber type, yarn specification, fabric structure, etc.), and have achieved a certain degree of prediction accuracy.
Ahirwar M et al. [17] developed a machine learning-based neural network approach to predict fabric performance, using fabric parameters as input and warp and weft tear strength as output. Hossain MM et al. [18] used a correlation regression model to explain the effect of structural parameters on the tear and tensile strength of various base fabric designs and developed a neural network model for predicting tear and tensile strength. Ribeiro R et al. [19] successfully developed models with high prediction accuracy by applying machine learning techniques and analyzing multiple features of the textile production process. Xiao Q et al. [20] proposed an intelligent pilling prediction model based on a BP neural network and an optimization model based on a genetic algorithm to improve the training speed and accuracy of pilling prediction. Tu Y F et al. [21] conducted a systematic review of AI-driven fabric performance and handle prediction technologies (focusing on model mechanisms, dataset diversity, and prediction accuracy); they identified research gaps and challenges in this field, providing practical references for improving AI prediction capabilities and guiding future innovations in textile technology. Sarkar J et al. [22] used Adaptive Neuro-Fuzzy Inference System (ANFIS) and Artificial Neural Network (ANN) methods to develop predictive models for textile substrate absorption properties, which can help in the scale-up of functional textiles. Doran EC et al. [23] proposed artificial neural network (ANN) and support vector machine (SVM) models to predict the quality characteristics of cotton/elastane fiber core yarns using fiber quality and spinning parameters.
Most existing research focuses on predicting finished-product quality, while real-time monitoring and prediction of the fabric testing process itself has rarely been addressed. In addition, most existing monitoring techniques focus on data analysis under static conditions and lack a real-time response mechanism to dynamic changes during the experiment, which cannot meet the modern textile industry's need for efficient, accurate, and intelligent production processes.
Predictive applications of time series data
Since the data collected during fabric performance experiments are time-series data, the methods for time-series prediction are broadly categorized into three types: statistical analysis, machine learning, and deep learning. Guo N et al. [24] created a hybrid prediction model by combining ARIMA (Autoregressive Integrated Moving Average) with SVR (Support Vector Regression) to more accurately predict electricity consumption data collected in an Internet of Things (IoT) environment. Xie Y et al. [25] proposed a hybrid model of ARIMA and triple exponential smoothing, which can accurately predict linear and nonlinear relationships in container resource loading sequences. However, both ARIMA and exponential smoothing methods rely heavily on historical data, and they are not a suitable choice for forecasting long-term time series when the data are highly variable.
In recent years, deep learning has achieved excellent results in time series prediction. BiLSTM is able to better capture contextual information when dealing with sequential data: Guo Y et al. [26] proposed an MES combined load prediction method based on bi-directional long short-term memory (BiLSTM) multi-task learning and achieved good prediction results; Wu K et al. [27] proposed a hybrid prediction model based on wavelet threshold denoising (WTD), variational modal decomposition (VMD), and BiLSTM networks to reduce short-term household load forecasting errors caused by small load sizes and differing residential electricity consumption behaviors, which provided more stable and accurate predictions under trend feature extraction and improved short-term household load forecasting accuracy. The Transformer effectively captures long-term dependencies in time-series data, ensuring chronological accuracy; its flexible model structure allows adjustments to accommodate data of varying complexity, and the encoder-decoder architecture is particularly well suited to predicting future points in time [28]: Qu K et al. [29] applied the Transformer model from Natural Language Processing (NLP) to wind power prediction, which not only accurately extracts different correlation levels between multiple wind farms but also gives accurate wind power prediction results; Reza S et al. [30] designed a multi-attention-based Transformer model for traffic flow prediction using five identical encoder and decoder layers and combined it with a comparative analysis against gated recurrent unit and long short-term memory based models, with good performance in effectively predicting long-term traffic flow patterns. Guo J et al. [31] proposed a hybrid method for bearing failure prediction: constructing health indicators through CEEMDAN and KPCA, extracting multi-domain features using a dual-channel Transformer with CBAM, and realizing RUL probability prediction by combining the 3σ criterion and the Wiener process, which verified the time-series modeling potential of the Transformer. In recent years, temporal convolutional networks (TCNs) have emerged as a new approach to temporal problems, with significant advantages in weight sharing and local convolutional perception, and have achieved excellent performance in fusion prediction with other models: Lu P et al. [32] used a TCN to extract the hidden temporal features in wind power data to establish an Informer wind power prediction model; Liu S et al. [33] proposed a parallel-structure TCN-LSTM wind power prediction model based on Savitzky-Golay filtering and TCN; Zhang G et al. [34] proposed a novel hybrid model based on an adaptive quadratic decomposition method and a robust temporal convolutional network (RTCN) for wind speed prediction.
This paper will extend the applicability of these models to a wider range of complex prediction tasks, including dynamic correlation and trend prediction of electrical power sequences and experimental processes for fabric tear performance testing.
Research methods
Overall model design
The model developed in this paper predicts the trend of the electric power parameters monitored during the fabric tearing performance test, and the overall workflow consists of three parts: data preprocessing, data division, and the BLTT-FT model. The main flow is shown in Fig 1.
Flow of electrical parameters prediction for the experimental process of fabric tearing performance based on BLTT-FT fusion model.
The model first receives the raw data stored by the sensors in the database and composes the complete sample data by taking the electric power parameters in the chronological order in which the experiment is performed. The data are then preprocessed, mainly for missing values, smoothing, and data leakage. Data partitioning integrates the principles of discrete-time state machines with the concept of a dynamically tuned rolling window, focusing on the prediction of the multi-step states occurring after a single “change-tensile-reset” state. The model is trained on the partitioned data and then rolls forward step by step: whenever one “change-tensile-reset” state is completed, the state window rolls to the next state point and the BLTT-FT prediction model is updated accordingly, forming a closed loop of continuous iteration and optimization. In this way, a real-time prediction environment is simulated and the output prediction results are obtained.
BLTT-FT system network construction
In this paper, the Transformer model is used as the basis, and a temporal convolutional network (TCN) is introduced to modify it so that it is better suited to the time-series prediction task in this paper. The Transformer model allows parallel computing and good access to global information; however, its ability to capture local features is weak, making it less than ideal as a standalone predictor. The TCN captures both high-level and low-level features with stable gradients and enables the model to process time-series information in parallel, improving the prediction accuracy and training efficiency of the model. The improved Transformer model can fully exploit the advantages of both the Transformer and the TCN to better predict sequence data, and the improved model is called the “Trans-TCN” network structure.
However, the “Trans-TCN” network structure lacks explicit sequential information: although its positional encoding uses sine and cosine functions to model positions, this is not sufficient for complete modeling and some information is lost. Therefore, the data are first processed by a bi-directional long short-term memory network to capture the sequential dependencies before being passed to the multi-head self-attention module of the “Trans-TCN”. Because of the large sample size of the dataset, this paper uses a BiLSTM network composed of improved LSTMs to speed up learning; the improved BiLSTM neural network model is called the “Bi-LSTM” network architecture. Combining it with the “Trans-TCN” can therefore improve the prediction efficiency of the model.
In summary, after the input data are processed by the positional encoding layer, the sequence features are captured by the “Bi-LSTM” and then input to the “Trans-TCN”. Specifically, the data are first processed by the Transformer encoder, the features are then further extracted by the TCN layer, and finally the fully connected layer and activation function are used for dimensionality reduction. The main structure consists of a positional encoding layer, a Bi-LSTM layer, a Transformer encoder layer, a TCN layer, and a fully connected layer; the overall network structure is called “BLTT-FT”, as shown in Fig 2.
The main structure consists of a positional encoding layer, a Bi-LSTM layer, a Transformer encoder layer, a TCN layer, and a fully connected layer.
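For readers who wish to experiment with this layer ordering, the following PyTorch sketch reproduces the pipeline described above (positional encoding, Bi-LSTM, Transformer encoder, TCN, fully connected head). The layer sizes and head counts are illustrative assumptions, a standard nn.LSTM stands in for the improved Bi-LSTM, and a single convolution stands in for the TCN stack (a fuller residual-block sketch appears later in this section); this is not the exact configuration used in the paper.

import math
import torch
import torch.nn as nn

class SinusoidalPositionalEncoding(nn.Module):
    """Standard sine/cosine positional encoding added to the input sequence."""
    def __init__(self, d_model, max_len=5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe.unsqueeze(0))            # (1, max_len, d_model)

    def forward(self, x):                                      # x: (batch, seq_len, d_model)
        return x + self.pe[:, : x.size(1)]

class BLTTFT(nn.Module):
    """Sketch of the BLTT-FT layer ordering; sizes are illustrative, not the paper's settings."""
    def __init__(self, d_model=64, n_heads=4, horizon=9):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)                # univariate power series -> d_model
        self.pos_enc = SinusoidalPositionalEncoding(d_model)
        self.bilstm = nn.LSTM(d_model, d_model // 2, batch_first=True, bidirectional=True)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # One dilated causal convolution standing in for the TCN residual-block stack.
        self.tcn = nn.Sequential(nn.Conv1d(d_model, d_model, kernel_size=3, padding=2, dilation=1),
                                 nn.ReLU())
        self.head = nn.Sequential(nn.Linear(d_model, horizon), nn.Tanh())

    def forward(self, x):                                      # x: (batch, seq_len, 1)
        h = self.pos_enc(self.input_proj(x))
        h, _ = self.bilstm(h)                                  # bidirectional sequence features
        h = self.encoder(h)                                    # global self-attention
        h = self.tcn(h.transpose(1, 2))[..., : x.size(1)]      # causal conv; trim the right padding
        return self.head(h.transpose(1, 2)[:, -1])             # predict the next `horizon` steps

print(BLTTFT()(torch.randn(8, 120, 1)).shape)                  # torch.Size([8, 9])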
Trans-TCN network construction
This section describes in detail the improvement of TCN on the Transformer model, and the improved structure is called the “Trans-TCN” network structure. Although the Transformer model was originally designed for machine translation, it can still be applied to time series prediction by exploiting its architectural potential. Therefore, its encoder part is used as the basic model. The “Trans-TCN” network structure is shown in Fig 3.
The left half of the Trans-TCN network structure is the Transformer encoder, and the right half is the TCN and fully connected layer.
(1) The Transformer encoder consists of multiple encoder layers, each composed of two sub-layer blocks. The first sub-layer block contains a multi-head attention layer with a residual connection and layer normalization; the second contains a feed-forward layer with a residual connection and layer normalization. The core of the encoder layer is the multi-head attention mechanism, which is the key component enabling accurate prediction in the Trans-TCN network structure.
The multi-head attention mechanism consists of multiple self-attention heads. Self-attention can be described as mapping a query vector against a set of key-value pairs, where the query vectors (Q), key vectors (K), and value vectors (V) are linear transformations of the previous layer's outputs. In practice, the model computes the attention function on a set of queries simultaneously by packing them into a matrix Q and packing the key and value vectors into matrices K and V. The output is a weighted sum of the values, with weights derived from the compatibility between the query and key vectors. In self-attention, Q, K, and V are all derived from the same input $X$, the hidden representation produced by the previous layer, which is the input to the multi-head self-attention module. The query, key, and value matrices are calculated as shown in Eq (1):

$$Q = XW^{Q}, \qquad K = XW^{K}, \qquad V = XW^{V} \tag{1}$$

where $W^{Q}$, $W^{K}$, and $W^{V}$ are trainable projection matrices. The scaled dot-product attention is then computed as shown in Eq (2):

$$\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{2}$$

where $d_{k}$ is the dimension of the key vectors. In order to jointly attend to information from different representation subspaces at different positions, H parallel attention computations are performed, and the multi-head attention mechanism is computed as shown in Eq (3):

$$\mathrm{MultiHead}(Q,K,V) = \mathrm{Concat}(\mathrm{head}_{1},\ldots,\mathrm{head}_{H})W^{O}, \qquad \mathrm{head}_{i} = \mathrm{Attention}(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}) \tag{3}$$

The encoder layer also contains a feed-forward network, which consists of two linear transformations connected by a ReLU activation function and applied to each time step separately and identically. The linear transformations are shown in Eq (4):

$$\mathrm{FFN}(x) = \mathrm{ReLU}(xW_{1} + b_{1})W_{2} + b_{2} \tag{4}$$

where $W_{1}$ and $W_{2}$ are weight matrices and $b_{1}$ and $b_{2}$ are bias vectors.
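As a minimal numerical sketch of Eqs (1)-(3), the toy example below writes out the projections, the scaled dot-product attention, and the head concatenation explicitly; the tensor sizes and random projection matrices are illustrative only, and in practice PyTorch's nn.MultiheadAttention and nn.TransformerEncoderLayer implement the same computation.

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Eq (2): softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ V

# Eq (1) and Eq (3) written out for H parallel heads on a toy input X.
torch.manual_seed(0)
B, T, d_model, H = 2, 16, 64, 4
d_k = d_model // H
X = torch.randn(B, T, d_model)                   # encoder input; in self-attention Q, K, V share X
W_q, W_k, W_v = (torch.randn(H, d_model, d_k) for _ in range(3))
W_o = torch.randn(H * d_k, d_model)

heads = [scaled_dot_product_attention(X @ W_q[h], X @ W_k[h], X @ W_v[h]) for h in range(H)]
multi_head_out = torch.cat(heads, dim=-1) @ W_o  # Eq (3): Concat(head_1, ..., head_H) W^O
print(multi_head_out.shape)                      # torch.Size([2, 16, 64])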
(2) The TCN consists of multiple stacked residual blocks (RBs). Each RB is mainly composed of two stacked sub-blocks, each containing a dilated causal convolution (DCC) layer, a weight normalization layer, a ReLU activation, and a dropout layer. As shown in Fig 4A, the output of the Transformer encoder module is used as the input of the first TCN RB. Fig 4B shows the structure of the DCC.
A: Structure of the TCN residual block. B: Structure of the TCN dilated causal convolution.
Among them, the weight normalization layer and the dropout layer are mainly used to suppress network noise and improve training. To ensure that the input and output dimensions are the same, a 1x1 convolution module is introduced, and a deep TCN can then be constructed by stacking RBs. The combination of the DCC and the RB structure effectively improves the robustness and feature-learning ability of the TCN model.
Here is a brief introduction to the DCC. Recurrent neural networks process information sequentially and cannot achieve massively parallel computation like CNNs. The TCN is an improved CNN that compensates for the shortcomings of traditional CNNs in handling long-term data dependencies by introducing the DCC, which ensures that the model uses only historical data to make predictions and thus avoids interference from future data. The input sequence $x$ passes through the DCC to produce the output sequence $F(s)$, where d is the dilation factor. Together, these elements improve the generalization ability of the model, accelerate training, and effectively prevent overfitting. The DCC is calculated as shown in Eq (5):

$$F(s) = (x *_{d} f)(s) = \sum_{i=0}^{k-1} f(i) \cdot x_{s - d \cdot i} \tag{5}$$

where $*_{d}$ denotes the dilated convolution operation, d is the dilation factor, s is the output neuron (time step), k is the size of the convolutional kernel, f(i) is the ith element of the convolutional kernel, and $x_{s - d \cdot i}$ denotes the past input value being convolved.
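The following sketch shows a TCN residual block of the kind described above, built from weight-normalized dilated causal convolutions (Eq 5) with ReLU and dropout and a 1x1 convolution on the skip path. The kernel size, dilation schedule, and dropout rate are illustrative assumptions rather than the configuration used in the paper.

import torch
import torch.nn as nn

class Chomp1d(nn.Module):
    """Trims the right-hand padding so the convolution stays causal (uses only past samples)."""
    def __init__(self, chomp):
        super().__init__()
        self.chomp = chomp

    def forward(self, x):
        return x[:, :, : -self.chomp] if self.chomp > 0 else x

class ResidualBlock(nn.Module):
    """One TCN residual block: two dilated causal convolutions with weight normalization,
    ReLU and dropout, plus a 1x1 convolution on the skip path when channel counts differ."""
    def __init__(self, c_in, c_out, kernel_size=3, dilation=1, dropout=0.2):
        super().__init__()
        pad = (kernel_size - 1) * dilation
        def conv(cin, cout):
            return nn.Sequential(
                nn.utils.weight_norm(nn.Conv1d(cin, cout, kernel_size,
                                               padding=pad, dilation=dilation)),
                Chomp1d(pad), nn.ReLU(), nn.Dropout(dropout))
        self.net = nn.Sequential(conv(c_in, c_out), conv(c_out, c_out))
        self.downsample = nn.Conv1d(c_in, c_out, 1) if c_in != c_out else nn.Identity()
        self.relu = nn.ReLU()

    def forward(self, x):                            # x: (batch, channels, time)
        return self.relu(self.net(x) + self.downsample(x))

# Stacking blocks with dilations 1, 2, 4 doubles the receptive field at each level.
tcn = nn.Sequential(*[ResidualBlock(64, 64, dilation=d) for d in (1, 2, 4)])
print(tcn(torch.randn(8, 64, 120)).shape)            # torch.Size([8, 64, 120])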
In summary, the modifications made to the Transformer model in Trans-TCN are as follows:
1. The “Input Embedding” module is removed; it vectorizes words for machine translation, and the electric power data do not need to be vectorized.
2. The Transformer decoder is replaced with a TCN layer, a fully connected layer (FC-Linear), and a Tanh activation function.
3. The other decoder inputs are removed, leaving the encoder output as the only input to the subsequent layers.
Bi-LSTM network construction
The BiLSTM neural network model consists of two independent LSTM networks into which the sequence is fed in forward and reverse order, respectively, making both forward and backward information available to each LSTM unit.
(1) Recurrent Neural Networks (RNNs) are deep neural networks used to predict time series; however, RNNs suffer from the vanishing gradient problem, which the LSTM was introduced to solve. The LSTM has three special gates, i.e., the forget gate $f_t$, the input gate $i_t$, and the output gate $o_t$, as well as the memory cell, which regulates the information flow in the network. Here $C_{t-1}$ is the cell state of the previous hidden unit, $C_t$ is the cell state of this hidden unit, $h_{t-1}$ is the output of the previous cell, $h_t$ is the output of this cell, $x_t$ is the input at this moment, σ is the sigmoid function, and tanh is the hyperbolic tangent function.
Because of the large sample size of the dataset, the traditional LSTM, although it performs well, requires a large number of parameters to be learned for each gate, which results in a long running time. An improved LSTM network is therefore proposed to speed up learning. The structure of the LSTM network is shown in Fig 5, and the structure of the improved LSTM network is shown in Fig 6.
The LSTM has three special gates.
The improved LSTM structure retains only 1 gate.
The improved LSTM structure retains only one gate, which effectively reduces the number of parameters while still extracting the hidden temporal information, thereby ensuring the accuracy of the model. The specific formula is shown in Eq (6), where $X_t$ is the input, $H_{t-1}$ is the output at the previous moment, $W_{xg}$, $W_{hg}$, and $P_{cg}$ are the corresponding weight matrices, $b_g$ is the bias vector, and $g_t$ is the output of the gate.
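Since Eq (6) itself is not reproduced here, the following sketch shows one plausible single-gate cell that is consistent with the symbols listed above ($W_{xg}$, $W_{hg}$, the peephole weight $P_{cg}$, the bias $b_g$, and the gate $g_t$). The candidate-state weights W_xc and W_hc and the exact update rule are assumptions made for illustration only, not the authors' definition.

import torch
import torch.nn as nn

class SingleGateLSTMCell(nn.Module):
    """Hypothetical single-gate cell in the spirit of the improved LSTM described above.
    Only the symbols listed for Eq (6) are taken from the text; the candidate-state
    weights W_xc/W_hc are an added assumption."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.W_xg = nn.Linear(input_size, hidden_size, bias=False)
        self.W_hg = nn.Linear(hidden_size, hidden_size, bias=False)
        self.P_cg = nn.Parameter(torch.zeros(hidden_size))     # peephole weight on c_{t-1}
        self.b_g = nn.Parameter(torch.zeros(hidden_size))
        self.W_xc = nn.Linear(input_size, hidden_size, bias=False)
        self.W_hc = nn.Linear(hidden_size, hidden_size, bias=True)

    def forward(self, x_t, state):
        """Mirrors nn.LSTMCell's interface: takes (h_{t-1}, c_{t-1}) and returns (h_t, c_t)."""
        h_prev, c_prev = state
        g_t = torch.sigmoid(self.W_xg(x_t) + self.W_hg(h_prev) + self.P_cg * c_prev + self.b_g)
        c_tilde = torch.tanh(self.W_xc(x_t) + self.W_hc(h_prev))   # candidate cell state
        c_t = g_t * c_prev + (1.0 - g_t) * c_tilde                  # one gate both forgets and writes
        h_t = torch.tanh(c_t)
        return h_t, c_t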
(2) The BiLSTM model is built from the improved LSTM described above: the output vectors (i.e., the extracted feature vectors) of the forward LSTM and the backward LSTM are concatenated to form the final output vector, as shown in Eq (7):

$$Z_{t} = \overrightarrow{h_{t}} \oplus \overleftarrow{h_{t}} \tag{7}$$

where $\overrightarrow{h_{t}}$ is the output of the forward LSTM, $\overleftarrow{h_{t}}$ is the output of the backward LSTM, and $\oplus$ denotes vector concatenation.
With BiLSTM, the feature representation obtained at moment t contains information from both the preceding and the following moments, which yields better feature extraction efficiency and performance than a single LSTM structure. The model structure is shown in Fig 7, where $M_t$ is the weight matrix and $Z_t$ is the output vector. The BiLSTM model composed of the improved LSTM is called “Bi-LSTM” in this paper.
Bi-LSTM network structure composed of improved LSTMs.
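A minimal sketch of the bidirectional arrangement in Fig 7 and Eq (7) is shown below: one cell runs left-to-right, another right-to-left, and their hidden states are concatenated at every step. nn.LSTMCell is used as the default cell so the snippet runs on its own; the SingleGateLSTMCell sketched above can be passed in instead, and PyTorch's built-in nn.LSTM(bidirectional=True) is the off-the-shelf equivalent.

import torch
import torch.nn as nn

class BiLSTMLayer(nn.Module):
    """Runs one recurrent cell forward and another backward over the sequence and
    concatenates the two hidden states at every step (Eq 7)."""
    def __init__(self, input_size, hidden_size, cell=nn.LSTMCell):
        super().__init__()
        self.fwd = cell(input_size, hidden_size)
        self.bwd = cell(input_size, hidden_size)
        self.hidden_size = hidden_size

    def _run(self, cell, steps):
        batch = steps[0].size(0)
        h = steps[0].new_zeros(batch, self.hidden_size)
        c = steps[0].new_zeros(batch, self.hidden_size)
        outs = []
        for x_t in steps:
            h, c = cell(x_t, (h, c))
            outs.append(h)
        return outs

    def forward(self, x):                                    # x: (batch, seq_len, input_size)
        steps = list(x.unbind(dim=1))
        fwd_out = self._run(self.fwd, steps)                 # forward pass over the sequence
        bwd_out = self._run(self.bwd, steps[::-1])[::-1]     # backward pass, re-aligned in time
        return torch.stack([torch.cat(pair, dim=-1)          # Z_t = [h_t(forward) ; h_t(backward)]
                            for pair in zip(fwd_out, bwd_out)], dim=1)

print(BiLSTMLayer(1, 32)(torch.randn(4, 50, 1)).shape)       # torch.Size([4, 50, 64])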
Data preparation and processing
Dataset construction
Experimental sampling for fabric tearing performance testing.
This paper is based on the international standard ISO 13937-2:2000 “Textile fabrics-Tear properties-Part 2: Determination of tear strength of trouser specimens (single seam)” to carry out the trouser-specimen tearing performance test. For each laboratory fabric sample, two groups of specimens should be cut, one warp (radial) group and one weft group, with rectangular specimens and five specimens per group. No two specimens may contain the same lengthwise or widthwise yarns, and creases, fabric edges, and unrepresentative areas should be avoided; no specimen may be taken within 150 mm of the fabric edge. An example of specimens cut from a laboratory sample is shown in Fig 8A. Trouser-shaped specimens were (200±2) mm long and (50±1) mm wide. Each specimen was slit from the middle of the width direction with a cut (100±1) mm long parallel to the length direction, and the end point of tearing was marked (25±1) mm from the uncut end in the middle of the specimen width, as shown in Fig 8B. The denser yarns are the warp yarns and the less dense yarns are the weft yarns, as shown in Fig 8C. The principle of the trouser-specimen tearing test is to clamp the two legs of the specimen so that the slit line forms a straight line between the upper and lower fixtures, set the gauge length of the fabric tear tester to 100 mm and the tensile rate to 100 mm/min, and then start the instrument so that tensile force is applied in the direction of the slit, as shown in Fig 8D. All samples were cut from the middle of wrinkle-free and undamaged fabric bolts, avoiding areas within 150 mm of the fabric edges as well as defective regions such as yarn joints and stains. For samples taken from the same fabric, the warp and weft direction samples do not contain overlapping yarns, ensuring the independence of each sample. After sampling, the samples were pre-treated in a constant temperature and humidity chamber (20±2 °C, 65±4% RH) for 24 hours to eliminate the impact of environmental factors on the physical properties of the fabrics.
A: Example of a specimen cut from a laboratory sample. B: Standardized specimen charts. C: Detail of warp and weft yarns. D: Schematic diagram of tearing.
Data acquisition.
The fabric tearing performance test uses a constant-rate-of-extension (CRE) tensile tester to carry out the trouser-specimen tearing test. The CRE tester used was an Instron 5967 double-column bench-top tester.
For the fabric tearing performance tests, the power usage of the fabric tear tester, including current, voltage, and power parameters, was monitored in real time; one experiment consisted of five warp and five weft tearing tests for each fabric. During the data perception of a single tearing test, the CRE tester passes through four device states over time: first, the standby state, waiting for the experiment to begin (low-power mode); next, the sample change state, in which the specimen is installed (low-power mode); then the tensile state, in which fabrics with different characteristics consume different amounts of power; and finally the reset state, in which the amount of deformation differs between fabrics, so resetting the device's pneumatic clamp consumes a larger amount of power, as shown in Table 1:
According to the working behaviour of the CRE tester, the electrical parameters need to be collected throughout the whole experimental process; power consumption is largest when the equipment is stretching the specimen. The international standard specifies that, for a CRE tester, if force and elongation records are obtained through a data acquisition chip and software, the data collection frequency should be at least 8 Hz. According to the Nyquist sampling theorem, to recover the original signal from the sampled signal without distortion, the sampling frequency should be greater than twice the highest frequency of the signal, so the power parameters were designed to be collected at a frequency of not less than 16 Hz, which can reflect changes in the power load of the monitored equipment in real time. In the predictions of this paper, only the electric power is used as the prediction variable, because the electric power most clearly reflects the complete process of the fabric tearing performance test.
Environmental temperature and humidity affect the physical properties of fabrics: in a high-temperature environment, the strength and toughness of the fabric decrease, and in a humid environment, the softness and elasticity of the fabric decrease, so fabric tearing performance testing requires specific temperature and humidity conditions. The international standard ISO 139-1973 “Textiles-Standard atmospheres for conditioning and testing” stipulates an atmospheric temperature of 20.0 °C with a tolerance of ±2.0 °C and a relative humidity of 65.0% with a tolerance of ±4.0%. The electric power data of the experimental processes of different fabric material samples were collected within a temperature and humidity range suitable for the experiments, and the predictive analysis is performed on each individual experimental unit in order to improve the representativeness and generalization of the dataset.
Preprocessing
Missing value processing.
There are some missing values in the electric power parameters monitored during the trouser-specimen tearing performance tests (caused, for example, by equipment failures or data acquisition problems), which may reduce prediction accuracy; handling the missing values makes the dataset more complete and improves the availability of the data and the reliability of the model. Because of the sequential nature of the data, missing values should not simply be deleted, so this paper uses Last Observation Carried Forward (LOCF) to fill each missing value with the value from the previous time step.
Smoothing.
Smoothing the data with the mean of the preceding and following values helps eliminate isolated noise points in the time series, handles minor missing-value problems, smooths the training data, removes spikes, and enhances the consistency and stability of the data.
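A minimal pandas sketch of these two preprocessing steps is shown below: LOCF filling followed by smoothing each point with the mean of its neighbours. The 3-point centred window is an assumption made for illustration; the paper does not state the window size.

import pandas as pd

def preprocess_power(series: pd.Series) -> pd.Series:
    """LOCF filling of missing samples followed by mean smoothing of neighbouring values."""
    filled = series.ffill()                   # LOCF: carry the previous reading forward
    # Centred 3-point rolling mean ("mean of front and back values"); window size is assumed.
    return filled.rolling(window=3, center=True, min_periods=1).mean()

# Toy power trace with one missing reading and one isolated spike.
raw = pd.Series([10.1, 10.2, None, 10.3, 25.0, 10.2, 10.1])
print(preprocess_power(raw).round(2).tolist())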
Data leakage processing.
Data leakage is a phenomenon in which information from the test set or from future data is used when training a predictive model, causing the model to appear to perform well during evaluation but to lose predictive power on new data in real-world applications. To avoid this problem, the test set and training set must be divided correctly. Before training the model, the original dataset is divided into a training set and a test set using a random partition; the model parameters are trained and adjusted on the training set, and the test set is used to evaluate the model's generalization ability. The datasets were divided into training and test sets in the following proportions: 70%/30%, 60%/40%, 80%/20%, and 90%/10%.
Event-driven rolling window segmentation.
In the timing prediction task of this paper, the timing data show an overall periodic trend, but the actual duration of each cycle is not fixed. In order to solve this problem, a rolling window concept that integrates the principle of discrete-time state machines and dynamic adjustment is innovatively proposed, focusing on the prediction of the multi-step state that occurs after a “change-tensile-reset” state. Specifically, a state window W is defined, the length L of which aims to cover an expected cycle length and is used to contain all relevant information from the completion of the first “change-tensile-reset” state until the present moment. Yet while the overall trend is cyclical, the actual time span T of each cycle can fluctuate. Therefore, the update of the state window W does not follow a fixed time sequence but is dynamically adjusted according to the occurrence of the “change-tensile-reset” state, which ensures that whenever such an operation occurs, the dataset Dt contained in the window Wn is strictly limited to the complete period from the end of the previous state to the beginning of the current state.
Next, the prediction model is trained on the data in the window Wn to realize multi-step prediction, i.e., to predict the “change-tensile-reset” state after the current state. This process can be expressed as $M_{n} = \mathrm{Train}(D_{t})$, where the Train function performs the model training and $M_{n}$ is the trained model, which can perform multi-step prediction while adapting to changes in cycle length.
In the prediction stage, the model $M_{n}$ predicts the “change-tensile-reset” state at the moment $t_{n} + x$ based on the latest data $D(t_{n})$, where x indicates the prediction range (e.g., 1 step), i.e., $\hat{S}_{t_{n}+x} = \mathrm{predict}_{x}(M_{n}, D(t_{n}))$, where the $\mathrm{predict}_{x}$ function performs the x-step prediction. Although the cycle length T may change, by dynamically adjusting the state window Wn the model is able to capture and learn from these changes, thus improving the accuracy and robustness of the prediction; the specific training process of the model is detailed in “Dataset construction”.
Every time the “change-tensile-reset” state is completed, the state window scrolls to the next state point and the model is updated, forming a closed loop of continuous iteration and optimization. This approach makes full use of the state information in the data to ensure that the model accurately captures and predicts the dynamics of the “change-tensile-reset” cycle, providing reliable multi-step predictions even in the face of irregular cycle lengths.
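The loop below is a minimal sketch of this event-driven rolling window. The state_labels sequence, model.update, and model.predict are placeholders standing in for the state perception system's labelling and the retraining and multi-step prediction steps described above; they are assumptions for illustration, not the paper's implementation.

def rolling_event_windows(state_labels):
    """Each time a complete change-tensile-reset cycle ends, the window boundary advances
    to that point, so window W_n always covers one full cycle. `state_labels` is assumed
    to mark each sample as 'change', 'tensile', 'reset', or 'standby'."""
    windows, start = [], 0
    for t in range(1, len(state_labels)):
        # A cycle boundary is the first sample after a 'reset' segment ends.
        if state_labels[t - 1] == "reset" and state_labels[t] != "reset":
            windows.append((start, t))        # dataset D_t for window W_n
            start = t                         # roll the window to the next state point
    return windows

# Usage sketch (hypothetical helpers):
# for (s, e) in rolling_event_windows(labels):
#     model.update(power[s:e])               # incremental learning on the latest cycle
#     next_cycle = model.predict(steps=9)    # predict the next change-tensile-reset cycle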
Hierarchical 3-fold cross-validation design.
To eliminate the accidental bias caused by a single data division, supplementary hierarchical 3-fold cross-validation was conducted: with “fabric elasticity grade” as the hierarchical variable (the total dataset includes 150 types of high-elasticity fabrics, 170 types of medium-elasticity fabrics, and 160 types of low-elasticity fabrics, see Section 4.1 for data description), each fold contains 50 types of high-elasticity fabrics, 57 types of medium-elasticity fabrics, and 53 types of low-elasticity fabrics (160 types in total per fold). The division is based on fabric types without overlap, ensuring that the test set consists of “completely unseen new samples”. For each fold, training (2 merged folds as the training set, 320 types of fabrics) and testing (1 fold as the test set, 160 types of fabrics) are performed independently, with the same parameters and independently initialized weights to avoid cross-fold interference. This design is complementary to the rolling window: the former verifies the model’s adaptability to “brand-new fabrics”, while the latter simulates the real-time scenario of “continuous addition of new data”. Together, they ensure the reliability of evaluation from two dimensions: sample diversity and temporal dynamics, and both strictly adhere to the principle that there is no information overlap between the training set and the test set.
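The hierarchical 3-fold design described above, with fabric elasticity grade as the stratification variable, corresponds to a stratified split at the fabric-type level; a minimal scikit-learn sketch is shown below. The random seed is illustrative, and the fold sizes follow directly from the 150/170/160 composition of the dataset.

import numpy as np
from sklearn.model_selection import StratifiedKFold

fabric_ids = np.arange(480).reshape(-1, 1)                 # 480 fabric types in total
elasticity = np.array(["high"] * 150 + ["medium"] * 170 + ["low"] * 160)

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)   # random_state is illustrative
for fold, (train_idx, test_idx) in enumerate(skf.split(fabric_ids, elasticity)):
    # Splitting at the fabric-type level keeps every experiment of a given fabric on one side,
    # so each test fold contains only completely unseen fabrics.
    print(f"fold {fold}: {len(train_idx)} training fabrics, {len(test_idx)} test fabrics")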
Experiment and parameter setting
Data description
In the process of fabric tearing performance testing experiments, the equipment used is the Instron 5967 double-column bench-top tester, and in the whole experiment, the working state of the equipment can be divided into four states, which are: standby state, sample change state, tensile state, and reset state, as shown in Table 1 of “Research methods”. Because the standby state has little correlation with the fabric tearing experiment, only the remaining three states are predicted in this paper.
The power visualization time series plot of the fabric tearing performance testing experiment is shown in Fig 9 below, where the shaded portion indicates the different states in the fabric tearing performance testing experiment. Fig 9A shows the changeover state, Fig 9B shows the stretching state, and Fig 9C shows the reset state.
A: Sample change state. B: Tensile state. C: Reset state.
For the time-series data in this paper, the principle of discrete-time state machines and the concept of dynamically adjusted rolling windows are integrated, focusing on predicting the multi-step state that occurs after a “change-tensile-reset” state. The “change-tensile-reset” sequence is treated as a single whole state; every time it is completed, the state window scrolls to the next state point and the model is updated accordingly, forming a closed loop of continuous iteration and optimization.
Electrical power data were collected from the warp and weft tear performance tests of the same fabric at a frequency of 20 Hz. Each prediction focuses on the results of the next “change-tensile-reset” operation. As shown in Fig 10, x1 represents one “change-tensile-reset” operation, which generates a sequence of temporal features as input to the model.
x1 represents a “change-tensile-reset” operation, which generates a sequence of temporal features as input to the model.
Based on the power data from tearing experiments on fabrics of three elasticity categories (high, medium, and low), the datasets were divided into three categories, as shown in Table 2 below:
Model training and parameter setting
In this study, the experimental system is running on Windows 11 operating system with Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz processor, 16.0GB of system memory, Python version 3.9 installed in the experimental environment, and Pytorch version 1.13.1.
The BLTT-FT model proposed in this paper is first trained offline. The datasets used are datasets 1, 2, and 3 described in Table 2. Taking dataset 1 as an example, the whole dataset collects, in chronological order, the electric power signals acquired during tearing performance tests of fabrics with high elasticity. There are five warp-direction specimens and five weft-direction specimens for each fabric. In order to increase the generalization of the experiments, 150, 170, and 160 types of fabrics with high, medium, and low elasticity, respectively, were used, and ten tests (warp and weft) were performed for each type; this covers most fabric types and ensures the diversity of the fabrics, achieving wider applicability. A long data segment consists of the electric power signals collected during the tests of all ten specimens; a medium data segment consists of the signals collected during the warp-direction or weft-direction tests, respectively; and a short data segment consists of the signals collected during the first warp group and the first weft group. Thus each dataset is categorized into long, medium, and short data segments, and each dataset is fed in parallel to the BLTT-FT model for offline training. In the online application, once the first set of electric power data from the experimental process is obtained, every time the “change-tensile-reset” operation is completed the state window scrolls to the next state point and the model is updated accordingly; this incremental learning is repeated and the database is updated, forming a closed loop of continuous iteration and optimization that is used to predict the next electric power data. The specific flow is shown in Fig 11.
Flowchart of BLTT-FT model for offline training and online application.
The parameters for experimental training are as follows: epoch = 100, batch size is set to 32, the initial learning rate of Adam is set to 0.00001, the ReLU activation function is adopted, the loss function is the MSE function, and predictions are made for the ranges of 1, 4, and 9 steps respectively. The cross-validation parameters are consistent with the above, with only the data division method adjusted: the training set and test set of each fold strictly follow the hierarchical 3-fold rules, and the model parameters of each fold are initialized independently to avoid cross-fold information leakage. The parameters of the proposed model are shown in Table 3.
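A minimal offline training loop using the hyper-parameters listed above (100 epochs, batch size 32, Adam with an initial learning rate of 1e-5, MSE loss) is sketched below; `model` and `train_loader` are assumed to be the BLTT-FT network and a DataLoader yielding (history window, next 1/4/9 steps) pairs, and are not the authors' exact implementation.

import torch
import torch.nn as nn

def train(model, train_loader, epochs=100, lr=1e-5, device="cpu"):
    """Offline training sketch with MSE loss and Adam, as described in the text."""
    model.to(device)
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        model.train()
        running = 0.0
        for x, y in train_loader:                 # x: history window, y: next 1/4/9 steps
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            running += loss.item() * x.size(0)
        print(f"epoch {epoch + 1}: train MSE = {running / len(train_loader.dataset):.6f}")
    return model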
In the offline training phase, the BLTT-FT model completed 100 epochs of training on a Windows 10 system (Intel i5-8250U CPU, 16 GB RAM) in a total of 8 hours, and the test set loss curve closely follows the training set curve, as shown in Fig 12, indicating that the model is not overfitted. The loss curve of BLTT-FT not only decreases considerably but also converges in a shorter time, and the test set loss curve stays close to the training set curve, indicating excellent training results.
The loss function curve shows the trend of the loss value of the model during the training process, which helps to determine whether the model is overfitting or underfitting in order to adjust the model structure and training strategy.
In the online application, whenever the “change-tensile-reset” operation is completed, the model performs incremental learning by rolling the state window forward. A single update takes about 200 ms, and the prediction error after the update is reduced by 3.2% on average compared with that before the update, demonstrating fast adaptation to real-time data. The performance data of the two modes are shown in Table 4.
Experimental results and discussion
Indicators for model evaluation
Mean Absolute Error (MAE), Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and the coefficient of determination (R2) were used as the evaluation criteria for the method; these metrics are calculated as shown in Eq (8):

$$\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}\left|y_{t} - \hat{y}_{t}\right|, \quad \mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n}\left(y_{t} - \hat{y}_{t}\right)^{2}, \quad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(y_{t} - \hat{y}_{t}\right)^{2}},$$
$$\mathrm{MAPE} = \frac{100\%}{n}\sum_{t=1}^{n}\left|\frac{y_{t} - \hat{y}_{t}}{y_{t}}\right|, \quad R^{2} = 1 - \frac{\sum_{t=1}^{n}\left(y_{t} - \hat{y}_{t}\right)^{2}}{\sum_{t=1}^{n}\left(y_{t} - \bar{y}\right)^{2}} \tag{8}$$

In the above equations, $\hat{y}_{t}$ is the predicted value, $y_{t}$ is the actual value, $\bar{y}$ is the mean of the actual values, and n is the number of samples. Generally speaking, the smaller the values of MAE, MSE, RMSE, and MAPE, and the closer R2 is to 1, the smaller the error between predicted and actual power, indicating better prediction performance of the model.
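These metrics can be computed directly from a vector of predictions, as in the NumPy sketch below; the MAPE expression assumes no zero power readings.

import numpy as np

def evaluate(y_true, y_pred):
    """Computes the Eq (8) metrics (MAE, MSE, RMSE, MAPE in %, R^2) for one prediction run."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(err / y_true))        # assumes no zero power readings
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape, "R2": r2}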
Comparative experiments
In order to verify the effectiveness of the BLTT-FT model in predicting power changes during the fabric tearing performance tests, five common prediction models were selected for comparative analysis: the LSTM, BiLSTM, Transformer, TCN, and Transformer-LSTM models. The parameters of each comparison model are shown in Table 5. The power data from tearing experiments on fabrics of three elasticity categories (high, medium, and low) were collected at an atmospheric temperature of 20.0-22.0 °C and a relative humidity of 61.0-69.0% and divided into three datasets, in which the electrical power data of the experimental process were predicted for the warp and weft fabrics of the same category. Considering that the sample data are relatively limited, a 9:1 ratio was used to divide the training and test sets to ensure that the model could be adequately trained.
The sequence data of each data segment in the dataset were input into the model for training, and the loss function descent curves of the training and test sets during the training of the five models were obtained as shown in Fig 13.
A: LSTM’s loss curve. B: BiLSTM’s loss curve. C: Transformer’s loss curve. D: TCN’s loss curve. E: Transformer-LSTM’s loss curve.
As seen in Fig 13, for the LSTM, BiLSTM, Transformer, and TCN models, the training set loss curves still decrease slightly in the late iterations while the test set loss curves increase slightly, which may lead to overfitting if the number of iterations increases further. The Transformer-LSTM model's loss curves do not show this behaviour, and its validation set loss decreases more than that of the LSTM, BiLSTM, Transformer, and TCN models, but its fitting time is relatively long. From Fig 12, it can be seen that the loss curve of BLTT-FT not only decreases more but also converges in a shorter time, and its test set loss curve is closer to the training set curve, indicating a better training result.
As can be seen in Tables 6, 7, and 8, across datasets 1, 2, and 3 the average RMSE for 1-step prediction is 0.0691, the average MAE is 0.0467, the average MAPE is as low as 2.31%, and the average R2 is as high as 0.9865; for 4-step prediction the average RMSE is 0.0911, the average MAE is 0.0632, the average MAPE is as low as 3.12%, and the average R2 is as high as 0.9600; for 9-step prediction the average RMSE is 0.1040, the average MAE is 0.0729, the average MAPE is as low as 3.74%, and the average R2 is as high as 0.9252. On these three datasets, the proposed method achieved high accuracy in predicting the electric power parameters during fabric tearing performance tests for different elasticity levels. As the prediction step increases, the accuracy of the BLTT-FT based method decreases but remains high. Moreover, the proposed method uses only the previous 1-step electric power data to achieve high-precision prediction for the next 9 steps. These results show that the BLTT-FT based method has excellent performance and strong generalization ability in predicting electric power parameters during fabric tearing experiments with different elasticity levels.
The multi-step prediction errors of BLTT-FT in Tables 6, 7, and 8 above are visualized in Fig 14A, 14B, and 14C. The smaller the values of MAE, MSE, RMSE, and MAPE, and the closer R2 is to 1, the smaller the error between the predicted and actual power, indicating better prediction performance. It can be seen that the BLTT-FT model has an excellent prediction effect on the three datasets and a stable prediction effect across different datasets.
A: Multi-step error visualization plot for dataset 1. B: Multi-step error visualization plot for dataset 2. C: Multi-step error visualization plot for dataset 3.
For datasets 1, 2, and 3, the comparison between the prediction curves of the BLTT-FT based method and the actual curves is shown in Figs 15, 16, and 17, which compare the prediction results of different models over different prediction ranges. Figs 15A, 15B, 16A, 16B, 17A, and 17B show the 1-step predictions for the longitudinal and latitudinal directions, Figs 15C, 15D, 16C, 16D, 17C, and 17D show the 4-step predictions for the longitudinal and latitudinal directions, and Figs 15E, 16E, and 17E show the 9-step predictions for a whole set of fabrics. In these datasets, from 1-step to 9-step prediction, the error between the predicted and actual curves increases as the prediction range increases, because a larger prediction range means more missing information and lower prediction accuracy. However, the proposed BLTT-FT based method still achieves a high fit between the actual and predicted curves at 1, 4, and 9 steps.
A: Longitudinal direction at step 1. B: Latitudinal direction at step 1. C: Longitudinal direction at step 4. D: Latitudinal direction at step 4. E: Overall at step 9.
A: Longitudinal direction at step 1. B: Latitudinal direction at step 1. C: Longitudinal direction at step 4. D: Latitudinal direction at step 4. E: Overall at step 9.
A: Longitudinal direction at step 1. B: Latitudinal direction at step 1. C: Longitudinal direction at step 4. D: Latitudinal direction at step 4. E: Overall at step 9.
Based on the above analysis, the proposed model shows excellent performance in terms of prediction accuracy and stability, especially in the long-term prediction task.
As shown by the prediction curves and evaluation indexes for prediction ranges of 1, 4, and 9 steps, the BLTT-FT prediction model proposed in this paper performs excellently: it makes it possible to carry out only the first group of tearing tests and predict the results of the following groups, simplifying the experimental process and improving the efficiency and accuracy of the experiments.
Significance test analysis
In the performance assessment of prediction models, it is difficult to determine whether the differences between datasets are statistically significant by comparing only the magnitudes of the prediction error metrics. To scientifically assess the performance of the model on different datasets, this study performs significance tests on the MAE, RMSE, and MAPE metrics of the three datasets under one-step, four-step, and nine-step prediction. The prediction error metrics of the different datasets at each prediction step are visualized with box plots to present the distribution characteristics of the data. Figs 18, 19, and 20 show the distributions of MAE, RMSE, and MAPE, respectively, for the different datasets at each prediction step.
The figure shows the distribution of MAE at steps 1, 4 and 9 for the three datasets.
The figure shows the distribution of RMSE at steps 1, 4 and 9 for the three datasets.
The figure shows the distribution of MAPE at steps 1, 4 and 9 for the three datasets.
From Fig 18, it can be seen that dataset 2 has a relatively low median MAE in one-step, four-step and nine-step prediction, indicating that its prediction error is comparatively small and its predictions are more accurate in terms of MAE. At the same time, the degree of dispersion (box height and whisker length) varies among datasets at different prediction steps, reflecting differences in the stability of the prediction errors. For example, dataset 3 has a larger box height in nine-step prediction, indicating greater dispersion and poorer stability of its prediction error.
Observing Fig 19, dataset 3 has the lowest median RMSE in one-step prediction and a relatively low median RMSE in four-step prediction. In nine-step prediction, the median RMSEs of datasets 1, 2 and 3 are close to one another, although dataset 2 contains high outliers and dataset 3 contains low outliers. This indicates that dataset 3 has an advantage in terms of RMSE, especially in one-step prediction, where its prediction error is relatively small, whereas in nine-step prediction the differences among the datasets are not obvious and outliers are present.
As shown in Fig 20, dataset 2 has the lowest median MAPE at almost every prediction step. In one-step prediction, its median MAPE is clearly lower than that of the other datasets, and it also maintains a low median MAPE in four- and nine-step prediction. This indicates that the prediction error of dataset 2 is relatively small under the MAPE metric and that the model predicts these data more accurately in this respect.
One-way analysis of variance (ANOVA) was used to test the significance of the MAE, RMSE and MAPE indicators for different datasets at each prediction step, and the test results are summarized in Table 9.
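A test of this kind can be reproduced with SciPy's `f_oneway`; in the sketch below the sample arrays are placeholders standing in for the per-run error values behind Table 9, not the study's actual data.

```python
from scipy import stats

# Placeholder samples: per-run MAE of the three datasets at one prediction step.
# They stand in for the actual error values behind Table 9.
mae_dataset1 = [0.051, 0.048, 0.053, 0.049]
mae_dataset2 = [0.042, 0.040, 0.044, 0.041]
mae_dataset3 = [0.057, 0.060, 0.055, 0.058]

f_stat, p_value = stats.f_oneway(mae_dataset1, mae_dataset2, mae_dataset3)  # one-way ANOVA across datasets
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 indicates a significant between-dataset difference
```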
The ANOVA results show that there are significant differences (p < 0.05) between the datasets at each prediction step (1, 4 and 9 steps) for all three metrics, MAE, RMSE and MAPE. Statistically, this means that the differences in prediction error between datasets at each prediction step are not due to random factors but are real and significant. The F-value reflects the ratio of the between-group variance to the within-group variance: the larger the F-value, the greater the differences between datasets relative to the differences within them, which further supports the conclusion that the datasets differ significantly in the corresponding metrics and prediction steps.
A one-way ANOVA was conducted to test the MAE, RMSE and MAPE indicators of different samples under each prediction step, and the results showed that there were extremely significant differences among different samples under all indicators and prediction steps.
The significance of the MAE, RMSE and MAPE metrics of the three datasets under 1-, 4- and 9-step prediction was tested by ANOVA, and the results show extremely significant differences for all metrics and prediction steps, indicating that the differences between datasets are not random. The box plot visualization shows that the dataset corresponding to this algorithm performs well on the error metrics; for example, its MAE is clearly lower than that of the other datasets at most prediction steps. Combined with the significant ANOVA results, this strongly demonstrates that the present algorithm differs significantly from other methods in prediction accuracy and is superior in terms of statistical significance. The positions and dispersion of the corresponding boxes in the box plots further support its superiority in prediction accuracy and stability from the perspective of the intuitive error distribution, providing a solid data-based and statistical foundation for the validity and reliability of the algorithm in practical applications.
Ablation experiment
To comprehensively verify the performance contribution of each module in the BLTT-FT model, a hierarchical, progressive ablation experiment was designed. First, to examine the necessity of the core modules, Bi-LSTM, Transformer and TCN are removed in turn for comparison: comparing BLTT-FT with Trans-TCN, and Bi-LSTM-TCN with TCN, verifies the key role of the Bi-LSTM module in capturing bidirectional sequence dependencies; comparing BLTT-FT with Bi-LSTM-TCN, and Trans-TCN with TCN, highlights the global attention advantage of the Transformer module in modeling long-distance dependencies; comparing BLTT-FT with Bi-LSTM-Transformer, and Bi-LSTM-TCN with Bi-LSTM, clarifies the effectiveness of the TCN layer for local temporal feature extraction. Second, to verify the synergistic effect of combining multiple modules, BLTT-FT is compared with the single-module Transformer, Bi-LSTM and TCN models to analyze the overall improvement brought by multi-module fusion. The model naming is given in Table 10, and the experimental results are reported in Tables 11, 12 and 13.
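Conceptually, the variants in Table 10 can be viewed as configurations of one backbone in which individual modules are switched on or off. The PyTorch sketch below illustrates this idea only; the layer sizes, the single dilated convolution standing in for a full residual TCN block, and the class name are assumptions rather than the study's implementation.

```python
import torch.nn as nn

class AblationVariant(nn.Module):
    """Schematic backbone whose modules can be switched off to form the variants in Table 10."""

    def __init__(self, use_bilstm=True, use_transformer=True, use_tcn=True,
                 in_dim=1, hidden=64):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden)  # shared input projection
        self.bilstm = (nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
                       if use_bilstm else None)
        self.encoder = (nn.TransformerEncoder(
                            nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
                            num_layers=1)
                        if use_transformer else None)
        # A single dilated convolution stands in here for a full residual TCN block.
        self.tcn = (nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2)
                    if use_tcn else None)
        self.head = nn.Linear(hidden, 1)  # fully connected output layer

    def forward(self, x):  # x: (batch, time, in_dim)
        x = self.proj(x)
        if self.bilstm is not None:
            x, _ = self.bilstm(x)              # bidirectional sequence dependencies
        if self.encoder is not None:
            x = self.encoder(x)                # global self-attention over the whole sequence
        if self.tcn is not None:
            x = self.tcn(x.transpose(1, 2)).transpose(1, 2)  # dilated convolution over time
        return self.head(x[:, -1])             # predict from the last time step

# Examples: AblationVariant() ~ BLTT-FT; AblationVariant(use_bilstm=False) ~ Trans-TCN;
# AblationVariant(use_transformer=False) ~ Bi-LSTM-TCN; AblationVariant(False, False, True) ~ TCN.
```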
Based on the ablation results on the three datasets, the BLTT-FT model shows clear performance advantages over single-module and two-module combination models at different prediction steps (1, 4 and 9), verifying the key role of multi-module synergy in capturing multi-step temporal features. In the 1-step prediction on Dataset1, the MAE of the full model (0.0489 mm) is reduced by 57.9%, 53.4% and 50.3% compared with Trans-TCN (0.1162 mm), Bi-LSTM-TCN (0.1051 mm) and Bi-LSTM-Transformer (0.0984 mm), respectively, showing the immediate effectiveness of combining bidirectional temporal modeling with local feature extraction. In 9-step prediction, the MAE of the full model (0.0762 mm) is 57.0%, 47.5% and 31.5% lower than that of Trans-TCN (0.1775 mm), Bi-LSTM-TCN (0.1452 mm) and Bi-LSTM-Transformer (0.1112 mm), respectively, highlighting the irreplaceable role of the global attention mechanism in modeling long-distance dependencies.
In Dataset2, as the prediction step increases from 1 to 9, the RMSE of the full model rises from 0.0865 mm to 0.1079 mm, an increase of only 24.7%, whereas the RMSE of Trans-TCN rises from 0.1608 mm to 0.2703 mm, an increase of 68.8%, indicating that the collaboration of multiple modules in the full model effectively suppresses error accumulation in long-sequence prediction. Similarly, in the 9-step prediction on Dataset3, the R2 of the full model (0.9124) is 24.1%, 22.4% and 25.6% higher than that of Bi-LSTM (0.7352), Transformer (0.7457) and TCN (0.7267), respectively, verifying the advantages of bidirectional temporal modeling and global-local feature fusion in modeling long and complex sequences.
Cross-dataset comparisons show that BLTT-FT consistently maintains the lowest error and highest accuracy in multi-step prediction regardless of the prediction step length, e.g., in the 4-step prediction of Dataset2, its MAPE (2.26%) is reduced by 69.5% compared with that of the single-module TCN (7.42%), and in the 9-step prediction of Dataset3, its MAE (0.0791 mm) is only 35.9% of the single-module Transformer (0.2206 mm).
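The relative improvements quoted in this subsection follow the usual (baseline − model) / baseline convention, as the short check below illustrates.

```python
def relative_reduction(baseline, model):
    """Percentage by which the full model lowers an error metric relative to a baseline."""
    return 100 * (baseline - model) / baseline

print(round(relative_reduction(7.42, 2.26), 1))      # MAPE, Dataset2, 4 steps -> 69.5
print(round(relative_reduction(0.2206, 0.0791), 1))  # MAE, Dataset3, 9 steps -> 64.1 (0.0791 is ~35.9% of 0.2206)
```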
The experimental results show that the multi-module combination effectively solves the problems of detail accuracy in single-step prediction and long-range dependence in multi-step prediction through the hierarchical modeling of bidirectional time-series capturing, global attention correlation, and local feature extraction, and its synergistic effect is especially significant in multi-step prediction scenarios, which provides a more optimal solution for multi-scale prediction of time-series data.
Conclusion
This research is oriented to textile manufacturing and intelligent quality control. Targeting the technical bottlenecks of traditional experimental monitoring for fabric tearing performance testing, it makes innovative contributions at the level of both theory and engineering application. The study proposes the BLTT-FT hybrid neural network framework, which achieves deep modeling of complex temporal features through the synergy of multiple components: Bi-LSTM captures the bidirectional temporal dependencies of the electric power sequences; the self-attention mechanism of the Transformer encoding layer models long-distance feature associations across the full sequence; and the temporal convolutional network with residual connections extracts local hierarchical features of variable-length sequences while reducing the number of model parameters, which significantly improves generalization ability and training efficiency. The architecture breaks through the limitations of traditional single models in modeling complex temporal dependencies and cross-scale feature interactions, and constructs a collaborative prediction framework that integrates sequence modeling, a global attention mechanism and causal convolution.
In the multi-step prediction performance validation, the model demonstrates clear technical advantages, especially in the 9-step prediction scenario. Compared with other models, the proposed model performs well in real-time trend prediction for datasets constructed from fabrics with different elasticity grades and maintains a low error level even over long prediction ranges, with an average multi-step RMSE of 0.0881, an average multi-step MAE of 0.0609, an average multi-step MAPE as low as 3.06%, and an average multi-step coefficient of determination (R2) as high as 0.9572. Analysis of variance (ANOVA) shows that the same model exhibits highly significant differences in MAE, RMSE and MAPE on the different datasets at each prediction step (p < 0.001), and the residual TCN structure effectively avoids the gradient degradation problem in deep network training. In addition, the study achieves, for the first time, the engineering integration of a deep learning model with a multi-source data-driven situational awareness system and develops a real-time prediction module for fabric tearing experiments based on electric power parameters, which can complete the prospective prediction of up to 9 groups of subsequent experimental results from a single set of experimental data, reducing repetition in the experimental process and improving detection efficiency. Ablation experiments show that the multi-module combination effectively solves the problems of detail accuracy in single-step prediction and long-range dependence in multi-step prediction through the hierarchical modeling of bidirectional time-series capture, global attention correlation and local feature extraction, with an especially pronounced synergistic effect in multi-step prediction scenarios, providing a better solution for multi-scale prediction of time-series data.
These results constitute a closed-loop solution from industrial data to intelligent decision-making, provide a technical paradigm with engineering applicability for intelligent quality control in textile manufacturing, and promote the transformation of the traditional experimental process toward a data-driven, intelligent prediction mode.
Shortcomings and prospects
Although this study has made significant progress in real-time monitoring and prediction of fabric tearing experiments, the field still offers many opportunities and challenges that provide rich directions for future research. First, we plan to deepen the optimization of the model and further improve its prediction accuracy and generalization ability by integrating more types of sensor data, such as environmental parameters including temperature and humidity. Because the physical properties of fabrics are easily affected by temperature and humidity during the experiments, future research should incorporate environmental influences to reduce the uncertainty introduced by external factors. In the data preprocessing of this paper, only the sensor noise signal was smoothed; subsequent research should adopt more advanced signal filtering techniques. We plan to introduce an adaptive filtering algorithm that can dynamically adjust its filter parameters according to the real-time characteristics of the noise signal so as to remove noise interference more efficiently (a minimal sketch of this direction is given below), and to add signal shielding measures at the data acquisition stage to reduce the impact of electromagnetic interference. We will also work on applying the BLTT-FT model to other key aspects of textile production, such as the optimization of stretching and pilling experiments and the prediction of faults and maintenance needs of textile machinery, with a view to upgrading the entire production chain intelligently.
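As an example of the adaptive filtering direction mentioned above, a least-mean-squares (LMS) noise canceller adjusts its weights sample by sample. The following is a minimal sketch under the assumption that a correlated noise reference signal is available; it is illustrative only and not part of the present study's pipeline.

```python
import numpy as np

def lms_denoise(noisy, reference, mu=0.01, order=8):
    """Minimal LMS adaptive noise canceller: estimates the noise component of `noisy`
    from a correlated `reference` signal and subtracts it, sample by sample."""
    noisy = np.asarray(noisy, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(order)                      # adaptive filter weights
    cleaned = np.zeros_like(noisy)
    for n in range(order, len(noisy)):
        x = reference[n - order:n][::-1]     # most recent reference samples
        noise_estimate = w @ x               # current estimate of the noise in noisy[n]
        error = noisy[n] - noise_estimate    # error doubles as the cleaned output sample
        w += 2 * mu * error * x              # LMS weight update
        cleaned[n] = error
    return cleaned
```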
Acknowledgments
We would like to give special thanks to Yu Feng for her help in the data collection process.