Abstract
Food safety problems are becoming increasingly severe in modern society, and establishing an accurate food safety risk warning and analysis model is of positive significance in avoiding food safety accidents. We propose an algorithmic framework that integrates the analytic hierarchy process based on the entropy weight (AHP-EW) and the autoencoder-recurrent neural network (AE-RNN). Specifically, the AHP-EW method is first used to obtain the weight percentages of each detection index. The comprehensive risk value of the product samples is obtained by weighted summation with the detection data, which is used as the expected output of the AE-RNN network. The AE-RNN network is constructed to predict the comprehensive risk value of unknown products. Detailed risk analysis and control measures are taken based on the risk value. We applied this method to the detection data of a dairy product brand in China for example validation. Compared with the performance of 3 models, the back propagation algorithm (BP), the long short-term memory network (LSTM), and the LSTM based on the attention mechanism (LSTM-Attention), the AE-RNN model has a shorter convergence time and predicts data more accurately. The root mean square error (RMSE) of experimental data is only 0.0018, proving that the model is feasible in practice and helps improve the food safety supervision system in China to avoid food safety incidents.
Citation: Zhong J, Sun L, Zuo E, Chen C, Chen C, Jiang H, et al. (2023) An ensemble of AHP-EW and AE-RNN for food safety risk early warning. PLoS ONE 18(4): e0284144. https://doi.org/10.1371/journal.pone.0284144
Editor: Xingwei Li, Sichuan Agricultural University, CHINA
Received: October 11, 2022; Accepted: March 25, 2023; Published: April 26, 2023
Copyright: © 2023 Zhong et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting information files.
Funding: This research was supported by the Major science and technology projects of Xinjiang Uygur Autonomous Region (2020A03001 and sub-program 2020A03001-3), the project of Xianyang Science and Technology Bureau (021ZDYF-SF-0005) and the special scientific research project for young medical science (2019Q003). There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Food safety is an essential indicator in evaluating the development of a country’s livelihood. In recent years, food contamination incidents have occurred frequently, and the crisis of public confidence in food safety has become increasingly serious. Large-scale food contamination can cause serious public health events, pose a significant threat to people’s health, and have far-reaching effects on society and the economy [1]. According to the WHO, the 31 foodborne hazards counted in 2010 alone resulted in about 600 million cases of illness, 420,000 deaths, and 33 million disabilities [2]. In addition, the lack of nutritional elements in food affects public health. In China, food safety incidents have also occurred frequently, such as the melamine milk powder incident [3], the “clenbuterol” food poisoning incident [4], and the gutter oil incident [5]. Frequent food safety accidents have exposed the defects in China’s food safety inspection system [6]. China’s food regulatory foundation has long been relatively weak compared with developed countries, and its food safety regulatory system urgently needs reform.
In order to prevent the recurrence of food safety incidents, supervision departments should use scientific data analysis tools for risk assessment and prediction to ensure food quality [7]. We propose a food safety risk early warning model combining the analytic hierarchy process based on the entropy weight (AHP-EW) with the autoencoder-recurrent neural network (AE-RNN). It can check the contaminant and nutrient factors affecting food safety and accurately predict comprehensive food risk values. The main contributions are as follows.
- This paper proposes a food safety early warning method combining AHP-EW with AE-RNN to achieve risk prediction and early warning analysis efficiently and rapidly. We adopt an improved analytic hierarchy process that uses a data-driven approach to fuse the information of the testing indexes into a risk value, eliminating the human intervention of expert evaluation. According to the characteristics of massive and high-dimensional food testing data, the AE-RNN model is established for accurate risk prediction. Overall, intelligent end-to-end risk prediction and early warning of food testing data are achieved.
- Quantitative analysis of the risk of qualified products and the definition of food safety warning thresholds for qualified products. For food testing data, determining whether test samples are unqualified according to national standards is only the minimum requirement for regulatory authorities; qualified products also carry certain risks. In this paper, we integrate the testing data of contaminant and nutrient indexes to quantify the risk of testing samples. Defining a risk threshold to classify qualified samples helps manufacturers and regulatory authorities take appropriate measures to minimize the probability of food safety accidents.
- A large and sufficient experimental validation with real data sets shows that our method is more advantageous. Our model is compared with mainstream machine learning models for risk prediction on the actual testing data of dairy products from a Chinese enterprise. The experimental results show that the prediction error of the proposed model is reduced from 0.009 to 0.0018 and the time cost from 244 s to 14 s, proving that the proposed model can efficiently and accurately screen out the testing samples with food safety problems in dairy products.
Our model can help supervision departments quickly screen out unqualified samples, trace the root cause of high-risk foods’ existence, guide manufacturers to standardize production management correctly [8], and urge them to formulate risk mitigation strategies [9]. It effectively reduces the occurrence of food safety incidents and safeguards the food safety of Chinese society.
Related work
The basis for ensuring food safety is a scientific and reasonable risk analysis based on food detection data. Early researchers tried to use mathematical statistics, informatics and other methods to fully explore the intrinsic characteristics and connections of food detection data, such as control chart analysis [10], association rule mining [11], and signal analysis [12]. However, these methods can only make predictions on the state of food safety and cannot quantify specific risks.
The evaluation and early warning methods for food safety risk are continuously developed to conduct detailed risk analysis on food data, such as Bayesian modeling based on mathematical statistics [13], fuzzy integrated evaluation method [14], gray correlation analysis method [15], and artificial neural network-based methods. Specifically, Bouzembrak and Marvin used a constructed Bayesian model to classify food fraud reports and predict the type of food fraud [13]. Shen et al. proposed a comprehensive risk assessment model for import and export food by combining the rough set theory with the conventional fuzzy comprehensive assessment method [14]. Lin et al. proposed an improved interpretive structure modeling (ISM) method based on gray correlation analysis (GRA), which performs a hierarchical analysis of factors affecting food safety [15].
However, all statistical modeling methods rely on expert experience and lack objectivity. Moreover, food detection data are characterized by high dimensionality and complexity, and the regression prediction effect based on statistical analysis is unstable. In contrast, the artificial neural network (ANN), a parallel distributed information processing structure with flexible and fast learning algorithms, diverse network topologies and high fault tolerance, is a powerful tool for data analysis [16]. Based on the potential ability of ANNs to efficiently and accurately process complex data, many scholars have applied related networks to predict food safety risks. Zhang et al. constructed a food safety early warning model based on the back propagation (BP) neural network and conducted preliminary validation [17]. However, their work could only make an overall risk assessment on monthly food data and could not be precise to individual samples. Moreover, the BP algorithm converged slowly, and as a local-search optimization method, its training process was prone to falling into local optima [18]. Mao et al. proposed a credit evaluation system in the food supply chain based on a combination of a long short-term memory (LSTM) network and blockchain to assess the food safety quality status [19]. However, this system can only predict a “negative” or “positive” food safety status and lacks deeper risk analysis. Geng et al. proposed a neural network based on the deep radial basis function (RBF) to build a food risk warning model [20]. They then combined the LSTM network with a data fuzzy transformation method to fuse the data of various food detection indexes and obtain a comprehensive risk value [21]. However, the two models have their defects. For the RBF network, the k-means method randomly determines the original cluster centers, making the clustering results vary with the initial positions and prone to local optima.
The data fuzzification algorithm used by the latter cannot accurately locate a high-risk sample due to the setting of the data interval. Multiple test samples must be checked for screening, which brings unnecessary trouble to later risk positioning and wastes regulatory resources. Moreover, LSTM performance remains a thorny problem for larger amounts of data or longer sequences.
In contrast, we use autoencoders for pre-training deep neural networks [22] to obtain an efficient representation of the data, which makes up for the shortcomings of previously used ANN models in handling complex data. The RNN differs from standard neural networks such as the BP and the RBF: it allows the output of the hidden layer neurons to be fed back as input to the neurons, giving the RNN a memory function [23]. Compared with BP and RBF, RNN can find the sequential relationship between samples, making it the model of choice for serial data. The food detection data used in this paper have time-series characteristics: the data change continuously over time, the data are not independent, and the data of a particular period will affect future data. Therefore, an RNN trained on the time series of food inspection data can show better prediction performance, addressing the insufficient generalization ability caused by the above models’ tendency to fall into local optima.
As an objective method for assigning weighting factors, entropy-based methods use information entropy to determine the dispersion of indicators and calculate weights and have been widely used in many fields, such as uncertainty measure [24], safety evaluation [25], and decision problems [26]. We combine the entropy weight method and analytic hierarchy process to obtain the comprehensive risk value of food inspection data. This method not only overcomes the shortcomings of the analytic hierarchy process that does not have an objective basis [27], but also makes up for the shortcomings of the previous algorithm that cannot precisely locate high-risk samples resulting in wasted regulatory resources.
Based on the above, we obtain the comprehensive risk values of food inspection data through AHP-EW, use AE-RNN to achieve feature extraction and conduct two-stage regression training. It compensates for the defects of the previously used models in complex data processing and falling into local optimal solutions and obtains more accurate risk prediction results.
Integrated food safety risk warning system based on AHP-EW fused with AE-RNN
Problem statement
Given food detection data X ∈ R^(n×m), wherein n is the number of test samples and m is the number of indexes, the objective is to learn an evaluation function f(⋅) from a batch of detection samples so as to calculate the comprehensive risk value yi = f(xi) of a new sample. The comprehensive risk value yi represents the possibility of food safety problems in the sample xi. The unqualified samples and the qualified samples with higher risk can be screened out through the set unqualified risk threshold and the high-risk threshold [28].
System overview
The flow chart of the food safety risk pre-warning and control system combining AHP-EW with AE-RNN is shown in Fig 1. This risk pre-warning and control system is divided into risk assessment, model construction and risk prediction.
Firstly, the detection data is preprocessed in the risk assessment. The weight of each risk index is calculated by the AHP-EW algorithm, which is multiplied and added with the detection data to obtain the comprehensive risk value of the detection sample as the expected output value of AE-RNN. Secondly, the detection data are divided into training and testing sets in the model construction. The network parameters of AE-RNN are debugged for network training. The AE-RNN model is established, and its prediction performance is evaluated compared to the baseline models. Finally, in the risk prediction, the AE-RNN risk prediction model is used for the risk prediction of new testing samples. The integrated flow chart including both AHP-EW and AE-RNN methods is shown in Fig 2. The threshold is determined by the initial risk value obtained from the risk assessment session. The unqualified samples and qualified samples exceeding the threshold of high-risk value are screened out and reported to the supervision departments, which facilitates regulatory authorities in carrying out further risk analysis, tracing the specific unqualified indexes and taking timely regulatory and preventive measures.
Data preprocessing
According to the technical requirements of the national food safety standard for sterilized milk (GB 25190–2010), the raw material requirements, sensory requirements, physical and chemical indexes, contaminant limits, mycotoxin limits and microbial limits of dairy products need to meet the content and standards specified by the state. However, the test results of each index are characterized by complexity and different types of data, where the descriptive text cannot even be quantified [29]. We selected lactose, acidity, non-fat milk solids, fat, protein and aflatoxin M1 as the risk evaluation indexes to ensure the validity of the test data in risk prediction. Due to the test data’s different dimensions and prescribed limits for each index, we normalized the test data into dimensionless data to avoid interference with calculating comprehensive risk value [30].
As lactose and aflatoxin have maximum limits, i.e., positive indexes, the higher the value of the index, the greater the potential food safety risk. Non-fat milk solids, fat, and protein indexes have minimum limits, i.e., negative indexes. The lower the detection value, the greater the potential food safety risk. Acidity is limited by an interval value, i.e., an interval index. The potential risk is greater if the value is too high or too low. For the characteristics of the above indexes, we designed the following data preprocessing methods, as shown in Table 1. In previous studies, interval limit indexes like acidity were categorized as positive or negative for data preprocessing. We propose a pioneering data preprocessing algorithm for interval indexes to quantify them more scientifically and obtain reasonable comprehensive risk values for practical risk evaluation.
Wherein, xmn is the detection value of the nth index in the mth sample, and xn_min, xn_max and xn_avg are the minimum value, the maximum value and the average value of the nth index in all samples, respectively. The preprocessing methods unify the numerical dimension of each index and eliminate the influence of different data dimensions on the comprehensive risk value.
The analytic hierarchy process based on the entropy weight
The analytic hierarchy process is a multi-criteria decision-making method widely used in resource allocation and conflict resolution [31]. However, this method mainly judges the relative importance of each index based on experts’ personal experience, which lacks a certain degree of objectivity. Inspired by the work of [32], this paper adopts the combination of entropy weight method and analytic hierarchy process, takes the value of the data itself as the objective basis, and fuses each index through AHP-EW to obtain a comprehensive risk value.
For the mth sample in the input data, the standard correlation function fmn(x) of its nth index is obtained from Eq (1).
fmn(x) = (x − xn(1)) / (xn(2) − xn(1)) for xn(1) ≤ x ≤ xn(2); fmn(x) = (xn(3) − x) / (xn(3) − xn(2)) for xn(2) < x ≤ xn(3) (1)
Wherein, xn(1), xn(2) and xn(3) are the minimum, average, and maximum values of the nth index, respectively.
Assuming that the data after preprocessing is X = [X(1), X(2), X(3), …, X(m)], we obtain the matrix Um×n using the standard correlation function described above.
Um×n = (umn), umn = fmn(xmn) (2)
Normalize each value of the matrix Um×n.
u′mn = umn / √(Σ_{m=1}^{p} umn²) (3)

Wherein, p is the number of samples and q is the number of indexes.
Then the negative numbers in the matrix are changed to positive numbers by the formula vmn = u′mn + |umin|, wherein umin is the smallest element of the normalized matrix.
The positive matrix Vm×n is obtained from the above.
Vm×n = (vmn) (4)
The m-dimensional symmetric matrix is obtained from Eq (5).
D = V·V^T (5)
The entropy represents the amount of information provided by the data. The basic idea of the entropy weight method is to determine the weight based on the variability of the index. Generally speaking, the larger the differences among the values of an index, the lower the value of information entropy, which suggests that the index provides a higher amount of useful information, which renders a higher weight to the index [33]. The em of each sample was obtained from Eq (6).
em = −(1 / ln p) Σ_{k=1}^{p} dmk ln dmk, wherein dmk = Dmk / Σ_{k=1}^{p} Dmk (6)
The data weight hm for each sample was obtained from Eq (7) [34].
hm = (1 − em) / Σ_{m=1}^{p} (1 − em) (7)
The information value of each index is obtained from Eq (8).
c = X^T·h (8)
wq = cq / Σ_{j=1}^{q} cj (9)
The weight vector is W = [w1, w2, w3 …wq]T, and the fused data Y is obtained from Eq (10).
Y = X·W (10)
Algorithm 1 AHP-EW
Require: The test data X
Ensure: The fused data Y
1: Preprocess the data X = [X(1) X(2) X(3) … X(m)];
2: Calculate the matrix Um×n using the standard correlation function fmn(x) via Eq (1);
3: for m = 1 : p do
4:  for n = 1 : q do
5:   u′mn ← umn / √(Σ umn²) via Eq (3);
6:   vmn ← u′mn + |umin|;
7:  end for
8: end for
9: matrix D ← V·V^T via Eq (5);
10: for m = 1 : p do
11:  em ← entropy of the mth row of D, hm ← (1 − em) / Σ(1 − ep) via Eqs (6) and (7);
12: end for
13: c ← X^T·h via Eq (8);
14: wq ← cq / Σ cj via Eq (9);
15: Y ← X·W via Eq (10);
16: Produce the fused data Y;
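Algorithm 1 can be sketched compactly in NumPy. Since Eq (1)’s standard correlation function and the exact normalization are not fully specified above, this sketch assumes a triangular membership built from each index’s min/avg/max and a Euclidean column normalization; the small constants guard against division by zero and log of zero, and the data are synthetic.

```python
import numpy as np

def ahp_ew(X):
    """AHP-EW fusion sketch: per-index weights w and fused risk values Y."""
    p, q = X.shape
    lo, hi, av = X.min(0), X.max(0), X.mean(0)

    # Eqs (1)-(2): standard correlation matrix U (assumed triangular form).
    U = np.where(X <= av, (X - lo) / (av - lo + 1e-12),
                 (hi - X) / (hi - av + 1e-12))

    # Eq (3): column-wise normalization (assumed Euclidean).
    U = U / (np.linalg.norm(U, axis=0, keepdims=True) + 1e-12)

    # Shift so every entry is positive -> matrix V, Eq (4).
    V = U + abs(U.min()) + 1e-12

    # Eq (5): symmetric sample-similarity matrix.
    D = V @ V.T

    # Eqs (6)-(7): per-sample entropy e_m and sample weight h_m.
    P = D / D.sum(1, keepdims=True)
    e = -(P * np.log(P + 1e-12)).sum(1) / np.log(p)
    h = (1 - e) / (1 - e).sum()

    # Eqs (8)-(10): index information values c, weights w, fused risk Y.
    c = X.T @ h
    w = c / c.sum()
    return w, X @ w

rng = np.random.default_rng(0)
X = rng.random((20, 6))        # 20 synthetic samples, 6 preprocessed indexes
w, Y = ahp_ew(X)
```

The returned weights sum to one, and Y is the weighted sum used as the AE-RNN’s expected output.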
Construction of AE-RNN model
Autoencoder
The autoencoder consists of an encoder and a decoder and is an unsupervised neural network with three layers: an input layer, a hidden layer and an output layer, as shown in Fig 3. Its input data and output data have the same dimension. The output data should reproduce the input data to the maximum extent, maintaining the consistency of input and output by optimizing the objective function [35]. It is assumed that each input data sample with m variables is represented as x = [x1, x2, …, xm]^T ∈ R^m, and the output data sample is reconstructed as x′ = [x′1, x′2, …, x′m]^T ∈ R^m.
The encoder maps the input data x to the hidden layer by Eq (11) to obtain the hidden features h.
h = σ(W1·x + b1) (11)
The decoder reconstructs the output x′ based on Eq (12) using the hidden feature h.
x′ = σ(W2·h + b2) (12)
As an unsupervised learning framework, autoencoder is usually used for dimension reduction or feature extraction. By limiting the dimension of the hidden layer features h, the autoencoder can be forced to capture the most salient features in the training data and obtain the valuable features of the input data [36]. Since autoencoder can make feature extraction, we superimpose it on a deep learning model. The overall model uses salient data features to mine the underlying patterns of the data and achieve more accurate data predictions.
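The encoder-decoder pass of Eqs (11) and (12) and its training can be sketched in plain NumPy. The sigmoid activation, weight shapes, learning rate and synthetic data are illustrative assumptions, not the paper’s exact configuration; only the 6-to-3 bottleneck mirrors the setup described later.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 6))            # 200 samples, 6 detection indexes
d_in, d_h = 6, 3                    # 3-node bottleneck forces feature compression

W1 = rng.normal(0, 0.1, (d_in, d_h)); b1 = np.zeros(d_h)   # encoder weights
W2 = rng.normal(0, 0.1, (d_h, d_in)); b2 = np.zeros(d_in)  # decoder weights
sig = lambda z: 1 / (1 + np.exp(-z))

mse0 = ((sig(sig(X @ W1 + b1) @ W2 + b2) - X) ** 2).mean()  # pre-training error

lr = 0.5
for _ in range(2000):
    H = sig(X @ W1 + b1)            # Eq (11): encoder -> hidden features h
    Xr = sig(H @ W2 + b2)           # Eq (12): decoder -> reconstruction x'
    err = Xr - X
    g2 = err * Xr * (1 - Xr)        # backprop through the decoder sigmoid
    g1 = (g2 @ W2.T) * H * (1 - H)  # backprop through the encoder sigmoid
    W2 -= lr * H.T @ g2 / len(X); b2 -= lr * g2.mean(0)
    W1 -= lr * X.T @ g1 / len(X); b1 -= lr * g1.mean(0)

mse = ((sig(sig(X @ W1 + b1) @ W2 + b2) - X) ** 2).mean()   # post-training error
```

After training, the 3-dimensional hidden activations H serve as the compressed feature vectors passed on to the downstream model.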
Recurrent neural network
The traditional neural network model is fully connected from the input layer to the hidden layer and then to the output layer, with no connection between the nodes in each layer. The current output of a sequence in the RNN is related to the previous outputs, as shown in Fig 4. The network memorizes and applies the previous information to calculate the current output. The nodes between hidden layers are connected, and the input of the hidden layer includes not only the output of the input layer but also the output of the hidden layer at the last moment [37].
Xt denotes the input at step t, and St is the state of the hidden layer at step t, which is the memory unit of the network. St is calculated based on the output of the current input layer with the state of the hidden layer at the previous step [38].
St = f(U·Xt + W·St−1) (13)
Wherein, U is the weight matrix from the input layer to the hidden layer, and W is the weights matrix from the hidden layer at the last moment to the hidden layer at the current moment. It can be seen that the value of St depends not only on Xt but also on St−1.
The value of the output layer is Ot = g(V·St), where V is the weight matrix from the hidden layer to the output layer.
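The recurrence of Eq (13) can be illustrated with a minimal NumPy forward pass; tanh as the activation f, the dimensions, and the random weights are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_h, d_out = 3, 8, 1
U = rng.normal(0, 0.3, (d_h, d_in))   # input -> hidden, as in Eq (13)
W = rng.normal(0, 0.3, (d_h, d_h))    # hidden(t-1) -> hidden(t), the memory
V = rng.normal(0, 0.3, (d_out, d_h))  # hidden -> output layer

def rnn_forward(xs):
    S = np.zeros(d_h)                 # S_0: initial hidden state
    outs = []
    for x in xs:
        S = np.tanh(U @ x + W @ S)    # Eq (13): S_t depends on X_t and S_(t-1)
        outs.append(V @ S)            # output layer O_t = g(V * S_t), g = identity
    return np.array(outs)

seq = rng.random((5, d_in))           # a 5-step sequence of 3-dim features
outs = rnn_forward(seq)
```

Because S_t feeds into S_{t+1}, perturbing an early input changes all later outputs, which is exactly the memory property the text describes.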
AE-RNN
Considering that supervised learning is required to predict food safety risks, the detection model consists of two neural networks, the autoencoder and the RNN, as shown in Fig 5. The training is carried out in two stages. Specifically, in the first stage, the detection data x is input to the autoencoder: the hidden layer vector h is obtained through the encoder, and the output value x′ is obtained through the decoder. The autoencoder is trained and optimized so that the output x′ reconstructs the input data x to the greatest extent. The salient feature vector h of the input data is obtained by limiting the data dimensionality of the hidden layer, realizing feature extraction. In the second stage, the feature vector h extracted by the autoencoder is fed to the RNN input layer, nonlinearly transformed by the hidden layer, and the output value y′ is produced by the output layer. The construction and optimization of the RNN model are completed by back propagation of the error between the output value y′ and the expected output y to improve the accuracy of risk prediction.
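The two-stage pipeline can be sketched structurally: a 6-to-3 encoder standing in for the stage-1 autoencoder, followed by an RNN regressor producing one risk value per sample. The weights here are random placeholders rather than trained values, and the RNN hidden size of 16 is an assumption; only the 6 → 3 → 1 path reflects the paper’s dimensions.

```python
import numpy as np

rng = np.random.default_rng(3)
sig = lambda z: 1 / (1 + np.exp(-z))

# Stage 1 stand-in: the 6 -> 3 encoder produced by autoencoder pre-training
# (random placeholder weights, not trained values).
We, be = rng.normal(0, 0.3, (6, 3)), np.zeros(3)
encode = lambda x: sig(x @ We + be)

# Stage 2: RNN regressor over the 3-dim feature sequence (hidden size 16 assumed).
U = rng.normal(0, 0.3, (16, 3))     # feature -> hidden
W = rng.normal(0, 0.3, (16, 16))    # hidden -> hidden (memory)
V = rng.normal(0, 0.3, (1, 16))     # hidden -> risk value

def predict_risk(batch):
    """Encode each sample, then run the RNN over the sequence; the hidden
    state carries information from earlier samples to later ones."""
    S, preds = np.zeros(16), []
    for x in batch:
        S = np.tanh(U @ encode(x) + W @ S)
        preds.append(float(V @ S))
    return preds

risks = predict_risk(rng.random((30, 6)))   # 30 new test samples -> 30 risk values
```

In the actual system, stage 2 would be trained by backpropagating the error between these predictions and the AHP-EW risk values.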
Risk prediction
After the model’s training, the risk values of unknown products are evaluated using the AE-RNN. By inspecting the risk value of each sample in the risk assessment, we set the thresholds of different risk levels of food testing samples (mainly unqualified and risk-warning samples). The new products are classified into different risk types of samples. Further traceability analysis is conducted for unqualified and risk-warning samples to facilitate the supervision departments taking relevant measures. The detailed analysis and relevant initiatives of the above samples are shown in the case study.
Case study: Risk warning for dairy products
In the case study, we used the self-test data of a dairy brand of sterilized milk in Guizhou Province, a total of 2017 sets. The data were obtained in strict accordance with the GB 19645–2010 national standard for food safety of sterilized milk, with specific test indicators selected following the GB/T 5009 food hygiene test methods. The technical requirements for sterilized milk are divided into raw material requirements, sensory requirements (color, taste, odor, tissue state), physical and chemical indicators (fat, protein, non-fat milk solids, acidity), contaminant limits, mycotoxin limits, and microbial limits. The data were obtained from the test reports, which have a limited variety of indicators, and an original report is shown in Fig 6.
Since some of the test results cannot be directly applied to the experiment, we applied the following treatment to some data. To ensure the authenticity and validity of the data, six valid indicators were retained.
- Delete the sensory quality characteristics of food that are not closely related to food safety, such as taste, color, shape, etc.
- Remove extra symbols. The test result of aflatoxin in the sample was “<0.2”. Since the standard limit of aflatoxin is less than 0.2, we deleted the “<” symbol in the test result and recorded the test value as “0.2”.
- Delete useless information. The melamine indicator, whose test result was “not detected” in all samples, was removed.
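The three cleaning rules above can be sketched as a small helper; the record layout and field names are hypothetical, and real reports would need matching keys.

```python
def clean_record(record):
    """Return a cleaned copy of one test record (a dict of index -> result),
    applying the three rules: drop sensory fields, strip '<' from limit
    values, and drop the all-'not detected' melamine indicator."""
    sensory = {"taste", "color", "shape"}
    cleaned = {}
    for name, value in record.items():
        if name in sensory or name == "melamine":   # rules 1 and 3
            continue
        if isinstance(value, str):
            value = float(value.lstrip("<"))        # rule 2: "<0.2" -> 0.2
        cleaned[name] = value
    return cleaned

sample = {"protein": 3.1, "taste": "normal", "aflatoxin": "<0.2",
          "melamine": "not detected"}
```

Applied to `sample`, only the protein and aflatoxin fields survive, with the aflatoxin reading converted to the numeric limit value.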
After processing, each group of test data has 6 indicators, namely lactose, acidity, non-fat milk solids, fat, protein and aflatoxin. The original data format is shown in Table 2. We judge whether each test data meets the national food safety standard for sterilized milk according to the requirements of GB 19645–2010 Hygienic Standard for Sterilized Milk.
Firstly, the comprehensive risk values of samples were calculated by the AHP-EW method for the 1987 sets of testing data from September 2016 to September 2021, which were used as the expected output risk values of the AE-RNN network to establish the risk warning model. The model was then used to predict the comprehensive risk values of the 30 testing data sets in October 2021. Finally, the samples with high-risk values in the predicted data receive further risk warning and analysis [29].
We performed data preprocessing for each index with a different limit in the data preprocessing session. We replaced each index value with its difference from the standard value as the data for risk calculation. The greater the difference, the farther the test value is from the standard value. The larger the risk value thus calculated, the greater the probability of disqualification of this test sample, which needs to attract the attention of manufacturers and regulatory authorities. After that, the weight percentages of each index are obtained by AHP-EW, as shown in Fig 7. The preprocessed testing data are multiplied and added with the weights of the corresponding indexes to complete the fusion of the detection indexes, and the comprehensive risk value is obtained. The preprocessed data were used as the input of the early warning model, and the comprehensive risk values were used as the expected output for the training of the AE-RNN risk early warning network.
The BP, the LSTM and the LSTM-Attention were used for model construction for the 1987 training sets, and risk prediction was performed for the 30 sets. The relevant parameters of the four neural network models were set separately. The BP, the LSTM and the LSTM-Attention are three-layer neural network structures with 6 nodes in the input layer, 1 node in the output layer and 16 nodes in the hidden layer. The learning rate is set to 0.001. The BP batch size is set to 8, the epoch is set to 500, and the LSTM and LSTM-Attention batch size is set to 2. The former epoch is set to 30, and the latter epoch is set to 100. The hidden layer nodes of AE-RNN are set to 3, and the 3-dimensional main features are extracted. The learning rate is set to 0.001, and the number of iterations is 1900. In addition, the average relative generalization error (ARGE), root mean square error (RMSE) and model convergence time were used to evaluate the model performance and verify that the AE-RNN model has a better risk prediction effect. The calculation equations are shown in Eqs (14) and (15), wherein zi is the expected output value and yi is the actual output value.
ARGE = (1/N) Σ_{i=1}^{N} |zi − yi| / zi (14)
RMSE = √[(1/N) Σ_{i=1}^{N} (zi − yi)²] (15)
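Eqs (14) and (15) translate directly into metric helpers; the expected values zi are assumed nonzero so the relative error in ARGE is well defined.

```python
import numpy as np

def arge(z, y):
    # Eq (14): average relative generalization error, mean of |z_i - y_i| / z_i.
    z, y = np.asarray(z, float), np.asarray(y, float)
    return float(np.mean(np.abs(z - y) / z))

def rmse(z, y):
    # Eq (15): root mean square error between expected z and predicted y.
    z, y = np.asarray(z, float), np.asarray(y, float)
    return float(np.sqrt(np.mean((z - y) ** 2)))
```

Both take the expected risk values z and the model outputs y and return a single scalar, so models can be ranked directly.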
The ARGE and RMSE values for the four prediction models are shown in Table 3. The error values of AE-RNN achieve a severalfold improvement compared with the previous three models. It can be concluded that the generalization performance of AE-RNN is better than that of the other three models, and the gap between AE-RNN and the expected risk value is the smallest. The fitting performance of the AE-RNN model was verified by 5-fold cross-validation; although cross-validation did not further improve the prediction accuracy, it indicates that the model is not over-fitted. The comparison of model training time shows that for the large volume of food inspection data in the regulatory process, the convergence time of the BP, LSTM and LSTM-Attention models is too long. The convergence time of the LSTM-Attention model even exceeds 30 minutes, which lacks practical significance. However, AE-RNN achieves a more accurate prediction in only 14 seconds, meeting the demand for accurate and efficient risk prediction in food regulation.
To further validate the generalization of the model, we conducted experiments on two additional datasets, camel milk and raisin data. We collected 427 camel milk records containing 14 indicators: protein, fat, recovered milk acidity, impurity level, moisture, lead, total arsenic, chromium, nitrite, aflatoxin, total bacterial colony, coliform, Staphylococcus aureus, and melamine, according to the local standard DBS65 014–2017 of Xinjiang Uygur Autonomous Region. The reference standard for the raisin data is the food safety standard NY-T 705–2003 for seedless raisins; this dataset contains 368 test samples with sulfur dioxide residue, lead, moisture, total acidity, mold, ash, protein, and sodium indicators. The original data and the comprehensive risk assessment values were input into BP, LSTM-Attention, LSTM, and AE-RNN, and the fitting results and time spent for each model are shown in Tables 4 and 5. AE-RNN still achieved the best-fitting results and took very little time.
The fitting curves of the BP, the LSTM-Attention, the LSTM, and the AE-RNN are shown in Fig 8. The BP network has the worst data fitting effect and the most significant gap with the expected values. The prediction performance of the LSTM-Attention and the LSTM improves in turn compared with the BP, but a certain gap with the actual values remains. The AE-RNN has the best data fitting effect: the predicted value curve almost overlaps with the expected value curve on some samples, and the accuracy of numerical prediction is the highest.
The relative error distributions of the four neural networks are shown in Fig 9. More than 50% of the samples in the BP model have prediction errors greater than 5%, and one sample almost reaches 16%. In the LSTM-Attention model, although the error is reduced compared with the BP model and 75% of the sample error values are controlled at about 5%, a small number of samples still show errors of about 7%. The relative error of the LSTM model is reduced compared with the previous two: 75% of the samples can be controlled at less than 2%, but the maximum error still reaches about 6%. For the AE-RNN, the relative error of more than 75% of the samples is below 1%. It is concluded that the generalization ability of AE-RNN is more robust than that of BP, LSTM-Attention, and LSTM.
We performed ablation experiments for AE and RNN structures, respectively. As shown in Table 6, AE-RNN reduces the fitting time by 4s compared with RNN. The analysis shows that AE achieves the dimensionality reduction of features by feature compression, which improves the efficiency of subsequent model data fitting. RNN has smaller values in two evaluation indexes, ARGE and RMSE, compared with MLP, which indicates that RNN fits the data better and makes more accurate predictions. Therefore, AE-RNN is more efficient and accurate in predicting data than common neural networks.
Samples in which a single index or multiple indexes exceed the national limits are "unqualified" samples. Such samples carry food safety hazards of excessive contaminants or nutrient deficiencies, and consuming them can cause health problems. Producers must immediately destroy the same or related batches of dairy products.
Of the 1987 samples, 11 exceeded the limits on single or multiple indexes. We ranked the risk values obtained by AHP-EW, and by checking the sample IDs we confirmed that the 11 samples with the highest risk values were indeed the unqualified ones (Fig 10). The risk value of the 11th-ranked sample is 0.281, while the highest risk value among the qualified samples is 0.267 (Fig 11). We therefore set the threshold for unqualified samples at 0.28 by rounding: samples with risk values above 0.28 are unqualified, and samples below 0.28 are qualified.
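The ranking check and threshold rule above can be sketched as follows; the risk values and unqualified IDs here are illustrative, not the paper's data, and the function name is ours.

```python
import numpy as np

# Illustrative AHP-EW risk values (not the paper's data).
risk = np.array([0.21, 0.30, 0.19, 0.29, 0.25])
unqualified_ids = {1, 3}  # samples known to exceed national limits

def top_k_ids(risk_values, k):
    """Return the indices of the k highest-risk samples."""
    order = np.argsort(risk_values)[::-1]  # descending by risk value
    return set(order[:k].tolist())

# The top-ranked samples coincide with the unqualified ones.
assert top_k_ids(risk, len(unqualified_ids)) == unqualified_ids

# Threshold rule from the text: risk above 0.28 means unqualified.
print(np.flatnonzero(risk > 0.28).tolist())  # → [1, 3]
```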
The goal of risk quantification and early warning is not only to locate unqualified samples but also to identify qualified samples with high risk and latent safety hazards, drawing regulators' attention to high-risk samples and minimizing the probability of food safety accidents. Although samples with risk values below 0.28 are qualified, some carry a higher risk because one or more indexes are close to the national limits; we define such samples as "risk-warning" samples. To determine the threshold for risk-warning samples, we first classified the qualified samples into risk categories based on the distribution of risk values and then invited a panel of food safety experts from our collaborators to judge, for each category, whether its samples should be classified as risk-warning samples. The risk values of the qualified samples fell in [0.08, 0.28]. Segmenting at 0.05 intervals gives four categories: "1", very low risk, with a value range of [0.08, 0.13]; "2", low risk, with a value range of (0.13, 0.18]; "3", medium risk, with a value range of (0.18, 0.23]; "4", high risk, with a value range of (0.23, 0.28]. The distribution of risk values over these intervals for the qualified samples is shown in Fig 12.
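The four-category segmentation can be expressed as a small helper; this is a sketch with our own function name, using exactly the bin edges given above.

```python
def risk_category(v):
    """Map a qualified sample's risk value to its category:
    [0.08, 0.13] -> 1 (very low), (0.13, 0.18] -> 2 (low),
    (0.18, 0.23] -> 3 (medium),   (0.23, 0.28] -> 4 (high)."""
    edges = [0.13, 0.18, 0.23]  # right-closed interval boundaries
    return 1 + sum(v > e for e in edges)

print(risk_category(0.10), risk_category(0.20), risk_category(0.25))  # → 1 3 4
```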
Among the qualified samples, we drew every 50th sample in sequence into a test set for expert evaluation, yielding two samples of category "1", five of category "2", 29 of category "3", and six of category "4". Three experts analyzed the raw detection data of these samples and judged, based on their expertise and experience, whether each sample required a risk warning. The statistical results of the Food Safety Expert Panel are shown in Table 7.
From the statistical results, the Food Safety Expert Panel concluded that the samples requiring early warning were concentrated in category "4", i.e., samples with risk values in (0.23, 0.28]. The threshold for risk-warning samples was therefore set to 0.23: samples with risk values greater than 0.23 are considered risk-warning samples, and samples with risk values below 0.23 are considered low-risk samples.
Because the camel milk and raisin data cover a short time span, no unqualified samples have been found so far, so we only performed risk classification of the qualified samples. The risk values of the camel milk data fall in (0.01, 0.65) and those of the raisin data in (0, 0.0034); their distributions are shown in Figs 13 and 14. Following the same procedure and after expert review, we set the risk-warning thresholds for camel milk and raisins to 0.49 and 0.000255, respectively.
The comprehensive risk values of the October 2021 testing data are shown in Fig 15. The risk value of the 5th sample is 0.281, which exceeds the unqualified threshold. The supervision department should immediately urge the manufacturer to halt production on the line related to this sample, analyze the values of its individual indexes, and strictly monitor those indexes in subsequent production. The risk values of the 15th, 19th, and 25th samples are 0.235, 0.240, and 0.237, respectively, which exceed the established risk-warning threshold. Following the principle of minimizing food safety risk, the supervision department should still conduct a follow-up risk analysis on these samples to identify the individual indexes that cause the high risk and urge the manufacturer to strictly control all production indexes. The risk value of the 30th sample is 0.179, lower than the other samples in October 2021, so its safety state is reliable. The risk values of the remaining samples are about 0.21, below the early-warning threshold, so the food safety state is relatively stable.
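The triage rule applied to a new batch combines the two thresholds derived earlier; the sketch below uses our own function name, with the October risk values quoted in the text as illustration.

```python
# Thresholds derived earlier: above 0.28 unqualified, above 0.23 risk-warning.
UNQUALIFIED, WARNING = 0.28, 0.23

def triage(risk):
    """Classify one sample's comprehensive risk value."""
    if risk > UNQUALIFIED:
        return "unqualified"   # halt the line, analyze individual indexes
    if risk > WARNING:
        return "risk-warning"  # follow-up risk analysis
    return "low-risk"

october = {5: 0.281, 15: 0.235, 19: 0.240, 25: 0.237, 30: 0.179}
print({sample_id: triage(v) for sample_id, v in october.items()})
```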
Further risk analysis showed that the 5th (unqualified) sample does not meet the national requirement for non-fat milk solids in sterilized milk, and the 15th, 19th, and 25th (risk-warning) samples have lower fat content than the others, which raises their comprehensive risk values. Based on this detailed analysis, the supervision departments can pay particular attention to these two indexes for this manufacturer and supervise it to produce food meeting the national standard content during production, avoiding food safety incidents.
In addition, we analyzed the potential sources of bias in the experiment. In general, potential bias affecting model accuracy arises in three main ways [39]: (1) correctly identifying the statistical objects (data) relevant to the research question; (2) using representative training data for predictive modeling; and (3) selecting representative features to predict the target variable.
First, correctly identifying the statistical objects (data) relevant to the research question. Sample selection bias may occur during sampling. We conducted random sampling among unencapsulated batches of the same sterilized milk so that the drawn samples represent the overall data of the batch, avoiding selection error from samples that are not fully representative of the whole. In addition, the detection data may be noisy or inaccurate; we ensure that the framework accurately learns the overall feature distribution of the data through standardization during preprocessing, an increased sample size, and a final expert review.
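The standardization step mentioned above can be sketched as a per-index z-score; the function name and the two-index detection values are our own illustration, not the paper's preprocessing code.

```python
import numpy as np

def standardize(X):
    """Z-score each detection index (column) to zero mean, unit variance."""
    X = np.asarray(X, dtype=float)
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant columns
    return (X - mu) / sigma

# Illustrative rows: e.g. fat content and non-fat milk solids per sample.
X = [[3.1, 11.2], [3.3, 11.8], [2.9, 11.5]]
Z = standardize(X)
print(Z.mean(axis=0), Z.std(axis=0))  # each index now ~0 mean, ~1 std
```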
Second, using representative training data for predictive modeling [40]. Qualified data generally dominate food testing data, while unqualified data are scarce. To avoid the model learning biased patterns from qualified samples alone, we use training data collected over a six-year sampling period, which contain a realistic proportion of unqualified samples, so the model fully learns the characteristics of both qualified and unqualified samples.
Finally, selecting representative features to predict the target variable. The indicators we used for dairy products come from the GB 19645–2010 standard. To ensure the authenticity of our data, we included all available indicators in the evaluation system after preprocessing.
In addition, the cost-benefit analysis and practicality of the method in a real environment are important. First, we analyze the time complexity of the model. The AE-RNN consists of an auto-encoder (AE) and a recurrent neural network (RNN). For the AE, with h = σ1(W1x + b1) and x′ = σ2(W2h + b2), where n, m, and d are the number of samples, hidden-layer neurons, and input size, respectively, the computations are [d, m] × [m, n] + [d, n] and [m, d] × [d, n] + [m, n], both with complexity O(nmd). For the RNN, with ht = f(Uxt + Wht−1), the computation Uxt is [d, m] × [m, n] with complexity O(nmd), and Wht−1 is [d, d] × [d, n] with complexity O(nd²), so the complexity of the RNN module is O(nmd + nd²). The time complexity of the whole model is therefore O(nmd + nd²). From this analysis, the model has low time complexity and has the potential to handle large amounts of data in realistic scenarios.
Further, the experimental results are shown in Tables 3–5. The framework is highly efficient on high-dimensional data (camel milk) and high-volume data (dairy). Its time cost is far lower than mainstream machine learning models such as the BP, LSTM, and LSTM-Attention: it takes roughly one-sixth the time of the best-performing baseline while delivering the most accurate predictions, a significant improvement over the best of the other models. Theoretically, the accurate and fast predictions stem from the feature compression of the AE and the simple network structure and "memory" function of the RNN. The framework is also end to end: we integrate the computation of the AHP-EW algorithm into the code, and the overall model runs in a low-configuration environment. Users need only a computer and the collected data to obtain risk values for input samples through the online platform, enabling low-threshold, low-cost food safety risk prediction.
Finally, we are actively preparing to deploy this framework on our partner's risk assessment platform for real dairy product testing data. The application process is as follows (Fig 16): in the first step, we train the model on the relevant detection data and retain the network parameters once data fitting is complete. In the second step, the regulator uploads new detection data to the platform; the data pass through the established model in a single feed-forward, and the platform outputs the predicted risk values for the batch. The regulator can then take appropriate measures according to the risk thresholds.
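The second step above amounts to a single feed-forward with frozen parameters. The sketch below illustrates the shape of that pass (AE encoding followed by an RNN step and a risk readout) with random placeholder weights; the layer sizes, weight names, and readout are our own assumptions, not the deployed model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 6, 4  # input size (detection indexes) and hidden size (assumed)

# Placeholder "trained" parameters; in deployment these would be loaded.
W1, b1 = rng.normal(size=(m, d)), np.zeros((m, 1))       # AE encoder
U, W = rng.normal(size=(m, m)), rng.normal(size=(m, m))  # RNN weights
w_out = rng.normal(size=(1, m))                          # risk readout

def predict_risk(X):
    """One feed-forward over a batch: encode each sample, update the
    recurrent state, and read out a comprehensive risk value."""
    h_prev = np.zeros((m, 1))
    risks = []
    for x in X:  # one sample per time step
        z = np.tanh(W1 @ x.reshape(-1, 1) + b1)  # AE encoding
        h_prev = np.tanh(U @ z + W @ h_prev)     # RNN step
        risks.append((w_out @ h_prev).item())
    return risks

batch = rng.normal(size=(3, d))  # new detection data, already standardized
print(predict_risk(batch))
```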
Conclusion
In this paper, a framework for food safety risk early warning and control based on AHP-EW and AE-RNN was proposed and applied to the sterilized-milk testing data of a dairy product brand in Guizhou Province, China. The comprehensive risk values of the products were obtained with the AHP-EW method and used as the expected output for constructing the AE-RNN network, which predicts the risk of new products. Comparison with the BP, LSTM, and LSTM-Attention models verified that the AE-RNN has the shortest convergence time, the best data-fitting effect, and the highest prediction accuracy.
In the later stage, the supervision departments can further track the high-risk and unqualified samples for detailed risk analysis based on this model's risk predictions and check the single indexes that drive the increase in risk value. This detailed risk analysis helps the relevant departments take precise control measures, urge food manufacturers to make targeted rectifications, and avoid food safety incidents.
In future work, we will explore processing methods for other data, such as descriptive indexes, expand the set of indexes that can be included in risk early warning, and make the model more broadly applicable in the food industry. We will also explore multi-source information fusion methods for risk evaluation, such as evidence theory [41]. In addition, the plain RNN trains slowly and may suffer from vanishing gradients during training; in this paper we combined the RNN with an autoencoder for feature extraction to improve training speed and applied standard training techniques to avoid these defects. In the future, we will select a more suitable neural network model to achieve better prediction performance.
Supporting information
S1 File. Training data for establishing risk early warning model.
A total of 1987 sets of dairy product testing data were collected from September 2016 to September 2021, and the data dimension was 6.
https://doi.org/10.1371/journal.pone.0284144.s001
(CSV)
S2 File. Test data for risk analysis.
30 sets of data for testing and risk analysis were from October 2021, with a data dimension of 6.
https://doi.org/10.1371/journal.pone.0284144.s002
(CSV)
S3 File. Camel-milk data.
427 sets of data, with a data dimension of 26.
https://doi.org/10.1371/journal.pone.0284144.s003
(CSV)
S4 File. Rasine data.
368 sets of data, with a data dimension of 8.
https://doi.org/10.1371/journal.pone.0284144.s004
(CSV)
References
- 1. Han G, Zhai Y. Risk society and the politics of food safety problems in China. Japanese Journal of Political Science. 2022; p. 1–15.
- 2. WHO. WHO estimates of the global burden of foodborne diseases: foodborne disease burden epidemiology reference group 2007–2015. World Health Organization; 2015.
- 3. Liu R, Gao Z, Snell HA, Ma H. Food safety concerns and consumer preferences for food safety attributes: Evidence from China. Food Control. 2020;112:107157.
- 4. Yan H, Xu D, Meng H, Shi L, Li L. Food poisoning by clenbuterol in China. Quality Assurance and Safety of Crops & Foods. 2015;7(1):27–35.
- 5. Lu F, Wu X. China food safety hits the “gutter”. Food Control. 2014;41:134–138.
- 6. Zuo E, Aysa A, Muhammat M, Zhao Y, Chen B, Ubul K. A food safety prescreening method with domain-specific information using online reviews. Journal of Consumer Protection and Food Safety. 2022; p. 1–13.
- 7. Huang L. IPMP 2013—a comprehensive data analysis tool for predictive microbiology. International journal of food microbiology. 2014;171:100–107. pmid:24334095
- 8. Huang L, Hwang CA. One-step dynamic analysis of growth kinetics of Bacillus cereus from spores in simulated fried rice–Model development, validation, and Marko Chain Monte Carlo simulation. Food Microbiology. 2022;103:103935. pmid:35082061
- 9. Soon JM, Brazier AK, Wallace CA. Determining common contributory factors in food safety incidents–A review of global outbreaks and recalls 2008–2018. Trends in Food Science & Technology. 2020;97:76–87.
- 10. Qin Y, Li H, Li C. Application of control chart analysis in food safety early warning and rapid response. Quality Safety Inspection and Testing. 2004;05:47–50.
- 11. Xiao-Lin GU, Zhang DW, Zhang K, Xu-Jin PU, Cao WB. The Information Pre-warning Model of Food Safety Based on Association Rules Mining. Soft Science. 2011;25(11):6.
- 12. Ru B. Food Safety Pre-warning: A Research Based on Signal Analysis. Journal of Intelligence. 2014;33(9):5.
- 13. Bouzembrak Y, Marvin HJ. Prediction of food fraud type using data from Rapid Alert System for Food and Feed (RASFF) and Bayesian network modelling. Food Control. 2016;61:180–187.
- 14. Shen J, Shuxin DU, Luo Y, Luo J, Yang Q, Chen Z. The Comprehensive Risk Assessment Model for Imported and Exported Food. Bulletin of Science and Technology. 2012;28:8.
- 15. Lin X, Cui S, Han Y, Geng Z, Zhong Y. An improved ISM method based on GRA for hierarchical analyzing the influencing factors of food safety. Food control. 2019;99:48–56.
- 16. Huang Y, Kangas LJ, Rasco BA. Applications of artificial neural networks (ANNs) in food science. Critical reviews in food science and nutrition. 2007;47(2):113–126. pmid:17364697
- 17. Zhang D, Xu J, Xu J, Li C. Model for food safety warning based on inspection data and BP neural network. Transactions of the Chinese Society of Agricultural Engineering. 2010;26(1):221–226.
- 18. Han Y, Wu H, Jia M, Geng Z, Zhong Y. Production capacity analysis and energy optimization of complex petrochemical industries using novel extreme learning machine integrating affinity propagation. Energy Conversion and Management. 2019;180:240–249.
- 19. Mao D, Wang F, Hao Z, Li H. Credit evaluation system based on blockchain for multiple stakeholders in the food supply chain. International journal of environmental research and public health. 2018;15(8):1627. pmid:30071695
- 20. Geng Z, Shang D, Han Y, Zhong Y. Early warning modeling and analysis based on a deep radial basis function neural network integrating an analytic hierarchy process: A case study for food safety. Food control. 2019;96:329–342.
- 21. Geng Z, Liang L, Han Y, Tao G, Chu C. Risk early warning of food safety using novel long short-term memory neural network integrating sum product based analytic hierarchy process. British Food Journal. 2021.
- 22. Yan S, Yan X. Design teacher and supervised dual stacked auto-encoders for quality-relevant fault detection in industrial process. Applied Soft Computing. 2019;81:105526.
- 23. Nanduri A, Sherry L. Anomaly detection in aircraft data using Recurrent Neural Networks (RNN). In: 2016 Integrated Communications Navigation and Surveillance (ICNS). IEEE; 2016. p. 5C2–1.
- 24. Zhao J, Tian J, Meng F, Zhang M, Wu Q. Safety assessment method for storage tank farm based on the combination of structure entropy weight method and cloud model. Journal of Loss Prevention in the Process Industries. 2022;75:104709.
- 25. Zhu Y, Tian D, Yan F. Effectiveness of Entropy Weight Method in Decision-Making. Mathematical Problems in Engineering. 2020;2020:3564835.
- 26. Tang Y, Chen Y, Zhou D. Measuring Uncertainty in the Negation Evidence for Multi-Source Information Fusion. Entropy. 2022;24. pmid:36359686
- 27. Geng S, Shao C, Zhu L. Energy efficiency evaluation based on DEA integrated factor analysis in ethylene production. Chinese journal of chemical engineering. 2017;25(6):793–799.
- 28. Zuo E, Du X, Aysa A, Lv X, Muhammat M, Zhao Y, et al. Anomaly Score-Based Risk Early Warning System for Rapidly Controlling Food Safety Risk. Foods. 2022;11:2076. pmid:35885319
- 29. Geng Z, Zhao S, Tao G, Han Y. Early warning modeling and analysis based on analytic hierarchy process integrated extreme learning machine (AHP-ELM): Application to food safety. Food Control. 2017;78:33–42.
- 30. Geng Z, Liu F, Shang D, Han Y, Shang Y, Chu C. Early warning and control of food safety risk using an improved AHC-RBF neural network integrating AHP-EW. Journal of Food Engineering. 2021;292:110239.
- 31. Li Q, Zhou H, Ma Q, Lu L. Evaluation of Serviceability of Canal Lining Based on AHP–Simple Correlation Function Method–Cloud Model: A Case Study in Henan Province, China. Sustainability. 2021;13(21):12314.
- 32. Hu Y, Li W, Wang Q, Liu S, Wang Z. Evaluation of water inrush risk from coal seam floors with an AHP–EWM algorithm and GIS. Environmental Earth Sciences. 2019;78(10):1–15.
- 33. Lu Q, Li X, Li W, Chen W, Li L, Liu S. Risk evaluation of bed-separation water inrush: a case study in the Yangliu coal mine, China. Mine Water and the Environment. 2018;37(2):288–299.
- 34. Hao J, Zhang Y, Jia Y, Wang H, Niu C, Gan Y, et al. Assessing groundwater vulnerability and its inconsistency with groundwater quality, based on a modified DRASTIC model: a case study in Chaoyang District of Beijing City. Arabian Journal of Geosciences. 2017;10(6):1–16.
- 35. Wang Y, Yao H, Zhao S. Auto-encoder based dimensionality reduction. Neurocomputing. 2016;184:232–242.
- 36. Meng Q, Catchpoole D, Skillicorn D, Kennedy PJ. Relational autoencoder for feature extraction. In: 2017 International Joint Conference on Neural Networks (IJCNN); 2017. p. 364–371.
- 37. Sherstinsky A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Physica D: Nonlinear Phenomena. 2020;404:132306.
- 38. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. pmid:9377276
- 39. Barocas S, Selbst AD. Big data’s disparate impact. California law review. 2016; p. 671–732.
- 40. Faraway JJ, Augustin NH. When small data beats big data. Statistics & Probability Letters. 2018;136:142–145.
- 41. Tang Y, Tan S, Zhou D. An Improved Failure Mode and Effects Analysis Method Using Belief Jensen–Shannon Divergence and Entropy Measure in the Evidence Theory. Arabian Journal for Science and Engineering. 2022; p. 1–14.