
Fig 1.

Overview of the model design.

Summary of the overall experimental workflow. The raw data from the MIMIC-III database were preprocessed, and the resulting data set was randomly split: 75% of the data was used for model training and the remaining 25% for model testing; comparative experiments were then conducted to obtain the final results.
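
The 75%/25% random split described above can be sketched as follows (a minimal illustration with synthetic data standing in for the preprocessed MIMIC-III cohort; the feature count, labels, and seed are arbitrary assumptions, not values from the paper):

```python
import numpy as np

# Toy stand-in for the preprocessed cohort: 8702 patients (the number of
# eligible patients reported in Fig 3), 10 hypothetical features.
rng = np.random.default_rng(42)
X = rng.normal(size=(8702, 10))
y = rng.integers(0, 2, size=8702)  # 1 = ARDS, 0 = non-ARDS (illustrative)

# Shuffle patient indices, then hold out roughly 25% for testing.
idx = rng.permutation(len(y))
n_test = len(y) // 4               # 2175 patients held out for testing
test_idx, train_idx = idx[:n_test], idx[n_test:]

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
print(len(y_train), len(y_test))   # 6527 2175
```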

Table 1.

The relationship between the true categories and the recognition results.
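
Table 1 presumably tabulates the confusion-matrix counts (TP, FP, FN, TN). A minimal sketch of those counts and of the balanced error rate (BER) that the MIN–BER–FS criterion minimizes, assuming the common definition of BER as the mean of the two class-wise error rates (the paper's exact definition may differ):

```python
def confusion_counts(y_true, y_pred):
    # TP/FP/FN/TN for binary labels, with 1 = ARDS-positive.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def balanced_error_rate(y_true, y_pred):
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    fnr = fn / (tp + fn)  # miss rate on positive patients
    fpr = fp / (fp + tn)  # false-alarm rate on negative patients
    return 0.5 * (fnr + fpr)

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1]
print(balanced_error_rate(y_true, y_pred))  # 0.25
```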

Fig 2.

Algorithm 1.

MIN–BER–FS algorithm.
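
A hedged sketch of a forward feature selection loop of the kind MIN–BER–FS describes: at each step, greedily add the feature that most reduces the cross-validated BER. The nearest-centroid classifier, fold construction, and synthetic data below are placeholders for illustration, not the paper's actual choices:

```python
import numpy as np

def nearest_centroid_predict(X_tr, y_tr, X_te):
    # Placeholder classifier: assign each sample to the class whose
    # training-set mean (centroid) is closer.
    c0 = X_tr[y_tr == 0].mean(axis=0)
    c1 = X_tr[y_tr == 1].mean(axis=0)
    d0 = np.linalg.norm(X_te - c0, axis=1)
    d1 = np.linalg.norm(X_te - c1, axis=1)
    return (d1 < d0).astype(int)

def ber(y_true, y_pred):
    # Balanced error rate: mean of miss rate and false-alarm rate.
    fnr = np.mean(y_pred[y_true == 1] == 0)
    fpr = np.mean(y_pred[y_true == 0] == 1)
    return 0.5 * (fnr + fpr)

def cv_ber(X, y, k=10, seed=0):
    # Mean BER over k-fold cross-validation (folds fixed by the seed).
    folds = np.array_split(np.random.default_rng(seed).permutation(len(y)), k)
    scores = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(ber(y[te], nearest_centroid_predict(X[tr], y[tr], X[te])))
    return float(np.mean(scores))

def forward_select(X, y):
    # Greedily add the feature that yields the lowest cross-validated BER.
    remaining = list(range(X.shape[1]))
    chosen, history = [], []
    while remaining:
        best = min(remaining, key=lambda f: cv_ber(X[:, chosen + [f]], y))
        chosen.append(best)
        remaining.remove(best)
        history.append((list(chosen), cv_ber(X[:, chosen], y)))
    return history  # BER trajectory as features are added step by step

# Synthetic demo: feature 0 carries the signal, features 1-3 are noise.
rng = np.random.default_rng(1)
y = np.tile([0, 1], 100)
X = rng.normal(size=(200, 4))
X[:, 0] = X[:, 0] + 3 * y
history = forward_select(X, y)
print(history[0])  # first step should pick feature 0 with a low BER
```

The `history` list corresponds to the BER-versus-subset-size curves of Fig 4; the optimal subset is the entry with the smallest BER.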

Fig 3.

Flow diagram for patient selection.

According to the ARDS diagnostic criteria, the study population was selected from more than 40,000 patients in the MIMIC-III database; 8702 eligible patients were included, and the data were randomly divided into training and test sets.

Table 2.

Patient demographics in the training and test sets (ICU: Intensive Care Unit, CSRU: Cardiac Surgery Recovery Unit, MICU: Medical Intensive Care Unit, CCU: Coronary Care Unit, SICU: Surgical Intensive Care Unit, TSICU: Trauma Surgical Intensive Care Unit).

Table 3.

Patient characteristics in the training and test sets (Nisbp: noninvasive systolic blood pressure, Nidbp: noninvasive diastolic blood pressure, Nimbp: noninvasive mean blood pressure, OI: (FiO2 × mean airway pressure)/PaO2, OSI: (FiO2 × mean airway pressure)/SpO2).

Table 4.

Physiological parameter scores and rankings for different feature selection methods.

Fig 4.

Feature selection based on the four methods discussed in this study.

The x-axis is the number of features; the y-axis is the mean BER over ten-fold cross-validation; the gray shaded area is the standard deviation of the BER over the ten folds for each feature subset. The figure shows how the BER of the four algorithms changes as features are added step by step. The green circle marks each algorithm's optimal feature subset, and the red triangle marks its minimum feature subset.

Table 5.

Identification results of the four algorithms on the training set for different feature subsets.

Fig 5.

AUC of the four tested algorithms on the training set for different feature subsets.

Based on the feature selection experiments, the training set was used to evaluate the recognition performance of the four machine learning algorithms under different feature subsets.

Table 6.

Identification results of the four algorithms on the test set for different feature subsets.

Fig 6.

ROC curves of the four studied algorithms on the test dataset.

Based on the experimental results on the training set, the performance of the four machine learning algorithms and the Rice linear model on the minimum feature subset was evaluated on the test set, and the ROC curve of each algorithm was drawn.
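
Each ROC curve summarizes to an AUC value. A minimal rank-based AUC computation (the Mann–Whitney form: the probability that a random positive scores higher than a random negative, with ties counting half), offered as an illustrative sketch rather than the paper's implementation:

```python
import numpy as np

def roc_auc(y_true, scores):
    # Rank-based AUC: P(score of random positive > score of random negative),
    # counting tied pairs as 0.5.
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Illustrative labels and classifier scores (not data from the paper).
y = [1, 1, 0, 0, 1, 0]
s = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]
print(roc_auc(y, s))  # 8 of 9 positive/negative pairs ranked correctly
```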
