Abstract
To address the high failure rate and low accuracy of edge segmentation in computed tomography (CT) images, we propose a CT sequence image edge segmentation optimization algorithm based on an improved convolutional neural network (CNN). First, a pattern clustering algorithm clusters spatially related pixels in the CT sequence images to extract the edge information of the real CT image. Second, the Euclidean distance is used to calculate and measure similarity, and based on the measurement results the CNN layers are optimized hierarchically to improve the network's convergence. Finally, the pixels of the CT sequence images are classified, and the edge segmentation is optimized according to the classification results. The results show that the overall recognition rate of the method is high, that training time drops markedly once the number of training rounds exceeds 12, that the recall rate stays around 90%, and that segmentation accuracy is high, addressing the problems of high failure rate and low accuracy.
Citation: Wang X, Wei Y (2022) Optimization algorithm of CT image edge segmentation using improved convolution neural network. PLoS ONE 17(6): e0265338. https://doi.org/10.1371/journal.pone.0265338
Editor: Anandakumar Haldorai, Sri Eshwar College of Engineering, INDIA
Received: August 10, 2021; Accepted: March 1, 2022; Published: June 3, 2022
Copyright: © 2022 Wang, Wei. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Readers can access the data supporting the conclusions of the study from the MURA data set and the CT data set of the imaging department of a hospital. The MURA data that support the findings of this study are openly available at https://stanfordmlgroup.github.io/competitions/mura/. The hospital data that support the findings of this study are available from the First Psychiatric Hospital of Harbin. Restrictions apply to the availability of these data, which were used under license for this study. Data are available from 610984970@qq.com with the permission of the First Psychiatric Hospital of Harbin.
Funding: This work is supported by the Heilongjiang Provincial Department of Education Natural Science Research Project (No. 2016-KYYWF-0560) and the general scientific research project of Jiamusi University (No. L2012-075).
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
People are paying increasing attention to their living environment and health, especially health problems [1, 2]. Medical imaging provides patients with more intuitive, clearer, and more accurate diagnoses [3]. With the continuous development of medical imaging technology, imaging-based diagnosis and treatment methods are becoming increasingly important [4, 5]. The wide application of new digital medical imaging technologies such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US) makes high-resolution medical image data readily available; making reasonable and effective use of these valuable data is the key to helping doctors diagnose and treat disease. CT sequence images are real-time, non-destructive, and low-cost, and have been widely used in disease prevention, diagnosis, and treatment; CT image processing has therefore become one of the main research directions at home and abroad [6, 7]. Deep learning, represented by convolutional neural networks, is widely used in medical image processing and has developed rapidly in recent years [8, 9]. By building neural networks with multiple hidden layers and training on large amounts of data, more useful feature information can be extracted, so that medical images can be processed more accurately; good results have been obtained for CT image processing [10, 11].
Literature [12] proposed image segmentation based on a convolutional neural network: the network is trained with multi-scale feature fusion and residual connections, the segmentation model is optimized, and high-definition images are segmented using spline interpolation. Literature [13] combined residual learning with densely connected networks to fully extract image features and added hollow convolution to the network; the improved network achieves higher accuracy and sensitivity in image processing. Literature [14] fused image feature information and used an improved convolutional neural network to coarsely segment the image, adding a batch normalization layer after each convolution layer to speed up convergence and improve segmentation accuracy.
In addition, because CT sequence images have poor visibility and their changes are difficult to observe, the size and shape of the relevant organs cannot be determined in most clinical practice. At the same time, manual segmentation remains the clinical mainstay, but it is time-consuming and poorly repeatable, depending largely on the experience and ability of the doctor: different doctors produce different edge segmentation results, and even the same doctor produces slightly different results at different times. Real-time edge segmentation of CT sequence images is an important technology in clinical applications such as brachytherapy. Previous studies processed only individual CT images without considering their sequence attributes. The main contributions of this paper are as follows: (1) Pixels with spatial relationships in the CT sequence image space are clustered to obtain the edge information of the CT sequence image while simultaneously de-noising it. (2) To solve the vanishing-gradient problem of low-level network layers during back-propagation, we both consider the particularity of CT sequence images and improve the convergence ability of the convolutional neural network. (3) The details of the target and the corresponding spatial dimensions are gradually recovered through network layers such as de-convolution layers; at the same time, hollow convolution enlarges the receptive field while keeping the weight parameters unchanged, effectively maintaining the resolution of CT sequence images.
The organization of this paper is as follows. We introduce the problem and related work in Sections 1 and 2, describe the methodology in Section 3, present the experimental results and discussion in Section 4, and conclude the paper in Section 5.
2. Related work
Many studies have addressed CT image segmentation. The image segmentation method of Literature [15] builds on a large number of prior segmentation methods; these rely on imprecise liver regions or require extensive training. Literature [16] segmented and marked each cell in bright-field images according to the morphological features of yeast cells to identify seed points on the cell contour; the edges of yeast cells were successfully extracted from bright-field images of sparsely distributed cells, and the results show that densely packed cell images can also be segmented and labeled correctly. Literature [17] improved the contrast between normal liver parenchyma and tumor tissue through adaptive piecewise nonlinear enhancement and iterative convolution; on this basis, the enhancement result and image boundary information are integrated into the graph-cut energy function, false segmentation regions are removed, and the initial segmentation is refined, realizing preliminary automatic segmentation of liver tumors and effective automatic segmentation of liver tumors in CT sequences. Literature [18] proposed a high-precision region segmentation method constrained by the continuous features of CT images; taking the updated mean of seed points and continuous segmentation as constraints, it achieves high-precision region segmentation of CT image features and reduces the likelihood of holes and non-vascular information during liver segmentation. In foreign research, Literature [19] proposed a new method for measuring voxel porosity and permeability of gray areas, using a lattice evaluation method to assess reservoir permeability from CT images.
After obtaining the pore-size distribution curve experimentally, the boundary conditions constraining image segmentation are strengthened according to the image gray scale, the integral pore volume, and the linear relationship between the CT number and the porosity of a single element. Literature [20] proposed a feature segmentation method based on fuzzy image edges: after preprocessing removes impurities and noise, the processed image is segmented along its fuzzy edges. Experiments show that this method effectively improves segmentation efficiency and accuracy and shortens the time needed to segment fuzzy edges.
Existing research processes only individual CT images and does not consider their sequence attributes. This paper therefore proposes a CT sequence image edge segmentation optimization algorithm based on an improved convolutional neural network. Experiments show that the overall recognition rate of this method is high, the error rate is low, the recall rate stays around 90%, and the image segmentation accuracy, as high as 0.99, is far above that of the other literature algorithms.
3. Methodology
3.1 Integration of edge information of computed tomography sequence image
To obtain the edge information of the CT sequence image, edge information corrupted by noise must be removed. Images that meet the requirements are divided into different types by a simple pattern clustering method, and the edge information of the real image is extracted as completely as possible [21]. Let the automatic threshold be f; the intersection n obtained by intercepting multiple fitting curves in the data space is an edge point of the image, and, owing to the threshold f0, noise edge points are not intercepted and are thereby eliminated. The information at n is
(1)
where AG refers to the image edge-point information function at intersection n. From this we obtain the actual image edge-point cluster set:
(2)
where k0 refers to the initial cluster information and kn to the final cluster information. Solving this equation gives the position of image edge point n:
(3)
where Ae refers to the position of the operation unit for the number of clustering information, and e and j are cluster dimensions. The edge points n of the CT sequence image are then detected in turn, and the noise edge points are eliminated one by one [22].
Because different features of CT sequence images come from different feature extraction methods, they have different properties, arising, for example, from different machine parameters, date ranges, or patients. Computing the super-pixel similarity of CT sequence images therefore requires an independent similarity calculation for each practical application property [23, 24]. To process all the different CT sequence images at the same time despite their differing features, all image features are combined into a high-dimensional feature vector Av, and the Euclidean distance is used to calculate the similarity hp:
(4)
The super-pixel similarity hk of computed tomography sequence image features is:
(5)
where Fi refers to the similarity measure. After normalization, the super-pixel similarity of CT sequence image features would dominate the similarity measure and weaken the low-dimensional similarity characteristics of the super-pixels. Different features are therefore given different weights m to balance their proportions in the similarity calculation [25, 26] and to reduce the proportion GH of irrelevant features:
(6)
The proportion GH of each detected image edge point is connected into a curve to obtain continuous image edge information.
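Since Eqs (4)-(6) themselves are not reproduced here, the following numpy sketch only illustrates the idea of a weighted Euclidean similarity between high-dimensional feature vectors; the inverse-distance similarity form and the weight normalization are assumptions, not the paper's exact formulas.

```python
import numpy as np

def weighted_similarity(a, b, w):
    """Similarity between two super-pixel feature vectors a and b.

    w holds per-feature weights m that balance features of different
    scales (cf. Eqs (4)-(6)); the inverse-distance similarity below is
    an illustrative assumption.
    """
    a, b, w = np.asarray(a, float), np.asarray(b, float), np.asarray(w, float)
    w = w / w.sum()                        # normalize weights to sum to 1
    d = np.sqrt(np.sum(w * (a - b) ** 2))  # weighted Euclidean distance
    return 1.0 / (1.0 + d)                 # map distance to (0, 1] similarity

# identical vectors give similarity 1; down-weighting an irrelevant,
# differing feature raises the similarity
s_same = weighted_similarity([1, 2, 3], [1, 2, 3], [1, 1, 1])
s_diff = weighted_similarity([1, 2, 3], [1, 2, 9], [1, 1, 0.01])
```

Down-weighting a feature reduces its contribution to the distance, which is the role the weights m play in balancing features of different scales.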
3.2 Edge segmentation of computed tomography sequence image based on improved convolutional neural network
3.2.1. Analysis of convolution neural network improvement.
The edge information of the CT sequence image can be extracted by the above process [27, 28]. When the distribution of the input values gradually shifts or changes, the overall distribution approaches the upper and lower ends of the value range of the nonlinear activation function, so the gradients of the low-level layers vanish during back-propagation; this is the fundamental reason for the slow convergence of deep neural network training [29, 30]. A convolutional neural network that converges quickly can promote the edge segmentation of CT images, so the network layers are optimized hierarchically according to the particularity of CT sequence images. The optimization results are as follows:
- Input layer: CT images of a specific size are input according to the recognition object; here 28×28 and 50×50 medical images are selected;
- Convolutional layer C1: the input image is convolved with six 5×5 convolution kernels to form six 24×24 feature maps (28 − 5 + 1 = 24);
- Down-sampling layer S1: each feature map of the C1 layer is down-sampled by a factor of 2, forming six 12×12 feature maps without overlapping sampling;
- Convolutional layer C2: sixteen 5×5 convolution kernels form sixteen 8×8 feature maps (12 − 5 + 1 = 8);
- Down-sampling layer S2: composed of sixteen 4×4 feature maps; each neuron is connected to a 2×2 neighborhood of the C2 layer;
- Fully connected layer F1: each feature map is connected with all feature maps of the S2 layer, whose feature maps are flattened into a one-dimensional input vector;
- Output layer: a sigmoid function divides the output into two categories, edge segmentation success and failure.
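The layer dimensions listed above can be verified with a short sketch for the 28×28 input; the helper names below are illustrative, not from the paper.

```python
def conv_out(size, kernel, stride=1):
    """Valid convolution output size: (size - kernel) / stride + 1."""
    return (size - kernel) // stride + 1

def pool_out(size, window=2):
    """Non-overlapping down-sampling output size."""
    return size // window

# Trace a 28x28 input through the layer stack described above.
s = 28
s = conv_out(s, 5)   # C1: six 5x5 kernels -> 24x24 (28 - 5 + 1)
s = pool_out(s)      # S1: 2x down-sampling -> 12x12
s = conv_out(s, 5)   # C2: sixteen 5x5 kernels -> 8x8 (12 - 5 + 1)
s = pool_out(s)      # S2: 2x down-sampling -> 4x4
flat = 16 * s * s    # F1 input: sixteen 4x4 maps flattened -> 256-dim vector
```

The same arithmetic applies to the 50×50 input, with correspondingly larger intermediate maps.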
In the edge segmentation of CT sequence images, each pixel carries a case identification number and semantic category information such as department and disease; the lesion feature information is the most important and can be predicted by instance branches and semantic branches [31, 32]. Taking the instance branch as an example, the existing network framework is designed around the evaluation index of instance edge segmentation: only the prediction accuracy of each instance is considered, and relationships between instances of different types are not handled, making it difficult to determine the overlapping areas of different instance types. To solve this problem, a new multi-scale network structure is adopted that can be grafted directly onto the original network framework. It has two parts: (1) the bottom-up path serves as the original base network; the last-layer feature map of each stage is taken as the feature map to be fused, each resolution's feature map is introduced into the feature map at twice the resolution, and the feature maps are superimposed pixel by pixel; (2) the top-down path up-samples the feature maps, so that the high-resolution feature maps obtained by up-sampling carry higher-level semantic information. The structure is shown in Fig 1.
As seen from Fig 1, the feature maps used for prediction at each layer are fused into feature maps with different resolutions and semantic levels, and feature maps of different resolutions are fused at the corresponding resolutions. This preserves both the resolution and strong semantic features of each layer; only cross-layer connections are added to the original base network, so the cost in computation time is low and the convergence speed improves.
3.2.2. Computed tomography sequence image edge segmentation optimization algorithm.
Generally, convolutional edge segmentation algorithms classify the input image. However, CT sequence images have sequence attributes, the computational load of classification is large, and the best single resolution cannot be guaranteed. Based on encoding and decoding, this paper therefore proposes a practical deep fully convolutional neural network: a decoder maps the low-resolution encoder feature maps to full input-resolution feature maps to perform pixel classification of the CT sequence images and optimize the edge segmentation. The algorithm is as follows:
Input: Computed tomography sequence image original pixels and cluster information;
Output: Computed tomography sequence image edge segmentation result.
Initialize the computed tomography sequence image information and the convolutional neural network, and use the improved convolutional neural network to optimize the image edge segmentation. The steps are as follows:
- The decoder uses its low-resolution input feature map and the pool indices computed by the max-pooling steps Smj of the corresponding encoder for nonlinear up-sampling. The up-sampled feature vector can be expressed as:
(7)
Since the up-sampled feature map is sparse, E(A) is convolved with a trainable filter to generate a dense feature map:
(8)
- Due to the superposition of max pooling and sub-sampling, the boundary detail loss coefficient Ci increases; boundary information can be captured and stored in the encoded feature map. The boundary information can be expressed as:
(9)
- For efficiency, only the boundary information of the max-pool indices is stored. For each 2×2 pooling window the stride is 2 in principle, so storing these indices is more efficient than storing the feature map in floating-point precision. Note that the final decoder produces a multi-channel feature map and feeds it to the classifier to obtain the pixels of the multi-channel image. Each pixel is classified independently, and the classification function is:
(10)
- The predicted edge segmentation assigns each pixel the class with the largest probability. However, since the stride is reduced, the maximum-probability class xmax(i) is expressed as:
(11)
where (i, j) are the pixel coordinates.
- In this way a denser feature map is obtained, but this raises another issue: the change of the receptive field. The receptive field is directly related to the stride (cf. Formula 11): if the stride becomes smaller and the receptive field is to stay unchanged, the convolution kernel must be enlarged. Therefore, the sub-sampling operation below the last max-pooling layer is removed and the filter is up-sampled. The sampling matrix is:
(12)
where the two kernel terms represent the maximum and minimum convolution kernels, respectively.
- When the resolution of the down-sampled feature map decreases, hollow (dilated) convolution is inserted between consecutive filter values to enlarge the receptive field during convolution, which effectively improves the resolution of the feature map and the image quality. The hollow convolution function is:
(13)
In the final feature aggregation layer, all fully connected layers are replaced by hollow convolutions, where N refers to the output features, r to the void (dilation) factor, and wk to the k-th parameter of the convolution kernel.
Hollow convolutions with different coefficients are stacked to obtain a larger receptive field, yielding richer multi-scale image feature information in the encoder stage. This not only better identifies the lesion area for edge segmentation but also produces a smooth edge contour, completing the design of the CT sequence image edge segmentation optimization algorithm.
- End.
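The max-pool-index storage and nonlinear up-sampling described in the steps above can be sketched in numpy. This is a minimal single-channel illustration under assumed function names; the trainable filter that densifies the up-sampled map is omitted.

```python
import numpy as np

def max_pool_with_indices(x):
    """2x2 max pooling, stride 2, storing the argmax index of each window.

    Only the winning position per window is kept, which is cheaper than
    storing the full floating-point feature map.
    """
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    idx = np.zeros((h // 2, w // 2), dtype=int)
    for i in range(h // 2):
        for j in range(w // 2):
            win = x[2*i:2*i+2, 2*j:2*j+2]
            k = int(np.argmax(win))      # 0..3: position inside the window
            pooled[i, j] = win.flat[k]
            idx[i, j] = k
    return pooled, idx

def unpool_with_indices(pooled, idx):
    """Nonlinear up-sampling: place each value back at its stored index,
    producing a sparse map that a trainable filter would then densify."""
    h, w = pooled.shape
    out = np.zeros((2 * h, 2 * w))
    for i in range(h):
        for j in range(w):
            k = idx[i, j]
            out[2*i + k // 2, 2*j + k % 2] = pooled[i, j]
    return out

x = np.arange(1, 17, dtype=float).reshape(4, 4)
pooled, idx = max_pool_with_indices(x)
restored = unpool_with_indices(pooled, idx)
```

The round trip preserves each window's maximum at its original position while zeroing the rest, which is why boundary detail survives the encoder-decoder path.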
Based on the above algorithm process, the optimal edge segmentation of computed tomography sequence images can be realized, as shown in Fig 2.
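The effect of the hollow (dilated) convolution on the receptive field can be illustrated in one dimension. The sketch below assumes "valid" boundary handling and is not the paper's exact Formula 13; it only shows that spreading the kernel taps r apart grows the receptive field without adding weights.

```python
import numpy as np

def dilated_conv1d(x, w, r):
    """1-D 'hollow' (dilated) convolution with dilation factor r.

    y[i] = sum_k w[k] * x[i + r*k]: the kernel taps are spread r apart,
    so the receptive field grows to r*(len(w)-1)+1 with no extra weights.
    """
    k = len(w)
    span = r * (k - 1) + 1           # effective receptive field
    n = len(x) - span + 1            # 'valid' output length
    y = np.array([sum(w[j] * x[i + r * j] for j in range(k))
                  for i in range(n)])
    return y, span

x = np.arange(10, dtype=float)
y1, rf1 = dilated_conv1d(x, [1.0, 1.0, 1.0], r=1)  # receptive field 3
y2, rf2 = dilated_conv1d(x, [1.0, 1.0, 1.0], r=2)  # receptive field 5
```

With the same three weights, raising r from 1 to 2 widens the receptive field from 3 to 5 samples, which is the mechanism the encoder stage uses to gather multi-scale context.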
4. Experiments and results
4.1. Experimental environment
The software and hardware environment of this experiment is as follows. Hardware: 3.5 TB hard disk, 32.0 GB memory, Intel Xeon E5-2407 v2 CPU @ 2.40 GHz. Software: Windows 7 operating system and MATLAB R2014b.
4.2. Data sets
The data sources of this experiment are the MURA data set and the CT data set of the imaging department of a hospital. MURA data set: released by Stanford, it is one of the largest public radiographic image data sets. In this experiment, chest X-ray images diagnosed with radiation pneumonia were selected as one of the data sources, for a total of 5000 lung CT images. Hospital CT data set: 5000 lung in-situ CT images were collected from the imaging department of a municipal hospital.
4.3. Experimental steps
- To avoid over-fitting caused by too small a data set, lung tumors are not limited to a specific type, and only image data marked by experts as lung cancer are used as experimental data. The experimental data comprise 200 lung tumor images and 800 normal lung images.
- Image preprocessing: false color in the CT images was removed, local features of the three modal images in the region of interest were extracted, and the images were normalized to 28×28 and 50×50 experimental images.
- Construct different sample spaces: since the three modalities, computed tomography, positron emission tomography, and positron emission tomography / computed tomography, correspond to different image data of the same patient, three different sample spaces are constructed through pre-extraction of the region of interest, pre-processing, and convolutional neural network training and testing.
- Single convolutional neural network structure: according to the given method, an improved convolutional neural network model is constructed using parameter transfer, and its performance is compared comprehensively. The effectiveness and superiority of the method are verified by studying the influence of the number of iterations and of batch information on the recognition efficiency and training time of the model.
4.4. Experimental indicators
- Relationship between the number of iterations, recognition rate, and training time: in the iterative layer of the convolutional neural network, when a series of operation steps is judged to be repeated, each result is obtained by performing the same operation steps on the previous result, so the accuracy of each step is derived from its predecessor. The calculation formula is:
(14)
where un is the execution count of the repeated instruction n.
- Misclassification rate: taking the influence of the amount of batch CT image information on the recognition rate and training time as an example, the relationship between the number of training rounds pS and the training time TK is determined by:
(15)
where Tq refers to the number of erroneous identifications and TP to the total number of identifications.
- Recall rate: during segmentation, as much multi-scale image feature information as possible is retained. The larger the ratio of the amount of retrieved relevant information to the total amount of relevant information in the system, the higher the recall rate.
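As a minimal illustration of the two ratio indicators above (the error rate Tq/TP of Eq 15 and the recall ratio), assuming they reduce to simple count ratios:

```python
def misclassification_rate(t_q, t_p):
    """Tq erroneous identifications out of TP total identifications
    (an illustrative reading of Eq 15)."""
    return t_q / t_p

def recall_rate(retrieved_relevant, total_relevant):
    """Ratio of retrieved relevant information to all relevant
    information in the system."""
    return retrieved_relevant / total_relevant

err = misclassification_rate(3, 100)   # 3 errors in 100 identifications
rec = recall_rate(90, 100)             # the ~90% level reported in Fig 5
```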
- CT sequence image edge information integration effect: In the computed tomography sequence image edge information integration, the key point is to accurately calculate the edge point position of the image Ae. The edge information integration effect of CT sequence images in this paper is verified by comparing the position Ae of image edge points calculated by Formula 3 with the proportion of actual image edge points.
- Image segmentation accuracy: the performance of the algorithm is verified by comparing its segmentation accuracy with that of other algorithms. The accuracy is calculated as:
(16)
where Ji refers to the calculated image segmentation result and J to the actual image segmentation result.
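Since Formula 16 itself is not reproduced here, a plausible pixel-wise reading of segmentation accuracy, agreement between the calculated result Ji and the actual result J, can be sketched as follows; the agreement form is an assumption.

```python
import numpy as np

def segmentation_accuracy(j_pred, j_true):
    """Fraction of pixels where the computed segmentation Ji matches
    the actual segmentation J (illustrative pixel-wise agreement; the
    exact form of Eq 16 may differ)."""
    j_pred = np.asarray(j_pred)
    j_true = np.asarray(j_true)
    return float((j_pred == j_true).mean())

pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]])
true = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0]])
acc = segmentation_accuracy(pred, true)  # 7 of 8 pixels agree
```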
4.5. Results and discussion
To ensure the consistency of the data and to better verify the overall performance of the improved convolutional neural network, the relationship between the number of iterations and the recognition rate of the improved model is examined, as shown in Fig 3.
According to Fig 3, when the number of iterations increases during iterative extraction of the edge information of the original image, the recognition rate rises slightly: the method trends downward between 100 and 300 iterations and upward around 300 to 500 iterations, but its overall recognition rate remains higher than that of the other five algorithms. This shows that the method can simultaneously extract the detailed and higher-level features of the original image. Traditional image recognition methods operate on the region of interest of the entire image: edge segmentation first, then feature extraction, then recognition with a suitable classifier. The convolution and down-sampling operations of the convolutional neural network simplify this tedious process and complete it directly.
Since omnidirectional boundary segmentation of lung images requires acquiring and learning the basic information of the lung images to achieve accurate recognition, the ability of different methods to obtain original lung image information is compared. Taking the batch information acquisition results as an example, the false detection rates of the five algorithms are compared, as shown in Fig 4.
According to Fig 4, the error rates of all the methods decrease as the number of training rounds increases. When the number of training rounds exceeds 12, the error rate of the algorithm in this paper drops sharply and its recognition rate increases; as training continues, the recognition rate remains high with essentially no further fluctuation. The training effect of the other methods is less satisfactory, especially at 12 training rounds: the error rates of the Literature [15], [16], and [17] algorithms remain high with no significant drop, while those of the Literature [19] and [20] algorithms decrease but remain much higher than that of the method in this paper. The reason for this difference is that the method in this paper optimizes the convolutional neural network hierarchically and improves its convergence, which allows a large amount of data to be trained, reduces training time, and yields a higher recognition rate.
In the process of CT image segmentation, this paper obtains richer multi-scale image feature information in the encoder stage. The recall of feature information reflects how richly the segmented image acquires information during feature extraction; the recall curves of the different methods are analyzed and compared, as shown in Fig 5.
According to Fig 5, the recall curve of the method in this paper is higher than those of the other literature methods: its recall rate stays around 90%, while for the Literature [15], [16], [17], and [19] algorithms the recall rate decreases as the iteration coefficient increases. Although the recall rate of the Literature [20] algorithm increases with the iteration coefficient, it peaks at about 75%. The recall rate of this method is therefore higher than that of the other methods. This is because hollow convolution is inserted between successive filter values, enlarging the receptive field during convolution and effectively improving the resolution of the feature map; richer multi-scale image feature information is obtained in the encoder stage, raising the initial recall rate, and when the scale of the image feature information remains unchanged, the overall recall rate rises.
The comparison results of the edge information integration effect in different literature are shown in Fig 6.
According to Fig 6, the image edge points of the algorithm in this paper are tightly and relatively uniformly distributed on both sides of the distribution line, in good agreement with the actual distribution of the image edge. The points of the other literature algorithms are scattered, uneven, and far from the distribution line. This is because the algorithm in this paper uses the threshold f0 to ensure that noise edge points are not intercepted but eliminated, reducing their interference, so the distribution result is closer to reality.
The comparison results of image segmentation accuracy in different literatures are shown in Table 1.
Analysis of Table 1 shows that the image segmentation accuracy of the Literature [20] algorithm is the lowest, no more than 0.50, followed by the Literature [16] and [19] algorithms with accuracy below 0.60. Literature [15] and Literature [17] have relatively high accuracy, reaching up to 0.76, while the segmentation accuracy of this method lies between 0.95 and 0.99. Under different data-size conditions, the segmentation accuracy of the other literature algorithms is thus far below that of the method in this paper, indicating that the method captures and stores boundary information effectively, greatly improving the retention of image boundary information and thereby the accuracy of image segmentation.
5. Conclusions
CT sequence imaging is currently an important method for detecting many diseases, and the accuracy of CT sequence image segmentation affects the effect of diagnosis and treatment; accurate segmentation therefore has important value. Traditional manual segmentation is poorly repeatable and time-consuming, and its accuracy cannot be guaranteed. This paper proposes a CT sequence image edge segmentation optimization algorithm using an improved convolutional neural network. Images that meet the requirements are divided into different types, the edge information of the real image is extracted as completely as possible, and the CT sequence images are de-noised at the same time. Through de-convolution and other network layers, the details of the target and the corresponding spatial dimensions are gradually recovered, effectively maintaining the resolution of the CT sequence image and completing the optimal edge segmentation. The results show that the overall performance of the proposed method is better, offering reference value for related research in image segmentation. Although this paper has done considerable work on reducing algorithm complexity, shortcomings remain in large-scale CT sequence image segmentation, and a fast CT sequence image segmentation algorithm is still lacking. Future work should therefore focus on fast segmentation of large-scale CT sequence images to improve segmentation efficiency.
References
- 1. Cho Youngbok and Woo Sunghee, "Automated ROI Detection in Left Hand X-ray Images using CNN and RNN," International Journal of Grid and Distributed Computing, vol. 11, no. 7, pp. 81–92, 2018.
- 2. Mohammed A. F. Al-Husainy and Hamza A. A. Al-Sewadi, "Implementing Binary Search Tree Concept for Image Cryptography," International Journal of Advanced Science and Technology, vol. 130, pp. 21–32, 2019.
- 3. Yu Jun, Li Jing, Yu Zhou, Huang Qingming, "Multimodal Transformer with Multi-View Visual Representation for Image Captioning," IEEE Transactions on Circuits and Systems for Video Technology, 2019.
- 4. Yu Jun, Tan Min, Zhang Hongyuan, Tao Dacheng, Rui Yong, "Hierarchical Deep Click Feature Prediction for Fine-grained Image Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. pmid:31380745
- 5. Jin Yi, Guo Xingyan, Li Yidong, Xing Junliang, Hui Tian, "Towards Stabilizing Facial Landmark Detection and Tracking via Hierarchical Filtering: A New Method," Journal of the Franklin Institute.
- 6. Mushtaq Saba and Mir Ajaz Hussain, "Image Copy Move Forgery Detection: A Review," International Journal of Future Generation Communication and Networking, vol. 11, no. 2, pp. 11–22, 2018.
- 7. Ryu Ho and Moon Il-Young, "Measuring Similarity Analysis for Image Verification," International Journal of Grid and Distributed Computing, vol. 11, no. 7, pp. 53–62, 2018.
- 8. Jeong YunSang, An JiHye and Park Jinho, "An Effective Image Distortion Correction Method using Depth Sensor," International Journal of Control and Automation, vol. 11, no. 4, pp. 21–30, 2018.
- 9. Yin Yuyu, Xia Jing, Li Yu, Xu Yueshen, Xu Wenjian, Yu Lifeng, "Group-Wise Itinerary Planning in Temporary Mobile Social Network," IEEE Access, vol. 7, pp. 83682–83693, 2019.
- 10. Gao Honghao, Xu Yueshen, Yin Yuyu, Zhang Weipeng, Li Rui, Wang Xinheng, "Context-aware QoS Prediction with Neural Collaborative Filtering for Internet-of-Things Services," IEEE Internet of Things Journal, 2019. https://doi.org/10.1109/JIOT.2019.2956827
- 11. Li JX, Zhang XK, Wang Z, Chen XM, Chen J, Li YS, et al., "Dual-band eight-antenna array design for MIMO applications in 5G mobile terminals," IEEE Access, no. 1, pp. 71636–71644, 2019.
- 12. Peizhi Wen, Yuanyuan Miao, Ying Zhou, et al., "An improved image automatic segmentation method based on convolutional neural network," Computer Application Research, vol. 35, no. 9, pp. 294–298, 2018.
- 13. Chenyue Wu, Benshun Yi, Yungang Zhang, et al., "Retinal vascular image segmentation based on improved convolutional neural network," Acta Optica Sinica, vol. 38, no. 11, pp. 125–131, 2018.
- 14. Botao Xing, Qiang Li, Xin Guan, "Brain tumor image segmentation based on improved full convolution neural network," Signal Processing, vol. 12, no. 8, pp. 911–922, 2018.
- 15. Jialin Peng, Ping Jie, "Liver CT image segmentation based on prior constraint between sequences and multi-view information fusion," Journal of Electronics and Information, vol. 40, no. 4, pp. 971–978, 2018.
- 16. Wang L, Li S, Sun Z, et al., "Segmentation of yeast cell's bright-field image with an edge-tracing algorithm," Journal of Biomedical Optics, vol. 23, no. 11, pp. 116503.1–116503.7, 2018. pmid:30456935
- 17. Miao Liao, Yizhi Liu, Yang Ou, Lin Jun, et al., "Automatic segmentation of liver tumors based on non-linear enhancement and graph cuts," Journal of Computer Aided Design and Graphics, vol. 9, no. 6, pp. 1030–1038, 2019.
- 18. Li Li Ding, Shuang Jiang Shuang, Dan Zhou, "High precision region segmentation of CT image features," Biomedical Engineering Research, vol. 37, no. 4, pp. 129–132, 2018.
- 19. Dong Hun K, Eomzi Y, Tae Sup Y, "Stokes–Brinkman Flow Simulation Based on 3D μ-CT Images of Porous Rock Using Gray-Scale Pore Voxel Permeability," Water Resources Research, vol. 55, no. 5, pp. 4448–4464, 2019.
- 20. Shu J, Chen Z, Xu C, "Feature Segmentation for Blurred Edge of Ship Image Based on Depth Learning," Journal of Coastal Research, vol. 83, no. 7, pp. 781–785, 2018.
- 21. Xia S, Zhu H, Liu X, et al., "Vessel Segmentation of X-Ray Coronary Angiographic Image Sequence," IEEE Transactions on Biomedical Engineering, vol. 99, pp. 1–1, 2019. pmid:31494537
- 22. Yin Yuyu, Chen Lu, Xu Yueshen, Wan Jian, Zhang He, Mai Zhida, "QoS Prediction for Service Recommendation with Deep Feature Learning in Edge Computing Environment," Mobile Networks and Applications, 2019. https://doi.org/10.1007/s11036-019-01241-7
- 23. Li Xiaofeng, Sui Jun and Wang Yanwei, "Three-Dimensional Reconstruction of Fuzzy Medical Images Using Quantum Algorithm," IEEE Access, vol. 8, pp. 218279–218288, 2020.
- 24. Piao Weng, Yanhui Lu, Xianbiao Qi, et al., "Pavement crack segmentation technology based on improved full convolution neural network," Computer Engineering and Application, vol. 12, no. 16, pp. 235–239, 2019.
- 25. Ding Fan, Yide Hu, Jiankang Huang, et al., "Defect recognition method for X-ray image of pipe weld based on improved convolution neural network," Journal of Welding, vol. 41, no. 1, pp. 7–11, 2020.
- 26. Rampun A, Scotney B W, Morrow P J, et al., "Segmentation of breast MR images using a generalised 2D mathematical model with inflation and deflation forces of active contours," Artificial Intelligence in Medicine, vol. 97, no. 6, pp. 44–60, 2019. pmid:30420243
- 27. Seo H, Huang C, Bassenne M, et al., "Modified U-Net (mU-Net) With Incorporation of Object-Dependent High Level Features for Improved Liver and Liver-Tumor Segmentation in CT Images," IEEE Transactions on Medical Imaging, vol. 39, no. 5, pp. 1316–1325, 2020.
- 28. Li Y, Ho C P, Toulemonde M, et al., "Fully Automatic Myocardial Segmentation of Contrast Echocardiography Sequence Using Random Forests Guided by Shape Model," IEEE Transactions on Medical Imaging, vol. 37, no. 5, pp. 1081–1091, 2019.
- 29. Devikanniga D., "Diagnosis of osteoporosis using intelligence of optimized extreme learning machine with improved artificial algae algorithm," International Journal of Intelligent Networks, vol. 1, pp. 43–51, 2020.
- 30. Assad M. B. and Kiczales R., "Deep Biomedical Image Classification Using Diagonal Bilinear Interpolation and residual network," International Journal of Intelligent Networks, vol. 1, pp. 148–156, 2020.
- 31. Li Xiaofeng, Hongshuang Jiao, Dong Li, "Intelligent Medical Heterogeneous Big Data Set Balanced Clustering Using Deep Learning," Pattern Recognition Letters, vol. 138, pp. 548–555, 2020.
- 32. Silkwood J, Matthews K, Shikhaliev P, "TU-E-217BCD-04: Spectral Breast CT: Effect of Adaptive Filtration on CT Numbers, CT Noise, and CNR," Medical Physics, vol. 39, no. 24, pp. 3914–3920, 2018.