
A deep learning-based dynamic deformable adaptive framework for locating the root region of the dynamic flames

  • Hongkang Tao,

    Roles Data curation, Formal analysis, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    Affiliation School of Advanced Manufacturing, Nanchang University, Nanchang, China

  • Guhong Wang,

    Roles Data curation, Software

    Affiliation School of Advanced Manufacturing, Nanchang University, Nanchang, China

  • Jiansheng Liu ,

    Roles Funding acquisition, Resources, Software, Writing – review & editing

    liujianshengncdx@163.com (JL); yangzan@ncu.edu.cn (ZY)

    Affiliations School of Advanced Manufacturing, Nanchang University, Nanchang, China, Research Center of Manufacturing Industry Information Engineering Technology of Jiangxi Province, Nanchang, China

  • Zan Yang

    Roles Data curation, Funding acquisition, Writing – original draft, Writing – review & editing

    liujianshengncdx@163.com (JL); yangzan@ncu.edu.cn (ZY)

    Affiliations School of Advanced Manufacturing, Nanchang University, Nanchang, China, Jiangxi Tellhow Military Industry Group Co., Ltd., Nanchang, China

Abstract

Traditional optical flame detectors (OFDs) are susceptible to environmental interference, which inevitably causes detection errors and misjudgments in complex environments. Conventional deep learning-based models can mitigate the interference of complex environments through flame image feature extraction, which significantly improves the precision of flame recognition. However, these models focus on identifying the general profile of a static flame and neglect to effectively locate the source of a dynamic flame. Therefore, this paper proposes a novel dynamic flame detection method named Dynamic Deformable Adaptive Framework (DDAF) for dynamically locating the flame root region. Specifically, to address the limitations of existing detection models in flame feature extraction, Deformable Convolution Network v2 (DCNv2) is introduced for more flexible adaptation to the deformations and scale variations of target objects. The Context Augmentation Module (CAM) conveys flame features into the Dynamic Head (DH) for feature extraction from different aspects. Subsequently, Layer-Adaptive Magnitude-based Pruning (LAMP), in which the connection with the smallest LAMP score is pruned sequentially, is employed to further increase detection speed. More importantly, both coarse- and fine-grained localization techniques are designed in the Inductive Modeling (IM) step to accurately delineate the flame root region for effective fire control. Additionally, Temporal Consistency-based Detection (TCD) improves the robustness of detection by leveraging the temporal information present in consecutive frames of a video sequence. Compared with a classical deep learning method, experimental results on a custom flame dataset demonstrate that the AP0.5 value is improved by 4.4%, while parameters and FLOPs are reduced by 25.3% and 25.9%, respectively. The proposed framework extends to a variety of flame detection scenarios, including industrial safety and combustion process control.

1 Introduction

The lives and property of individuals are directly impacted by fire safety. In complex outdoor environments, the rapid spread of fire causes casualties and property loss, so timely flame detection is exceptionally crucial. Optical flame detectors (OFDs) employ sensors to monitor specific light frequencies for flame detection. OFDs successfully fuse particular feature fusion algorithms, such as the scale-invariant feature transform (SIFT) [1], the flame detection algorithm based on multi-feature fusion (FDAMF) [2], and fire color feature extraction (FCFE) [3], with optical sensor hardware such as [4, 5]. These OFDs, i.e., infrared [6], ultraviolet [7], and infrared/ultraviolet [8], are frequently used in the firefighting [9], medical [10], and agricultural [11] communities. In short, OFDs integrate optical sensors and algorithms for flame detection in diverse applications: the valuable signals obtained from the optical sensors can be effectively combined by these feature fusion algorithms, enhancing the overall capability of flame detection systems.

However, OFDs face limitations regarding their detection process, their susceptibility to interference in complex environments, and their compatibility with advanced devices. First, owing to characteristics such as signal transmission and communication interface requirements, OFDs adopt multi-stage detection methods. These methods include sequential processes such as feature detection, feature-to-signal transformation, and target recognition, which collectively lead to a low detection speed. Second, false detections in outdoor environments may be triggered by objects with flame-like characteristics (e.g., red roses, fire engines, the sun, among others). Third, the algorithms of traditional OFDs may struggle to effectively locate the root region of dynamic flames that exhibit rapid changes in shape, intensity, or movement. Moreover, these computationally complex algorithms are not well suited to contemporary advanced devices, leading to a sluggish detection process.

To overcome the low robustness and slow detection speed of existing OFDs, one-stage deep learning (DL)-based object detection methods, which recognize objects directly, have gained wide attention. They are effective in many fields, such as intelligent security [12], intelligent transportation [13], and intelligent firefighting [14]. Specifically, a DL model automatically discovers and extracts the most relevant and salient features of each object class through neural networks trained on data [15]. Moreover, DL-based methods are generally end-to-end models, e.g., [13, 16, 17], where the target features are directly extracted from the input data, so computational efficiency is greatly improved. Currently, the availability of abundant computational resources, such as GPUs and TPUs, enables faster inference for deep learning algorithms, making their deployment on devices feasible [18]. Therefore, deep learning-based methods are more compatible for integration with existing optical sensing devices. Additionally, DL-based methods are becoming increasingly mature in various industrial applications [19, 20], further confirming their effectiveness.

In many studies, the You Only Look Once (YOLO) family of DL-based algorithms [21–23], such as YOLOv5 [24], possesses the characteristics of end-to-end and multi-scale object detection. These characteristics make it exceptional in terms of speed and real-time detection. However, when faced with flame detection, DL-based YOLO has limitations in efficiently detecting dynamic flames in real time, where the landscape of the dynamic flame is uncertain. Concretely, current DL-based YOLO work on flame recognition has primarily focused on balancing model accuracy and efficiency, often overlooking the robustness of flame detection models. More importantly, recent studies [25, 26] suggest that the robustness of object detection models is a critical factor in designing new models; however, the existing flame detection methods have not explicitly considered how to improve robustness. In addition, research on flame detection has not been designed around two key characteristics of flames, i.e., dynamic recognition and flame root region localization. Therefore, the deformable convolution network v2 (DCNv2) [27], context augmentation module (CAM) [28], dynamic head (DH) [29], and temporal consistency-based detection (TCD) [30] are effectively integrated into YOLOv5 to address these challenges. Specifically, DCNv2 employs deformable convolutional offsets to enhance the extraction of flame image features across different scales. CAM employs an adaptive fusion method to effectively combine the features obtained from multi-scale dilated convolutions. DH integrates three self-attention mechanisms, i.e., scale-aware, spatial-aware, and task-aware, to generate real-time results. TCD determines whether targets are real by focusing on their consistency across consecutive frames. These techniques effectively mitigate the complexities of the environment and significantly enhance the precision of dynamic flame detection. After integrating the four techniques, a pruning technique [31] is introduced to accelerate detection. Then, the inductive modeling (IM) approach [32, 33] improves the ability to localize the flame root region once precision has been enhanced. Research findings indicate that targeting the flame root region for fire suppression enables more effective control of the fire [34–36]. Therefore, this paper proposes a novel dynamic flame detection method named Dynamic Deformable Adaptive Framework (DDAF) for dynamically locating the flame root region.

The main contributions of this paper can be summarized as follows.

  1. The proposed Dynamic Deformable Adaptive Framework achieves the localization of the flame root region by detecting the flame. This improvement is essential for achieving more effective fire suppression and control.
  2. To overcome the limitations of existing algorithms in OFDs, this paper introduces four techniques, i.e., deformable convolution network v2 (DCNv2), context augmentation module (CAM), dynamic head (DH), and temporal consistency-based detection (TCD), and integrates them into YOLOv5 to improve dynamic feature extraction capability and detection robustness. These techniques enhance the precision of real-time detection for dynamic flames, even under uncertain flame landscape conditions.
  3. In this study, a pruning technique is applied to remove redundant parameters from the model, thus improving the flame detection speed.
  4. This paper successfully combines the inductive modeling (IM) method with YOLOv5 to model the flame landscape, enabling the effective location of the flame root region.

The remainder of this paper is organized as follows. Section 2 briefly introduces the related work and background technique. Section 3 introduces the DDAF. Section 4 introduces the experimental results and analysis. Finally, Section 5 concludes this paper.

2 Related works and background techniques

2.1 Related works

This study builds on several previous works: conventional algorithms in OFDs, the YOLO-based end-to-end algorithms, and object detection algorithms for temporal consistency. The YOLO-based end-to-end algorithms have successfully tackled the limitations of conventional algorithms in the OFDs, and the TCD technique offers a solution to address the specific limitations of the YOLO-based method in flame detection.

OFDs are significant devices for ensuring fire safety, and the algorithms integrated into them play a crucial role in their flame detection capabilities. During the past decades, various algorithms for OFDs have been proposed, including those based on infrared [4, 6, 37], ultraviolet [7, 38, 39], and infrared/ultraviolet [5, 8, 40] technologies. Liu et al. proposed a flame/smoke video image detection system consisting of an infrared camera and other components [6]. Truong et al. proposed a low-cost and reliable smart fire alarm system that utilizes ultraviolet detection technology [7]. Genovese et al. proposed an image processing system for the detection of wildfire smoke based on computational intelligence techniques using infrared/ultraviolet cameras [8]. All of these detect flames from the characteristic wavelengths of a fire. These wavelengths are transformed into characteristic signals; the detector then combines the signals from both sensors and determines whether the target is a flame based on a pre-defined threshold. However, the quantification of flame features with two-dimensional data [41] and the dependence on threshold settings [6] result in limited detection speed and inadequate adaptability to different environments, respectively. This highlights the need for advances in flame detection technology and motivates the exploration of alternative approaches.

Over the years, the YOLO family [21–23, 42–44] has been one of the most popular one-stage real-time object detectors. YOLO detectors can be found on many hardware platforms and in many application scenarios, meeting different requirements. After years of development, YOLO has evolved into a series of high-speed models with strong performance. Compared to multi-stage detection algorithms, YOLO-based object detection algorithms, such as YOLOv4 [21], YOLOv5 [24], and YOLOv7 [44], demonstrate significantly faster detection speed. In recent years, some YOLO-based algorithms have been applied to flame detection. Zhao et al. [45] proposed an improved Fire-YOLO deep learning algorithm for the detection of fire targets in forest fire images. Lestari et al. [46] proposed a method that can monitor an area of fire in a building. Goyal et al. [47] used both deep learning and infrared cameras to monitor forests and their surroundings. Xiao et al. utilized the YOLOv5 deep neural network to develop a detection system for early fire warning in substation monitoring [48]. These methods are very effective in detecting static flames. However, when confronted with dynamic flames, such YOLO-based algorithms perform poorly in terms of detection precision since the landscape of the flame varies over time.

Recently, a growing number of scholars have shifted their focus to dynamic flames. For instance, Avazov et al. [49] proposed a method that relies on a lightweight CNN model and an enhanced version of YOLOv3 for detecting dynamic flames in shipyard areas. Li et al. [50] concentrated on utilizing diverse motion detection methods, such as adaptive background subtraction and motion history images, for the effective identification of dynamic flames. Wang et al. [51] employed a diverse set of methodologies, incorporating visualized heat release rate prediction, root mean square error and mean absolute error comparisons, as well as an analysis of detection time, to enhance the accuracy and efficiency of dynamic flame detection systems. Despite the effectiveness of these methods in detecting dynamic flames, they often overlook the crucial aspect of precisely locating the root region of the flame, which is a more effective area for firefighting purposes.

Based on the existing literature, current research in flame recognition has primarily focused on balancing model accuracy and efficiency, often overlooking the robustness of flame detection models. However, recent studies [52–54] suggest that the robustness of object detection models is a critical factor in designing new models, and temporal consistency-based detection (TCD) algorithms excel at improving robustness. This type of algorithm ensures that the recognized flames do not change abruptly in successive frames. TCD algorithms are commonly used in computer vision [30, 55]. They aim to enhance the precision and robustness of object detection by leveraging the temporal information present in consecutive frames of a video sequence. Nishimura et al. [52] proposed a semi-supervised cell-detection method that uses a time-lapse sequence. Jeong et al. [54] introduced a consistency-based semi-supervised learning approach for object detection. Xiao et al. [53] presented a method for detecting adversarial frames based on the temporal consistency property of videos. These works focus on ensuring that detected objects exhibit consistent appearance and motion across multiple frames. Compared to other object detection algorithms, these methods exhibit an improved ability to differentiate between true objects and false detections by taking the temporal context into account.

To further illustrate the characteristics of the related studies, a summary of the algorithms related to the proposed DDAF is given in Table 1. It can be seen that research on flame detection has not been designed around two key characteristics of flames, i.e., dynamic recognition and flame root region localization. More importantly, the existing flame detection methods have not explicitly considered how to improve robustness, even though robustness receives widespread attention in other object detection scenarios. Therefore, this paper takes these three limitations as starting points to design an effective object detection framework tailored for dynamic flame root region detection. Specifically, the integration of DCNv2, CAM, and DH effectively extracts flame features. Moreover, the pruning technique significantly improves flame detection speed. IM achieves notable success in flame root region localization, and TCD enhances flame detection robustness.

2.2 Background techniques

2.2.1 Upsample.

Upsample is a technique used to map low-resolution images to higher resolutions. This study employs nearest-neighbor interpolation for the up-sampling process (details are shown in Fig 1). Given an original image F with dimensions M × N and a desired up-sampled size P × Q (P > M, Q > N), the up-sampled image is denoted as G and its pixel at position (i, j) is denoted as G(i, j). This process is indispensable in DDAF, particularly for handling the intricate image details encountered in flame detection, and is expressed mathematically as follows: (1) G(i, j) = F(round(i × M / P), round(j × N / Q)), where round(∙) denotes rounding to the nearest integer, and i and j represent the row and column indices of the up-sampled image.
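To make the mapping concrete, the following is a minimal NumPy sketch of Eq (1), assuming the rounding-based index mapping described above; the helper name is ours, not from the paper.

import numpy as np

def nearest_neighbor_upsample(F, P, Q):
    """Upsample an M x N image F to P x Q (P > M, Q > N) by nearest-neighbor
    interpolation: each target pixel (i, j) copies the source pixel whose
    indices are obtained by rounding the scaled coordinates (Eq 1)."""
    M, N = F.shape[:2]
    G = np.empty((P, Q) + F.shape[2:], dtype=F.dtype)
    for i in range(P):
        for j in range(Q):
            src_i = min(int(round(i * M / P)), M - 1)  # clamp to a valid row index
            src_j = min(int(round(j * N / Q)), N - 1)  # clamp to a valid column index
            G[i, j] = F[src_i, src_j]
    return G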

2.2.2 Conv blocks.

Conv blocks consist of residual convolutional blocks, where the convolution operation extracts features from the input data, the activation function introduces non-linearity, and the BN layer aids the training of the network. These blocks are defined as: (2) ConvBlock(x) = SiLU(BN(Conv3×3(x))), where BN denotes batch normalization, SiLU denotes the activation function, and Conv3×3 denotes a convolution with a kernel size of 3×3. Following the design principles, these convolutions are dense.
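As a concrete illustration, the following is a minimal PyTorch sketch of one such block, assuming the SiLU activation used by YOLOv5; the class name and arguments are illustrative only.

import torch.nn as nn

class ConvBlock(nn.Module):
    """Conv3x3 -> BatchNorm -> SiLU, the basic unit described in Eq (2)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride,
                              padding=1, bias=False)  # BN supplies the bias term
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))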

2.2.3 Spatial pyramid pooling features.

SPPF performs a multi-scale pooling operation on the input feature map to capture semantic information at different scales. Details are shown in Fig 2.

To begin with, SPPF takes the input feature map, denoted as x, and applies a convolution operation that halves the channel dimension to alleviate the computational load. Then, MaxPool2d operations with kernel sizes of k = 5, k = 9, and k = 13 are conducted on the downsized feature map to generate y1, y2, and y3, respectively. These feature maps capture semantic information at various scales, corresponding to different levels of detail. Subsequently, a concat operation concatenates the original feature map x with y1, y2, and y3. Finally, a 1×1 convolution adjusts the channel dimension of the concatenated feature maps.
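The following PyTorch sketch mirrors the steps described above (channel halving, parallel 5/9/13 max-pooling, concatenation, 1×1 fusion); module and argument names are ours and only approximate the block used in DDAF.

import torch
import torch.nn as nn

class SPPF(nn.Module):
    """Sketch of the SPPF block as described above: a 1x1 conv halves the
    channels, parallel max-pools with kernels 5/9/13 capture multi-scale
    context, and a final 1x1 conv fuses the concatenated maps."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        mid = in_ch // 2
        self.reduce = nn.Conv2d(in_ch, mid, 1)
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in (5, 9, 13))
        self.fuse = nn.Conv2d(mid * 4, out_ch, 1)

    def forward(self, x):
        x = self.reduce(x)
        ys = [pool(x) for pool in self.pools]            # y1, y2, y3
        return self.fuse(torch.cat([x, *ys], dim=1))     # concat, then 1x1 conv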

3 Dynamic Deformable Adaptive Framework (DDAF)

Xu et al. (2019) proposed a method that includes adaptive spatial feature selection and temporal consistency constraints, enabling joint spatial-temporal filter learning in a lower-dimensional discriminative manifold. Inspired by this work, this paper proposes the DDAF framework. It aims to mitigate the slow and false detections of conventional OFDs, as well as the difficulty of detecting dynamic flames whose appearance varies over time.

3.1 Architecture

As shown in Fig 3, an image of size 640×640 is fed to the input. The DDAF framework consists of three parts: Backbone, Neck, and Head. They are responsible for extracting flame features, enhancing and fusing these features, and generating target detection results, respectively. In the Backbone network, the DCNv2 structure enables the network to extract features at different scales, followed by concatenation (Concat) operations, and the output feature vector is finally processed by SPPF. The Neck network uses an FPN+PAN+CAM structure for the adaptive fusion of deep and shallow network features, thus improving the quality of the features extracted from the target. The feature pyramid network (FPN) [56] structure passes deep semantic features downward from top to bottom. It effectively utilizes multi-scale feature maps to improve precision and robustness in detecting objects of different sizes within an image. The pixel aggregation network (PAN) [57] structure operates bottom-up and complements the FPN by passing low-level localization features upward. Then, CAM is integrated into the FPN to perform an adaptive fusion of features using dilated convolutions with varying rates. The integration of the FPN, PAN, and CAM structures into the Neck enables adaptive fusion, leveraging multi-scale features, complementing low-level features, and adaptively merging features to improve the accuracy and robustness of flame detection. The Head network employs dynamic head (DH) detection to improve the detection of dynamic flames, then combines IM and temporal consistency-based detection to achieve stable root region localization. Eventually, a pruning algorithm is used to improve the detection speed of the model. In summary, DCNv2 contributes flexible feature extraction, CAM enhances contextual information, and DH provides effective detection of dynamic flames, complemented by Inductive Modeling (IM) and Temporal Consistency-based Detection (TCD) for stable root region localization. Together, these components ensure a comprehensive and adaptive approach to dynamic real-time flame detection.
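The overall data flow can be summarized by the following high-level sketch, in which Backbone, Neck, and Head stand for hypothetical sub-modules corresponding to the description above rather than the authors' actual implementation.

import torch.nn as nn

class DDAFSketch(nn.Module):
    """High-level composition assumed from the architecture description:
    a DCNv2-based Backbone, an FPN+PAN+CAM Neck, and a Dynamic Head."""
    def __init__(self, backbone, neck, head):
        super().__init__()
        self.backbone = backbone   # extracts multi-scale flame features (DCNv2 + SPPF)
        self.neck = neck           # FPN + PAN + CAM adaptive feature fusion
        self.head = head           # Dynamic Head; IM/TCD post-processing follows

    def forward(self, images):             # images: (bs, 3, 640, 640)
        feats = self.backbone(images)      # list of feature maps at several scales
        fused = self.neck(feats)
        return self.head(fused)            # boxes, scores, classes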

Fig 3. Illustration of the DDAF.

This study primarily uses DCNv2 for feature extraction, CAM for fusion, and DH for dynamic flame detection. Additional details are presented in the backbone, neck, and head. Best viewed in color.

https://doi.org/10.1371/journal.pone.0301839.g003

3.2 Deformable convolution network v2

DCNv2 [27] introduces a modulation mechanism into the standard deformable module [58]. This mechanism allows convolutional kernels to dynamically adjust their sampling locations based on input features, thereby improving flexibility in capturing spatial details and enhancing feature extraction. The modulated deformable convolution is formulated as: (3) y(p) = Σ_{k=1}^{K} w_k · x(p + p_k + Δp_k) · Δm_k, where Δp_k and Δm_k are the learnable offset and modulation scalar for the k-th sampling location, respectively. As shown in Fig 4, both the offset Δp_k and the modulation Δm_k are obtained via a separate convolutional layer applied over the same input feature map x, with 2K and K output channels, respectively. Additionally, to enhance the model's ability to handle geometric transformations, DCNv2 replaces 10 more plain convolutions in ResNet [59] with deformable convolutions than the DCNv1 setting does.
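A minimal sketch of such a modulated deformable convolution, built on torchvision's deform_conv2d and assuming a single offset group, is shown below; the module name and initialization choices are illustrative, not the authors' implementation.

import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class ModulatedDeformConv(nn.Module):
    """DCNv2-style modulated deformable convolution (Eq 3): a plain conv
    predicts 2K offsets and K modulation scalars from the same input,
    which then steer the deformable sampling."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, k, k))
        nn.init.kaiming_uniform_(self.weight, a=1)
        # 2*k*k channels for (dx, dy) offsets + k*k channels for modulation
        self.offset_mask = nn.Conv2d(in_ch, 3 * k * k, kernel_size=k, padding=k // 2)
        nn.init.zeros_(self.offset_mask.weight)
        nn.init.zeros_(self.offset_mask.bias)   # start as a regular convolution

    def forward(self, x):
        om = self.offset_mask(x)
        n = 2 * self.k * self.k
        offset = om[:, :n]                       # Δp_k for every sampling point
        mask = torch.sigmoid(om[:, n:])          # Δm_k constrained to [0, 1]
        return deform_conv2d(x, offset, self.weight, padding=self.k // 2, mask=mask)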

Fig 4. Illustration of deformable convolution.

A separate convolution over the input feature map produces 2K and K output channels: the 2K portion represents the sampling offsets, and the K portion represents the modulation weights. The final aggregation yields the output feature map. Best viewed in color.

https://doi.org/10.1371/journal.pone.0301839.g004

3.3 Context augmentation module

Small flame regions are usually difficult to detect and require rich feature information. In this paper, in order to enhance the fusion of information from different feature layers for tiny objects, the CAM [28] structure is added. The CAM structure is an improvement on the FPN structure, and its main function is to adaptively learn the weights for fusing features across different levels (method (c) in Fig 5). The details of CAM are shown in Fig 5.

Fig 5. Illustration of the adaptive fusion model.

Best viewed in color. Methods (a) and (b) are weighted fusion and concatenation operations, respectively. Method (c) is an adaptive fusion method.

https://doi.org/10.1371/journal.pone.0301839.g005

The incorporation of the CAM structure enhances information fusion. By employing dilated convolutions with rates of 1, 3, and 5 and a kernel size of 3×3, CAM effectively broadens its receptive field. The rationale for using dilated convolutions lies in their ability to increase the convolutional kernel's receptive field, thereby enhancing the model's capacity to capture contextual information. This deliberate expansion of the receptive field is essential for detecting small flames, as it enables the model to gather the spatial context and intricate details associated with varying flame sizes. Method (c) in Fig 5 is the adaptive fusion method: assuming the input has a size of (bs, C, H, W), convolution operations produce spatially adaptive weights with a shape of (bs, 3, H, W). Methods (a) and (b) are weighted fusion and concatenation along the channel dimension, respectively. In this way, CAM addresses the challenge of detecting small flames by capturing spatial context through varying receptive fields.
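A minimal sketch of the adaptive-fusion branch, assuming three 3×3 dilated convolutions (rates 1, 3, 5) fused by per-pixel softmax weights as described above, is given below; the class and parameter names are ours.

import torch.nn as nn
import torch.nn.functional as F

class CAMSketch(nn.Module):
    """Sketch of the adaptive-fusion variant (method (c)): three 3x3 dilated
    convolutions widen the receptive field, and a 1x1 conv predicts per-pixel
    weights of shape (bs, 3, H, W) that fuse the three branches."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in (1, 3, 5))
        self.weight = nn.Conv2d(ch, 3, kernel_size=1)  # spatially adaptive fusion weights

    def forward(self, x):
        feats = [b(x) for b in self.branches]                  # three context scales
        w = F.softmax(self.weight(x), dim=1)                   # (bs, 3, H, W), sums to 1
        return sum(w[:, i:i + 1] * feats[i] for i in range(3))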

3.4 Dynamic head

The FPN is a detection structure that combines multi-scale convolution features. However, during the down-sampling process, there is potential for information loss in detecting small landscape flame targets. In contrast, a method known as DH [29] can effectively mitigate this limitation. Specifically, the method redefines the four-dimensional tensor L×H×W×C into a three-dimensional tensor L×S×C. It employs scale-aware, spatial-aware, and task-aware attention mechanisms across the L, S, and C dimensions to fuse features of different scales to selectively focus on key features. This key attention helps preserve essential information, especially in regions prone to loss during down-sampling. In flame detection, DH ensures precise localization of dynamic flames by effectively capturing real-time features. More detailed information can be found in Fig 6.

The feature tensor is F ∈ R^(L×S×C), where L denotes the number of pyramid layers, S denotes the size of the feature map, and C denotes the number of channels. Moreover, S = H×W, where H and W denote the height and width of the feature map. DH can be expressed as: (4) W(F) = π_C(π_S(π_L(F) · F) · F) · F, where π_L(∙), π_S(∙), and π_C(∙) correspond to scale-aware attention, spatial-aware attention, and task-aware attention, respectively. π_L enables dynamic feature fusion based on the importance of the features in each layer, as expressed in Eq (5): (5) π_L(F) · F = σ(f((1/(S·C)) Σ_{S,C} F)) · F, where f(∙) represents a 1×1 convolutional layer and σ(∙) denotes the hard-sigmoid function.

For spatial-aware attention, sparsity is first learned using deformable convolution v2 [27], and the cross-level features are then aggregated at the same spatial locations, as expressed in Eq (6): (6) π_S(F) · F = (1/L) Σ_{l=1}^{L} Σ_{k=1}^{K} w_{l,k} · F(l; p_k + Δp_k; c) · Δm_k, where K is the number of sparse sampling positions, p_k + Δp_k is the shifted position obtained when the self-learned spatial offset Δp_k focuses on a specific region, and Δm_k is the self-learned importance scalar at position p_k. The task-aware attention module dynamically opens or closes feature channels to select different tasks, as expressed in Eq (7): (7) π_C(F) · F = max(α¹(F) · F_c + β¹(F), α²(F) · F_c + β²(F)), where [α¹, α², β¹, β²]^T is a hyperfunction that learns to control the activation thresholds. It first performs global pooling over the L×S dimensions to reduce dimensionality, then applies two fully connected layers and a normalization layer, and finally normalizes with a sigmoid activation function. Global pooling aggregates feature maps across spatial dimensions, capturing global context while reducing spatial information. The ensuing fully connected layers transform these features into a vector suited to classification or regression tasks. This sequence of operations enhances the attention mechanism, providing improved focus on scale, spatial relationships, and task-specific details, and consequently contributes to improved flame detection performance. This type of attention module can be stacked according to Eq (4).
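For intuition, the following simplified sketch implements the scale-aware gating of Eq (5) and the channel gating of Eq (7) on an L×S×C tensor; the spatial-aware attention of Eq (6) (deformable sampling) is omitted, and the layer sizes are illustrative assumptions rather than the authors' configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleTaskAttention(nn.Module):
    """Simplified sketch of two DH attentions: scale-aware attention weights
    each pyramid level by a hard-sigmoid of its pooled response (Eq 5), and
    task-aware attention gates channels with learned (alpha, beta) pairs (Eq 7)."""
    def __init__(self, channels):
        super().__init__()
        self.scale_fc = nn.Linear(channels, 1)          # f(.) in Eq (5)
        self.task_fc = nn.Sequential(                   # hyperfunction for [a1, a2, b1, b2]
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, 4 * channels))

    def forward(self, feat):                            # feat: (L, S, C)
        # scale-aware: one weight per level from the mean over the S and C axes
        level_w = F.hardsigmoid(self.scale_fc(feat.mean(dim=1)))     # (L, 1)
        feat = feat * level_w.unsqueeze(1)
        # task-aware: channel gating via max(a1*Fc + b1, a2*Fc + b2)
        theta = self.task_fc(feat.mean(dim=(0, 1)))                  # (4C,)
        a1, a2, b1, b2 = theta.chunk(4)
        return torch.maximum(a1 * feat + b1, a2 * feat + b2)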

The feature pyramid levels can be rescaled into a 3D tensor of the same size, L×S×C. This tensor is then fed to the dynamic detection head, which consists of several DH blocks, as shown in Fig 7. The output of DH can be used for a variety of tasks, including classification and bounding box regression. The DH blocks are arranged in the order of L, S, and C. Based on the number of DH blocks, this study compares AP0.5, precision, and AP0.5:0.95, as shown in Table 9.

Fig 7. Connection scheme of DH blocks.

The πL, πS, and πC represent scale-aware, spatial-aware, and task-aware attention, respectively.

https://doi.org/10.1371/journal.pone.0301839.g007

3.5 Layer-adaptive magnitude-based pruning (LAMP)

LAMP [31] proposes a novel importance scoring perspective for global pruning, based on model-level distortion minimization. Specifically, each neural network layer can be viewed as an operator for studying the model-level distortion produced by pruning that layer. Assume the weights of a layer are sorted in ascending order of magnitude according to an index map, applied to the unrolled weight vector without loss of generality, i.e., u < v whenever |W[u]| ≤ |W[v]| holds, where W[u] denotes the entry of W mapped by index u. The LAMP score of the u-th weight of the tensor W is then defined as: (8) score(u; W) = (W[u])² / Σ_{v≥u} (W[v])².

Informally, the LAMP score (Eq 8) measures the relative importance of the target connection among all surviving connections belonging to the same layer, where connections with smaller magnitudes (in the same layer) are assumed to have already been pruned. Therefore, two connections with the same weight magnitude can have different LAMP scores. Once the LAMP scores are calculated, the algorithm globally prunes the connection with the smallest LAMP score until the desired global sparsity constraint is reached. The details are shown in Fig 8.
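A minimal sketch of the per-layer LAMP score of Eq (8) is given below; the function name is ours, and global pruning would then rank these scores across all layers.

import torch

def lamp_scores(weight):
    """LAMP score of Eq (8): square each weight and divide by the sum of the
    squared magnitudes of all weights in the same layer that are at least as
    large (i.e. the connections that would still survive)."""
    w2 = weight.detach().flatten().pow(2)
    order = torch.argsort(w2)                     # ascending magnitude
    sorted_w2 = w2[order]
    # denominator: sum over v >= u of |W[v]|^2 (suffix sums in sorted order)
    suffix = torch.flip(torch.cumsum(torch.flip(sorted_w2, [0]), 0), [0])
    scores = torch.empty_like(w2)
    scores[order] = sorted_w2 / suffix
    return scores.view_as(weight)

# Global pruning removes the connections with the smallest scores across all
# layers until the target sparsity is reached, then fine-tunes the model.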

Fig 8. Illustration of pruning process of LAMP.

First, structured pruning is performed using the layer-adaptive magnitude-based pruning (LAMP) method, in which the connection with the smallest LAMP score is pruned sequentially until the required global sparsity constraint is satisfied. This significantly reduces the number of parameters. The pruned model is then retrained, i.e., fine-tuned.

https://doi.org/10.1371/journal.pone.0301839.g008

3.6 Inductive modeling

In this paper, an inductive modeling method [32] based on the position of the detection box is proposed to mitigate the flame root region localization challenge with the following steps, as shown in Fig 9:

Fig 9. Visual representation of inductive modeling steps.

https://doi.org/10.1371/journal.pone.0301839.g009

  1. Feature extraction and bounding box generation: Through the feature fusion algorithm, which includes DCNv2, CAM, and DH, the flame is detected with the aim of finding the position of the flame and the bounding box information.
  2. Ratio-based coarse-grained location: Given that the flame root typically lies near the bottom of the flame region in an image, a preliminary localization of the root region within the bounding box is approximated based on this assumption.
  3. Fine-grained location based on inductive modeling: Considering both the position and size of the bounding box, a more precise localization of the flame root region is determined by Eq (9).
(9) m = x + w/κ, n = y + h/λ

where (x, y) are the coordinates of the upper-left corner of the detection box, and w and h are the width and height of the bounding box. Scaling factors 1/κ and 1/λ are applied to w and h, respectively. The flame root region is centered at the coordinates (m, n), with a radius of 5 pixels.
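Under the form assumed for Eq (9), a minimal sketch of the fine-grained localization step is shown below; the function name is ours, and the κ and λ values are those selected in Section 4.6.

def flame_root_point(x, y, w, h, kappa=2.0, lam=1.3):
    """Sketch of the fine-grained localization in Eq (9), assuming the root
    point (m, n) is offset from the box corner (x, y) by w/kappa and h/lambda."""
    m = x + w / kappa          # horizontal position inside the box
    n = y + h / lam            # vertical position, biased toward the box bottom
    return m, n                # mark a 5-pixel-radius region around (m, n)

# Example: a 100 x 150 px flame box whose top-left corner is at (200, 300)
print(flame_root_point(200, 300, 100, 150))   # -> (250.0, 415.38...)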

4 Experiments

4.1 Experimental setup

The dataset utilized in this study combines contributions from four sources [60–63], from which images of forest fires and images with flames filling the entire frame were removed in order to validate the reliability of our flame detection method for urban fires. The flame dataset has a total of 4,248 images, with 3,811 images in the training set and 437 images in the validation set. The distribution of the custom dataset across different categories is shown in Fig 10. All experiments are first trained on the training set and validated on the validation set. The validation set helps to determine whether the model generalizes well and to assess the effectiveness of the flame detection application. All experiments are conducted under consistent environmental conditions and hyperparameters. The experimental environment is shown in Table 2.

Fig 10. Percentage distribution of images in each category.

https://doi.org/10.1371/journal.pone.0301839.g010

Table 2. Experimental environment parameters and setting.

https://doi.org/10.1371/journal.pone.0301839.t002

4.2 Evaluation indexes

As shown in Eqs (10)–(13), the evaluation parameters are defined as follows: TP represents correctly identified positive instances, FP indicates falsely identified positives, TN denotes correctly identified negatives, and FN signifies falsely identified negatives. The confusion matrix synthesizes these indexes, enabling a comprehensive evaluation of model performance by computing accuracy, recall, precision, and F1 score. Recall (R) and precision (P) are calculated from the confusion matrix shown in Table 3. R is the ability of the model to successfully detect all real flames, while P indicates how many of the flames detected by the model are real flames. The harmonic mean of precision and recall is the F1 score (F1). Compared to F1, the average precision (AP) reflects the overall detection performance of the model; it is obtained by calculating the area under the corresponding precision-recall curve. This paper also uses frames per second (FPS) to characterize the timing performance of the algorithm. These indexes measure the comprehensiveness and accuracy of the model in identifying flames. In general, 24 FPS must be achieved to guarantee real-time detection [64]. To minimize the effect of potential outliers or fluctuations, the reported FPS value is the average over 30 separate runs with a batch size of 1, i.e., the FPS values of the 30 runs are summed and divided by 30.

(10) P = TP / (TP + FP)
(11) R = TP / (TP + FN)
(12) F1 = 2 × P × R / (P + R)
(13) AP = ∫₀¹ P(R) dR
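For clarity, the following minimal sketch computes P, R, and F1 from the confusion-matrix counts defined above; the function and the example counts are illustrative, not results from the paper.

def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts (Eqs 10-12);
    AP is omitted because it requires the full precision-recall curve."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: 80 correct flame detections, 12 false alarms, 8 missed flames
print(detection_metrics(80, 12, 8))   # -> (0.869..., 0.909..., 0.888...)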

4.3 Comparison of different object detection algorithms

In order to verify the flame detection algorithm, a comparative analysis of various object detection algorithms was conducted. In this study, we selected previous versions of YOLO [21–23, 42–44] as well as the SOTA YOLOv7-tiny [44] as baselines to assess the performance of our proposed model. Moreover, existing models derived from these baselines are employed to further validate the effects of our proposed contributions on YOLOv5s. Table 4 presents the comparison between the proposed DDAF model and these models trained under the same settings.

Table 4. The results of different object detection algorithms.

https://doi.org/10.1371/journal.pone.0301839.t004

In Table 4, comparing all the algorithms, DDAF has the best performance on AP0.5 and AP0.5:0.95 with 0.814 and 0.493, respectively, reflecting the greatest advantage in flame detection precision. Moreover, on the classical indexes, DDAF obtains the best F1 value, performs only slightly worse than YOLOv7-tiny on the P value, and is superior to most methods on the R value. This shows that DDAF delivers highly competitive results on the classical indexes. Although DDAF is clearly lower than the other algorithms in FPS, it still satisfies the basic speed requirement (i.e., FPS ≥ 24) for real-time firefighting scenarios [64].

As shown in Table 6, in order to further improve the detection precision of DDAF, the data augmentation (AD) technique is integrated into DDAF*, yielding DDAF+. To further enhance model performance, a series of pilot pruning experiments are conducted to systematically optimize several key hyperparameters, such as the speed-up factor, fine-tuning epochs, and learning rates. Other detailed experimental configurations are provided in Table 5. Table 6 shows the comparison of DDAF*, DDAF+, and DDAF.

Table 6. The results of different object detection algorithms.

https://doi.org/10.1371/journal.pone.0301839.t006

In Table 6, the detection precision of DDAF+ improves to 0.82; however, the limitation on the FPS value is not alleviated. Building on this groundwork, we further incorporate the pruning technique, resulting in a detection speed of 89.6 FPS together with substantial reductions in parameters and FLOPs. Compared to DDAF*, the FPS of DDAF is improved by 261.8%. It is worth noting that this final algorithm is referred to as DDAF. The results show that AP0.5 on our custom dataset increases to 0.826 while parameters and FLOPs are reduced by 49.0% and 33.5%, respectively. In summary, the DDAF algorithm achieves a good balance between detection precision and speed.

4.4 Ablation study

In order to verify the effectiveness of each method proposed in this paper, ablation experiments were conducted and the results are shown in Table 7. DCNv2, CAM and DH are the three main components of DDAF. We add them to the baseline incrementally to compare the effectiveness of each component.

Table 7. Ablation comparison of model performance improvement on the custom dataset.

https://doi.org/10.1371/journal.pone.0301839.t007

Different components have similar effects. For example, both DCNv2 and CAM are scale-aware; compared to DCNv2 and CAM alone, their combination improves AP0.5 by 2.14% and AP0.5:0.95 by 2.01%, respectively. DCNv2 and DH are both sensitive to different objects, especially medium and large objects, and their combination reaches the second-highest 0.804 AP0.5, slightly lower than that of DH alone. The combination of CAM and DH achieves not only 0.808 AP0.5 but also the second-highest precision. Each of the three components has its own strengths and weaknesses. It is evident that the overall performance, particularly in terms of AP, is best when all three components are used together. The experiments show that, compared with the baseline, DDAF comprehensively improves the average precision for targets at different scales and can effectively improve the detection of small-scale targets. In conclusion, DDAF can effectively improve the real-time detection precision of dynamic flames.

Compared to the baseline, DCNv2 improves AP0.5 and AP0.5:0.95 by 1.66% and 0.42%, respectively. The reason is that DCNv2 allows each pixel in the input feature map to have an adaptive receptive field, which captures complex object details better and handles variations in object shape and size. However, its FPS is about 14 frames lower than the baseline. On the one hand, the increased number of model parameters requires more computation and storage; on the other hand, the increased memory and bandwidth requirements slow down data transfer.

CAM also achieves good performance, improving AP0.5 by 1.79% and P by 3.21%, with FPS slightly lower than the baseline while its FLOPs are the smallest. The adaptive dilated-convolution fusion mechanism reduces the computational burden of the model. However, CAM performs only about the same as the baseline on AP0.5:0.95. The reason is that selecting different dilation rates for the dilated convolutions discards some valid information, thereby reducing part of the model's ability to capture details in the image.

DH provides the most significant benefits of all the components. Adding DH improves AP0.5 and AP0.5:0.95 over the baseline by 3.96% and 2.51%, respectively. This is because the three attention mechanisms adaptively fuse multiple layers of features, which enhances the ability of the detection head to discriminate different feature points and thus improves the perceptual ability of the model. At the same time, DH also brings a serious reduction in FPS, owing to the additional learnable parameters, i.e., the newly added weights and biases, which require more memory to store.

4.4.1 Ablation experiments for CAM fusion models.

In order to verify that the adaptive fusion module is more applicable on the Head, this study compares it with the weighted-fusion and concatenation modules on the Backbone and Head, respectively. The experimental results are shown in Table 8. Although the performance of the adaptive fusion model is best on the Backbone, its parameters and FLOPs are too large, which decreases the detection speed of the algorithm. The performance of the adaptive fusion model on the Head is only slightly lower than that on the Backbone. Therefore, we choose to use the adaptive fusion model on the Head.

Table 8. Compare these three fusion models on Head and Backbone.

https://doi.org/10.1371/journal.pone.0301839.t008

4.4.2 Ablation experiments for the number of DH block stacks.

This section explores the best detection results by varying the number of stacked DH blocks. On the custom dataset, we found that using 8 DH blocks produced the best results. We designed experiments using 2, 4, 6, and 8 DH blocks for comparison. As shown in Table 9, the highest performance is obtained with 8 DH blocks. Therefore, DDAF uses 8 DH blocks.

4.4.3 Ablation experiments for data augmentation types.

As shown in Table 10, we found during our experiments that model performance continues to improve with the moderate data augmentation approach in YOLOv5s. The hyperparameters for all three data augmentation settings are the YOLOv5s defaults.

4.5 Experiment for temporally consistent video processing

The aim of this section is to determine whether a target is real by focusing on the consistency of the detected object's appearance and motion across consecutive video frames. Specifically, if the same target is consistently detected in a certain number of frames within a given temporal window (the jumping frames), it is recognized as a flame; otherwise, the procedure moves on to the next window. We performed trial-and-error experiments on flame videos with different settings for the window length and the number of consecutively detected frames, and the experiments showed that the best flame detection performance in outdoor scenes was achieved when the number of jumping frames and the number of consecutive frames were set to 30 and 10, respectively. The experimental results are shown in Fig 11.
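The rule described above can be sketched as follows, assuming a boolean per-frame detection flag; the function name and the toy example are ours, not part of the published pipeline.

from collections import deque

def temporally_consistent(detections, window=30, min_hits=10):
    """Temporal-consistency rule as described above: within each sliding
    window of `window` frames (the jumping frames), a target is accepted as a
    real flame only if it is detected in at least `min_hits` of those frames."""
    recent = deque(maxlen=window)
    confirmed = []
    for frame_idx, detected in enumerate(detections):   # detections: bool per frame
        recent.append(detected)
        if sum(recent) >= min_hits:
            confirmed.append(frame_idx)                  # flame confirmed at this frame
    return confirmed

# Example: a flame that flickers in and out of detection
flags = [True, False, True] * 20
print(temporally_consistent(flags, window=30, min_hits=10)[:1])   # first confirmed frame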

4.6 Trial and error for IM

In order to systematically explore the flame root region, we synthesized the previous flame literature on inductive modeling to derive plausible ranges for κ and λ. We then used a stepwise approach to experiment iteratively over these ranges. Specifically, the values of κ were taken from [1.8, 2.2] in increments of 0.1, and the values of λ were taken from [1, 2], again in increments of 0.1. This exhaustive search strategy provided a comprehensive assessment of root region localization through a thorough examination of multiple parameter combinations. We conducted experiments on the 437 images of the validation set, and the results showed that κ = 2, λ = 1.3 is the most reasonable setting. This method precisely marks the flame root area during the detection stage, and the figure below shows a test in a real scenario. The experimental results are shown in Fig 11.
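The stepwise search can be sketched as follows, where evaluate is a hypothetical callback that scores a (κ, λ) pair on the validation images; the function is illustrative only.

import numpy as np

def grid_search_kappa_lambda(evaluate):
    """Stepwise search over kappa in [1.8, 2.2] and lambda in [1.0, 2.0],
    both in 0.1 increments, keeping the pair with the best localization score."""
    best = (None, None, -float("inf"))
    for kappa in np.arange(1.8, 2.2 + 1e-9, 0.1):
        for lam in np.arange(1.0, 2.0 + 1e-9, 0.1):
            score = evaluate(round(kappa, 1), round(lam, 1))
            if score > best[2]:
                best = (round(kappa, 1), round(lam, 1), score)
    return best   # the paper reports kappa = 2.0, lambda = 1.3 as the best setting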

5 Conclusions

This paper proposed an algorithm called DDAF to mitigate the limitations of the algorithms in traditional optical flame detectors (OFDs). Dynamic real-time flame detection was achieved by integrating four techniques, i.e., deformable convolution network v2 (DCNv2), context augmentation module (CAM), dynamic head (DH), and temporal consistency-based detection (TCD), into YOLOv5s. The LAMP pruning method was also used to lighten the model and improve detection speed.

This study compared the performance of different object detection algorithms. The results showed that, under the same setup, the detection accuracy of DDAF was better than that of all the compared algorithms, while the number of parameters and FLOPs were reduced by 49.0% and 33.5%, respectively. The proposed method achieved a good balance between detection performance and detection speed. The proposed framework is also applicable to other tasks with special requirements for flame detection, such as wildfire monitoring, industrial safety, and combustion process control.

In the future, constructing standard flame datasets for different firefighting scenarios would be better suited to validating the generalization ability of the algorithm. The hyperparameters can be further optimized to improve the accuracy of flame detection, and special noise images should be added to the training set to further validate the robustness of the algorithm. Moreover, iterative improvement techniques could be merged into DDAF to provide efficient adaptation to changing technological environments or emerging challenges. Furthermore, other advanced pruning or knowledge distillation techniques may also be able to make the model more lightweight.

References

  1. 1. Chen Y, Xu W, Zuo J, Yang K. The fire recognition algorithm using dynamic feature fusion and IV-SVM classifier. Cluster Comput. 2019;22: 7665–7675.
  2. 2. Zhang J, Zhuang J, Du H, Wang S, Li X. A flame detection algorithm based on video multi-feature fusion. Lect Notes Comput Sci (including Subser Lect Notes Artif Intell Lect Notes Bioinformatics). 2006;4222 LNCS: 784–792.
  3. 3. Chen J, He Y, Wang J. Multi-feature fusion based fast video flame detection. Build Environ. 2010;45: 1113–1122.
  4. 4. Hao Z. Detection of rupture lines for active scanning. Opt Eng. 2007;46: 067205.
  5. 5. Weiler M, Bartl K, Gerhards M. Infrared/ultraviolet quadruple resonance spectroscopy to investigate structures of electronically excited states. J Chem Phys. 2012;136: 1–7. pmid:22443757
  6. 6. Liu Z, Hadjisophocleous G, Ding G, Lim CS. Study of a Video Image Fire Detection System for Protection of Large Industrial Applications and Atria. Fire Technology. 2012.
  7. 7. Truong CT, Nguyen TH, Vu VQ, Do VH, Nguyen DT. Enhancing Fire Detection Technology: A UV-Based System Utilizing Fourier Spectrum Analysis for Reliable and Accurate Fire Detection. Appl Sci. 2023;13.
  8. 8. Genovese A, Labati RD, Piuri V, Scotti F. Wildfire smoke detection using computational intelligence techniques. IEEE Int Conf Comput Intell Meas Syst Appl Proc. 2011; 34–39.
  9. 9. Kim JH, Jo S, Lattimer BY. Feature Selection for Intelligent Firefighting Robot Classification of Fire, Smoke, and Thermal Reflections Using Thermal Infrared Images. J Sensors. 2016;2016.
  10. 10. Hildebrandt C, Raschner C, Ammer K. An overview of recent application of medical infrared thermography in sports medicine in Austria. Sensors. 2010;10: 4700–4715. pmid:22399901
  11. 11. Hashimoto A, Kameoka T. Applications of infrared spectroscopy to biochemical, food, and agricultural processes. Appl Spectrosc Rev. 2008;43: 416–451.
  12. 12. Qiu M, Kung SY, Gai K. Intelligent security and optimization in Edge/Fog Computing. Futur Gener Comput Syst. 2020;107: 1140–1142.
  13. 13. Njoku JN, Nwakanma CI, Amaizu GC, Kim DS. Prospects and challenges of Metaverse application in data-driven intelligent transportation systems. IET Intell Transp Syst. 2023;17: 1–21.
  14. 14. Wu X, Zhang X, Jiang Y, Huang X, Huang GGQ, Usmani A. An intelligent tunnel firefighting system and small-scale demonstration. Tunn Undergr Sp Technol. 2022;120: 104301.
  15. 15. O’Mahony N, Campbell S, Carvalho A, Harapanahalli S, Hernandez GV, Krpalkova L, et al. Deep Learning vs. Traditional Computer Vision. Adv Intell Syst Comput. 2020;943: 128–144.
  16. 16. Blott M, Preuber TB, Fraser NJ, Gambardella G, O’Brien K, Umuroglu Y, et al. FinN-R: An end-to-end deep-learning framework for fast exploration of quantized neural networks. ACM Trans Reconfigurable Technol Syst. 2018;11.
  17. 17. Qi M, Shi Y, Qi Y, Ma C, Yuan R, Wu D, et al. A Practical End-to-End Inventory Management Model with Deep Learning. Manage Sci. 2023;69: 759–773.
  18. 18. Wang S, Zhao J, Ta N, Zhao X, Xiao M, Wei H. A real-time deep learning forest fire monitoring algorithm based on an improved Pruned + KD model. J Real-Time Image Process. 2021;18: 2319–2329.
  19. 19. Mao W, Zhang W, Feng K, Beer M, Yang C. Tensor representation-based transferability analytics and selective transfer learning of prognostic knowledge for remaining useful life prediction across machines. Reliab Eng Syst Saf. 2024;242: 109695.
  20. 20. Feng K, Xu Y, Wang Y, Li S, Jiang Q, Sun B, et al. Digital Twin Enabled Domain Adversarial Graph Networks for Bearing Fault Diagnosis. IEEE Trans Ind Cyber-Physical Syst. 2023;1: 113–122.
  21. 21. Bochkovskiy A, Wang C-Y, Liao H-YM. YOLOv4: Optimal Speed and Accuracy of Object Detection. 2020. Available: http://arxiv.org/abs/2004.10934
  22. 22. Redmon J, Farhadi A. YOLOv3: An Incremental Improvement. 2018. Available: http://arxiv.org/abs/1804.02767
  23. 23. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2016;2016-Decem: 779–788.
  24. 24. Jocher G, Chaurasia A, Stoken A, Borovec J, NanoCode012, Kwon Y, et al. ultralytics/yolov5: v6.1—TensorRT, TensorFlow Edge TPU and OpenVINO Export and Inference. Zenodo; 2022.
  25. 25. Ni Q, Ji JC, Halkon B, Feng K, Nandi AK. Physics-Informed Residual Network (PIResNet) for rolling element bearing fault diagnostics. Mech Syst Signal Process. 2023;200: 1–16.
  26. 26. Wen L, Yang G, Hu L, Yang C, Feng K. A new unsupervised health index estimation method for bearings early fault detection based on Gaussian mixture model. Eng Appl Artif Intell. 2024;128: 107562.
  27. 27. Zhu X, Hu H, Lin S, Dai J. Deformable convnets V2: More deformable, better results. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2019;2019-June: 9300–9308.
  28. 28. Xiao J, Zhao T, Yao Y, Yu Q, et al. Context Augmentation and Feature Refinement Network for Tiny Object Detection. Under review as a conference paper at ICLR. 2022; 1–11. Available: https://openreview.net/pdf?id=q2ZaVU6bEsT
  29. 29. Dai X, Chen Y, Xiao B, Chen D, Liu M, Yuan L, et al. Dynamic Head: Unifying Object Detection Heads with Attentions. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2021; 7369–7378.
  30. 30. Chang HC, Hsu YL, Yang SC, Lin JC, Wu ZH. A wearable inertial measurement system with complementary filter for gait analysis of patients with stroke or Parkinson’s disease. IEEE Access. 2016;4: 8442–8453.
  31. 31. Lee J, Park S, Mo S, Ahn S, Shin J. Layer-Adaptive Sparsity for the Magnitude-Based Pruning. ICLR 2021 - 9th Int Conf Learn Represent. 2021; 1–19.
  32. 32. Sebastiani F. Machine Learning in Automated Text Categorization. ACM Comput Surv. 2002;34: 1–47.
  33. 33. Gioia DA, Corley KG, Hamilton AL. Seeking Qualitative Rigor in Inductive Research: Notes on the Gioia Methodology. Organ Res Methods. 2013;16: 15–31.
  34. 34. Caton SE, Hakes RSP, Gorham DJ, Zhou A, Gollner MJ. Review of Pathways for Building Fire Spread in the Wildland Urban Interface Part I: Exposure Conditions. Fire Technol. 2017;53: 429–473.
  35. 35. Rabajczyk A, Zielecka M, Popielarczyk T, Sowa T. Nanotechnology in fire protection—application and requirements. Materials (Basel). 2021;14. pmid:34947443
  36. 36. Nader G, Henkin Z, Smith E, Ingram R, Narvaez N. Planned Herbivory in the Management of Wildfire Fuels: Grazing is most effective at treating smaller diameter live fuels that can greatly impact the rate of spread of a fire along with the same height. Rangelands. 2007; 18–24. Available: http://www.srmjournals.org/doi/abs/10.2111/1551-501X(2007)29[18:PHITMO]2.0.CO;2
  37. 37. Taylor P, Robinson JM, Robinson JM. International Journal of Remote Sensing Fire from space: Global fire evaluation using infrared remote sensing. Int J Remote Sens. 2007;12: 37–41.
  38. 38. Cheong P, Member S, Chang K, Lai Y, Ho S, Sou I, et al. A ZigBee-Based Wireless Sensor Network Node for Ultraviolet Detection of Flame. IEEE Trans Ind Electron. 2011;58: 5271–5277.
  39. 39. Avila-avendano C, Pintor-monroy MI, Ortiz-conde A, Member S, Caraveo-frescas JA, Quevedo-lopez MA. Deep UV Sensors Enabling Solar-Blind Flame Detectors for Large-Area Applications. IEEE Sens J. 2021;21: 14815–14821.
  40. 40. Settersten TB, Farrow RL, Gray JA. Infrared-ultraviolet double-resonance spectroscopy of OH in a flame. Chem Phys Lett. 2003;369: 584–590.
  41. 41. Ballester J, García-Armingol T. Diagnostic techniques for the monitoring and control of practical flames. Prog Energy Combust Sci. 2010;36: 375–411.
  42. 42. Wang CY, Bochkovskiy A, Liao HYM. Scaled-yolov4: Scaling cross stage partial network. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2021; 13024–13033.
  43. 43. Redmon J, Farhadi A. YOLO9000: Better, faster, stronger. Proc - 30th IEEE Conf Comput Vis Pattern Recognition, CVPR 2017. 2017;2017-Janua: 6517–6525. https://doi.org/10.1109/CVPR.2017.690
  44. 44. Wang C-Y, Bochkovskiy A, Liao H-YM. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. 2022; 1–15. Available: http://arxiv.org/abs/2207.02696
  45. 45. Zhao L, Zhi L, Zhao C, Zheng W. Fire-YOLO: A Small Target Object Detection Method for Fire Inspection. Sustain. 2022;14: 1–14.
  46. 46. Lestari DP, Kosasih R, Handhika T, Sari I, Fahrurozi A. Fire Hotspots Detection System on CCTV Videos Using You Only Look Once (YOLO) Method and Tiny YOLO Model for High Buildings Evacuation. 2019 2nd Int Conf Comput Informatics Eng. 2019; 87–92.
  47. 47. Goyal S, Kaur A, Vohra H, Singh A. A YOLO based Technique for Early Forest Fire Detection. 2020.
  48. 48. Xiao Y, Chang A, Wang Y, Huang Y, Yu J, Huo L. Real-time Object Detection for Substation Security Early-warning with Deep Neural Network based on YOLO-V5. 2022 IEEE IAS Glob Conf Emerg Technol GlobConET 2022. 2022; 45–50.
  49. 49. Avazov K, Jamil MK, Muminov B, Abdusalomov AB, Cho YI. Fire Detection and Notification Method in Ship Areas Using Deep Learning and Computer Vision Approaches. Sensors. 2023;23. pmid:37631614
  50. 50. Li Z, Mihaylova L, Yang L. A deep learning framework for autonomous flame detection. Neurocomputing. 2021;448: 205–216.
  51. 51. Wang Y, Han Y, Tang Z, Wang P. A Fast Video Fire Detection of Irregular Burning Feature in Fire-Flame Using in Indoor Fire Sensing Robots. IEEE Trans Instrum Meas. 2022;71: 1–14.
  52. 52. Nishimura K, Cho H, Bise R. Semi-supervised Cell Detection in Time-Lapse Images Using Temporal Consistency. Lect Notes Comput Sci (including Subser Lect Notes Artif Intell Lect Notes Bioinformatics). 2021;12908 LNCS: 373–383.
  53. 53. Xiao C, Deng R, Li B, Lee T, Edwards B, Yi J, et al. AdvIT: Adversarial frames identifier based on temporal consistency in videos. Proc IEEE Int Conf Comput Vis. 2019;2019-Octob: 3967–3976. https://doi.org/10.1109/ICCV.2019.00407
  54. 54. Jeong J, Lee S, Kim J, Kwak N. Consistency-based Semi-supervised Learning for Object Detection. 2019.
  55. 55. Yu L, Wang H, Han Q, Niu X, Yiu SM, Fang J, et al. Exposing frame deletion by detecting abrupt changes in video streams. Neurocomputing. 2016;205: 84–91.
  56. 56. Gong Y, Yu X, Ding Y, Peng X, Zhao J, Han Z. Effective fusion factor in FPN for tiny object detection. Proc—2021 IEEE Winter Conf Appl Comput Vision, WACV 2021. 2021; 1159–1167. https://doi.org/10.1109/WACV48630.2021.00120
  57. 57. Wang W, Xie E, Song X, Zang Y, Wang W, Lu T, et al. Efficient and Accurate Arbitrary-Shaped Text Detection with Pixel Aggregation Network. ICCV 2019. 2019; 8440–8449.
  58. 58. Park J, Yoo S, Park J, Kim HJ. Deformable Graph Convolutional Networks. Proc 36th AAAI Conf Artif Intell AAAI 2022. 2022;36: 7949–7956. https://doi.org/10.1609/aaai.v36i7.20765
  59. 59. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2016;2016-Decem: 770–778. https://doi.org/10.1109/CVPR.2016.90
  60. 60. Domestic fire and smoke dataset. 2020. Available: https://github.com/datacluster-labs/Domestic-Fire-and-Smoke-Dataset
  61. 61. MIVIA fire detection dataset. Available: https://mivia.unisa.it/datasets/video-analysis-datasets/fire-detection-dataset/
  62. 62. FireNET. 2019. Available: https://github.com/OlafenwaMoses/FireNET
  63. 63. Alireza S, Fatemeh A, Abolfazl R, Liming Z, Peter F, Erik B. The FLAME dataset: Aerial imagery pile burn detection using drones (UAVs). 2020. Available: https://dx.doi.org/10.21227/qad6-r683
  64. 64. Wang Z, Jin L, Wang S, Xu H. Apple stem/calyx real-time recognition using YOLO-v5 algorithm for fruit automatic loading system. Postharvest Biol Technol. 2022;185: 111808.