
## Abstract

Shadow removal is an important issue in the field of moving-object surveillance and automatic control. Although many works have concentrated on this issue, the diverse motion patterns shared by shadows and objects still severely affect removal performance. Constrained by the computational efficiency required for real-time monitoring, pixel-feature based methods remain the main shadow removal methods in practice. Following this idea, this paper proposes a novel and simple shadow removal method based on a differential correction calculation between the pixel values of the Red, Green and Blue channels. Specifically, since shadows are formed by objects occluding the light source, all the reflected light in a shadow area is attenuated. Hence there are similar weakening trends in the Red, Green and Blue channels of shadow areas, but not of object areas. These trends can be captured by a differential correction calculation and used to distinguish shadow areas from object areas. Our shadow removal method is designed around this feature. Experimental results verify that, compared with other state-of-the-art shadow removal methods, our method improves the average of object and shadow detection accuracies by at least 10% in most cases.

**Citation:** Liu S, Chen M, Li Z, Liu J, He M (2023) A differential correction based shadow removal method for real-time monitoring. PLoS ONE 18(2): e0276284. https://doi.org/10.1371/journal.pone.0276284

**Editor:** Zhaoqing Pan, Nanjing University of Information Science and Technology, CHINA

**Received:** June 19, 2022; **Accepted:** October 3, 2022; **Published:** February 7, 2023

**Copyright:** © 2023 Liu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

**Data Availability:** All the code and results presented in the study are available from https://github.com/ljx43031/DC-SR-method. This URL is also given at the end of the "The structure of DC-SR method" subsection of this manuscript to support our results.

**Funding:** This work was supported by the NSFC under Grant Number 62061003 and the Guangxi Science and Technology Plan Project under Grant Number AD19245047, in part by the Guangxi University of Science and Technology Doctoral Fund under Grant Number XKB19Z02, and in part by the Liuzhou Science and Technology Plan Project under Grant Number 2021ADB0102. All of the above funding was granted to JL. Moreover, this work was also supported in part by the Natural Science Foundation of Sichuan Province under Grant Number 2022NSFSC1885. The funder of this project played the role of funding and comparison-method provision in this work.

**Competing interests:** The authors have declared that no competing interests exist.

## Introduction

With the growing application of intelligent video surveillance and automatic control, shadow removal is becoming more and more important. An effective shadow removal method can minimize the interference of shadows with object detection, recognition and control [1, 2]. In fact, a shadow, as a phenomenon caused by light being blocked by an object, has the same motion properties as the object itself. Therefore, it is difficult to identify and remove shadows based only on the motion properties of image areas in a video. Meanwhile, considering the computational efficiency required in real-time applications and the cost of real-time monitoring equipment, deep neural network based methods [3–7] are difficult to deploy widely. Hence, shadow removal remains an interesting and challenging task for real-time monitoring.

To satisfy the need for real-time computing in object surveillance, background difference based methods [8, 9] are still the most cost-effective methods in practical applications. After background differencing, all the areas of objects together with their shadows are detected. Then, the shadows can be further removed based on one or more features that distinguish shadows from objects well. Currently, the best such feature is the RGB pixel ratio [10]. As illustrated in [10], the RGB pixel ratios of shadow areas before and after shadow covering are similar, and differ significantly from those of object areas. Hence, shadow areas can be found by the RGB pixel ratio comparison (RGB-PRC) method. However, according to the principle of shadow formation, it is the occlusion of light by objects that forms the shadow. In fact, it cannot be guaranteed that the same areas in images with and without shadows follow the same ratios of RGB pixel values, especially when the light is severely occluded. This phenomenon is discussed in the RGB Pixel Differential Correction Feature Section. To overcome this problem, this paper investigates all the image data in the ISTD dataset [3, 4] and discovers a new pixel feature in the shadow areas of images via a differential correction calculation, named the RGB pixel differential correction (RGBP-DC) feature. Further, a new differential correction based shadow removal (DC-SR) method is proposed based on the RGBP-DC feature. The experimental results show that our DC-SR method outperforms state-of-the-art shadow removal methods.

The remainder of this paper is structured as follows. In the Related Work Section, the related works are described. In the RGB Pixel Differential Correction Feature Section, the limitation of the RGB-PRC method is illustrated, and the new RGBP-DC feature is proposed. Then, the new DC-SR method is designed based on the RGBP-DC feature in the Differential Correction Based Shadow Removal Method Section. In the Experiments Section, extensive comparison experiments are performed to verify the effectiveness of the proposed DC-SR method. Finally, the conclusion and limitations of the proposed method are given in the Conclusion Section. The main contributions of our work are summarized as follows:

- A new pixel feature, i.e., RGBP-DC feature, is found in the shadow areas of images.
- A new differential correction based shadow removal (DC-SR) method is proposed.

## Related work

Currently, in the field of real-time monitoring, background differencing is still needed to quickly find the areas of objects. After that, the goal of shadow removal becomes to distinguish the shadow areas from the object areas, and existing approaches fall into two kinds: model-based methods and feature-based methods.

Model-based methods mainly use prior information to train corresponding models. For example, Zhang proposed a robust vehicle detection method with shadow elimination [11]. Amin proposed a shadow mask extractor using a three-color attenuation model (TAM) and intensity information to segment the shadow area [12]. Murali proposed a method to remove shadows from images with uniform textures [13]. However, these model-based methods depend on the determination of prior information and also need a lot of training. Hence, the generalization ability of these methods is limited.

Different from model-based methods, feature-based methods mainly concentrate on distinguishing and removing the shadow by contour, brightness, color, texture and other pixel features that are less affected by environmental factors. Hence, these methods have a wide range of application. For example, Xu obtained stable shadow elimination results through HSV color features, using the differencing idea in the image log domain [14]. Park used a shadow depth map and an illumination-invariance feature to remove shadows [15]. Li proposed a shadow weakening algorithm based on brightness and texture features without prior training or manual intervention [16]. Salvador proposed a cast shadow segmentation algorithm based on the spectral and geometric characteristics of shadows in the scene [17]. All of these methods improve shadow removal performance by using one or more image features, but their computational cost is too high for real-time surveillance of moving objects. Tang proposed a low computational cost algorithm that removes shadows according to differences in the gray-level composition of foreground and background pixels [18]. Chen further proposed a state-of-the-art shadow removal method, the RGB pixel ratio comparison (RGB-PRC) method, based on similar pixel change features. In this method, the shadow can be distinguished and removed directly according to the ratios of pixel values between the Red, Green and Blue (RGB) channels in the foreground and background [10]. Therefore, the effect of shadow removal can be greatly improved.

In this paper, we also concentrate on distinguishing and removing the shadow by pixel features. Different from the aforementioned features, the proposed pixel feature is obtained according to both the principle of shadow formation and the statistics of a large number of actual scenes. Hence, the feature proposed in this paper is more typical and has wider applicability. All of the above will be discussed in the next sections.

## RGB pixel differential correction feature

In this section, the RGBP-DC feature is discussed in comparison with the RGB pixel ratio (RGBP-R) feature proposed in [10]. Generally speaking, given a real point *p*, let *E*(*p*) and *E*_{s}(*p*) represent the illuminance reflected from this point with and without the direct light exposure, respectively. In other words, *E*_{s}(*p*) represents the illuminance reflected from *p* when it is in shadow. Assuming that the coordinate of the corresponding point in the imaging plane is (*x*, *y*), the pixel values of this point in the R, G and B channels are denoted as *R*(*x*, *y*), *G*(*x*, *y*) and *B*(*x*, *y*), respectively, and the ones in shadow are denoted as *R*_{s}(*x*, *y*), *G*_{s}(*x*, *y*) and *B*_{s}(*x*, *y*), respectively. According to [10], the aforementioned RGB pixel ratios *Ratio*_{R}(⋅), *Ratio*_{G}(⋅) and *Ratio*_{B}(⋅) are defined as:

(1a) $Ratio_{R}(RGB(x, y)) = \dfrac{R(x, y)}{R(x, y) + G(x, y) + B(x, y)}$

(1b) $Ratio_{G}(RGB(x, y)) = \dfrac{G(x, y)}{R(x, y) + G(x, y) + B(x, y)}$

(1c) $Ratio_{B}(RGB(x, y)) = \dfrac{B(x, y)}{R(x, y) + G(x, y) + B(x, y)}$

where *RGB*(*x*, *y*) ≜ {*R*(*x*, *y*), *G*(*x*, *y*), *B*(*x*, *y*)}.

**Observation 1** *Under the premise that the RGB pixel values are linearly related to the illuminance reflected from p, the performance of the RGB-PRC method can be guaranteed*.

**Analysis 1** *Under the premise in Observation 1, the pixel values of p in the R, G and B channels can be simply calculated as follows* [17]:

(2a) $R(x, y) = S_{R}(x, y)\,E(p)$

(2b) $G(x, y) = S_{G}(x, y)\,E(p)$

(2c) $B(x, y) = S_{B}(x, y)\,E(p)$

(3a) $R_{s}(x, y) = S_{R}(x, y)\,E_{s}(p)$

(3b) $G_{s}(x, y) = S_{G}(x, y)\,E_{s}(p)$

(3c) $B_{s}(x, y) = S_{B}(x, y)\,E_{s}(p)$

*where S*_{R}(*x*, *y*), *S*_{G}(*x*, *y*) *and S*_{B}(*x*, *y*) *are the linear photoelectric conversion coefficients in the R, G and B channels, respectively. Obviously, under this premise of linearity, the ratios of pixel values in the R, G and B channels are the same with and without the direct light exposure. That is*,

(4a) $Ratio_{R}(RGB_{s}(x, y)) = Ratio_{R}(RGB(x, y))$

(4b) $Ratio_{G}(RGB_{s}(x, y)) = Ratio_{G}(RGB(x, y))$

(4c) $Ratio_{B}(RGB_{s}(x, y)) = Ratio_{B}(RGB(x, y))$

*where RGB*_{s}(*x*, *y*) ≜ {*R*_{s}(*x*, *y*), *G*_{s}(*x*, *y*), *B*_{s}(*x*, *y*)}. *Hence, based on this feature that the RGB pixel ratios with and without the direct light exposure are equal, the shadow area can be distinguished from the object and removed like the rest of the background*.

*This completes the analysis of Observation 1*.
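As a numeric sanity check of Analysis 1 (the linear conversion coefficients below are made up for illustration), the channel ratios under a linear sensor model are independent of the illuminance, so they match with and without shadow:

```python
# Illustrative linear sensor model: pixel value = S_channel * illuminance.
# The coefficient values are hypothetical, not from the paper.
S = {"R": 0.9, "G": 1.1, "B": 0.7}

def ratios(E):
    """RGB pixel ratios (Eq 1) for illuminance E under the linear model."""
    pix = {c: S[c] * E for c in S}
    total = sum(pix.values())
    return {c: pix[c] / total for c in pix}

lit = ratios(1000.0)     # point under direct light exposure
shadow = ratios(180.0)   # same point in shadow (attenuated illuminance)

# The ratios are identical regardless of E, as Eq (4) states.
assert all(abs(lit[c] - shadow[c]) < 1e-9 for c in "RGB")
```

Because the illuminance cancels in the ratio, any attenuation leaves the ratios unchanged, which is exactly the feature the RGB-PRC method relies on.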

However, for most image sensors, the aforementioned linear relationship for imaging is only valid in a certain light intensity range [19, 20]. If the light intensity is outside this range, for example when the light is severely occluded under strong sunlight, the linear relationship for imaging cannot be guaranteed.

**Observation 2** *In the nonlinear range, the RGBP-R feature no longer exists*.

**Analysis 2** *The nonlinear relationships between the RGB pixel values and the illuminance reflected from p are assumed to be*:

(5a) $R(x, y) = s_{r}(E(p), (x, y))$

(5b) $G(x, y) = s_{g}(E(p), (x, y))$

(5c) $B(x, y) = s_{b}(E(p), (x, y))$

*where s*_{r}(⋅), *s*_{g}(⋅) *and s*_{b}(⋅) *are the nonlinear photoelectric conversion functions in the R, G and B channels, respectively. To simplify the analysis, let* $\Delta E(p) \triangleq E(p) - E_{s}(p)$ *denote the illuminance removed by the occlusion, so that* $E(p) = \Delta E(p) + E_{s}(p)$. *Then* Eq (5) *is linearized based on the Taylor expansion as the following*:

(6a) $R(x, y) = s_{r}(\Delta E(p), (x, y)) + s'_{r}(\Delta E(p), (x, y))\,E_{s}(p) + o(E_{s}(p))$

(6b) $G(x, y) = s_{g}(\Delta E(p), (x, y)) + s'_{g}(\Delta E(p), (x, y))\,E_{s}(p) + o(E_{s}(p))$

(6c) $B(x, y) = s_{b}(\Delta E(p), (x, y)) + s'_{b}(\Delta E(p), (x, y))\,E_{s}(p) + o(E_{s}(p))$

*where* $s'_{r}(\cdot)$, $s'_{g}(\cdot)$ *and* $s'_{b}(\cdot)$ *denote the first derivatives with respect to the illuminance. The reason why* $\Delta E(p)$ *is selected as the expansion point is that* $E_{s}(p)$ *is a variable much smaller than* $E(p)$, *so the linearized results in* (6) *can be very close to the original ones*:

(7a) $R(x, y) \approx s_{r}(\Delta E(p), (x, y)) + s'_{r}(\Delta E(p), (x, y))\,E_{s}(p)$

(7b) $G(x, y) \approx s_{g}(\Delta E(p), (x, y)) + s'_{g}(\Delta E(p), (x, y))\,E_{s}(p)$

(7c) $B(x, y) \approx s_{b}(\Delta E(p), (x, y)) + s'_{b}(\Delta E(p), (x, y))\,E_{s}(p)$

*Similarly, when p is in the shadow, the RGB pixel values can be calculated as follows*:

(8a) $R_{s}(x, y) = s_{r}(E_{s}(p), (x, y))$

(8b) $G_{s}(x, y) = s_{g}(E_{s}(p), (x, y))$

(8c) $B_{s}(x, y) = s_{b}(E_{s}(p), (x, y))$

*Further,* Eq (8) *can be linearized based on the Taylor expansion around zero illuminance and approximated as the following*:

(9a) $R_{s}(x, y) \approx s_{r}(0, (x, y)) + s'_{r}(0, (x, y))\,E_{s}(p)$

(9b) $G_{s}(x, y) \approx s_{g}(0, (x, y)) + s'_{g}(0, (x, y))\,E_{s}(p)$

(9c) $B_{s}(x, y) \approx s_{b}(0, (x, y)) + s'_{b}(0, (x, y))\,E_{s}(p)$

*For common sensors, the output is 0 when the input is 0. Hence,* Eq (9) *can be simplified as follows*:

(10a) $R_{s}(x, y) \approx s'_{r}(0, (x, y))\,E_{s}(p)$

(10b) $G_{s}(x, y) \approx s'_{g}(0, (x, y))\,E_{s}(p)$

(10c) $B_{s}(x, y) \approx s'_{b}(0, (x, y))\,E_{s}(p)$

*Then, according to* (7), *the Ratio*_{R}(⋅) *with direct light from p can be approximately calculated as*

(11) $Ratio_{R}(RGB(x, y)) \approx \dfrac{s_{r}(\Delta E(p), (x, y)) + s'_{r}(\Delta E(p), (x, y))\,E_{s}(p)}{\sum_{c \in \{r, g, b\}} \left[ s_{c}(\Delta E(p), (x, y)) + s'_{c}(\Delta E(p), (x, y))\,E_{s}(p) \right]}$

*Moreover, according to* (10), *the Ratio*_{R}(⋅) *without direct light from p can be approximately calculated as*

(12) $Ratio_{R}(RGB_{s}(x, y)) \approx \dfrac{s'_{r}(0, (x, y))}{s'_{r}(0, (x, y)) + s'_{g}(0, (x, y)) + s'_{b}(0, (x, y))}$

*Obviously, in most of the cases, Ratio*_{R}(*RGB*_{s}(*x*, *y*)) ≠ *Ratio*_{R}(*RGB*(*x*, *y*)). *In other words, in the R channel, the pixel ratios with and without the direct light exposure are commonly unequal. The same issue is found in the G and B channels. Hence, in the nonlinear range, the RGBP-R feature no longer exists*.

*This completes the analysis of Observation 2*.
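A small numeric illustration of Observation 2 (the saturating response curves below are my own stand-in, not a calibrated sensor model): once the photoelectric response is nonlinear, the R-channel ratio differs between the lit and shadowed cases.

```python
# Hypothetical saturating photoelectric response: output is 0 at 0 input
# (as assumed for common sensors) and flattens at high illuminance.
def response(E, gain, knee):
    return gain * E / (E + knee)

def ratio_r(E):
    """R-channel ratio R/(R+G+B) under per-channel saturating responses."""
    r = response(E, gain=255, knee=400)
    g = response(E, gain=255, knee=600)
    b = response(E, gain=255, knee=800)
    return r / (r + g + b)

lit, shadow = ratio_r(1000.0), ratio_r(180.0)
print(round(lit, 3), round(shadow, 3))  # → 0.377 0.428
assert abs(lit - shadow) > 0.01  # the RGBP-R feature breaks down
```

With the linear model of Analysis 1 these two ratios would be identical; the nonlinearity alone is enough to separate them.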

In order to eliminate shadows more effectively and robustly, this paper mines a new image feature, i.e., the RGBP-DC feature, to adapt to most shadow removal situations.

**Observation 3** *The differences between the pixel values of point p with and without the direct light exposure in the R, G and B channels are defined as* Δ*R*(*x*, *y*), Δ*G*(*x*, *y*) *and* Δ*B*(*x*, *y*), *respectively. Under a stable monitoring scenario, there are stable linear relationships between* Δ*R*(*x*, *y*), Δ*G*(*x*, *y*) *and* Δ*B*(*x*, *y*).

**Analysis 3** *Under a stable monitoring scenario, the reduced illuminance* $\Delta E(p)$ *caused by the occlusion of light by different objects in different places is similar. Hence,* $\Delta E(p)$ *can be approximately replaced by a constant value C.*

*Substituting* $\Delta E(p) \approx C$ *into* Eq (7), *it can be further simplified as follows*:

(13a) $R(x, y) \approx s_{r}(C, (x, y)) + s'_{r}(C, (x, y))\,E_{s}(p)$

(13b) $G(x, y) \approx s_{g}(C, (x, y)) + s'_{g}(C, (x, y))\,E_{s}(p)$

(13c) $B(x, y) \approx s_{b}(C, (x, y)) + s'_{b}(C, (x, y))\,E_{s}(p)$

*Then, jointly with* Eq (10), *the differences of the pixel values of points with and without the direct light exposure in the different channels are calculated as follows*:

(14a) $\Delta R(x, y) = R(x, y) - R_{s}(x, y) \approx s_{r}(C, (x, y)) + \left[ s'_{r}(C, (x, y)) - s'_{r}(0, (x, y)) \right] E_{s}(p)$

(14b) $\Delta G(x, y) = G(x, y) - G_{s}(x, y) \approx s_{g}(C, (x, y)) + \left[ s'_{g}(C, (x, y)) - s'_{g}(0, (x, y)) \right] E_{s}(p)$

(14c) $\Delta B(x, y) = B(x, y) - B_{s}(x, y) \approx s_{b}(C, (x, y)) + \left[ s'_{b}(C, (x, y)) - s'_{b}(0, (x, y)) \right] E_{s}(p)$

*Because s*_{r}(*C*, (*x*, *y*)), *s*_{g}(*C*, (*x*, *y*)), *s*_{b}(*C*, (*x*, *y*)), $s'_{r}(C, (x, y))$, $s'_{g}(C, (x, y))$, $s'_{b}(C, (x, y))$, $s'_{r}(0, (x, y))$, $s'_{g}(0, (x, y))$ *and* $s'_{b}(0, (x, y))$ *are all unknown constants in stable monitoring scenarios, the stable relationships between* Δ*R*(*x*, *y*), Δ*G*(*x*, *y*) *and* Δ*B*(*x*, *y*) *can be obtained by eliminating* $E_{s}(p)$ *as follows*:

(15a) $\Delta R(x, y) \approx s_{r}(C, (x, y)) + \dfrac{s'_{r}(C, (x, y)) - s'_{r}(0, (x, y))}{s'_{b}(C, (x, y)) - s'_{b}(0, (x, y))} \left[ \Delta B(x, y) - s_{b}(C, (x, y)) \right]$

(15b) $\Delta G(x, y) \approx s_{g}(C, (x, y)) + \dfrac{s'_{g}(C, (x, y)) - s'_{g}(0, (x, y))}{s'_{b}(C, (x, y)) - s'_{b}(0, (x, y))} \left[ \Delta B(x, y) - s_{b}(C, (x, y)) \right]$

(15c) $\Delta R(x, y) \approx s_{r}(C, (x, y)) + \dfrac{s'_{r}(C, (x, y)) - s'_{r}(0, (x, y))}{s'_{g}(C, (x, y)) - s'_{g}(0, (x, y))} \left[ \Delta G(x, y) - s_{g}(C, (x, y)) \right]$

Eq (15) *can be further simplified as follows*:

(16a) $\Delta R(x, y) \approx N_{B2R}\,\Delta B(x, y) + M_{B2R}$

(16b) $\Delta G(x, y) \approx N_{B2G}\,\Delta B(x, y) + M_{B2G}$

(16c) $\Delta R(x, y) \approx N_{G2R}\,\Delta G(x, y) + M_{G2R}$

*where*

(17a) $N_{B2R} = \dfrac{s'_{r}(C, (x, y)) - s'_{r}(0, (x, y))}{s'_{b}(C, (x, y)) - s'_{b}(0, (x, y))}$

(17b) $M_{B2R} = s_{r}(C, (x, y)) - N_{B2R}\,s_{b}(C, (x, y))$

(17c) $N_{B2G} = \dfrac{s'_{g}(C, (x, y)) - s'_{g}(0, (x, y))}{s'_{b}(C, (x, y)) - s'_{b}(0, (x, y))}$

(17d) $M_{B2G} = s_{g}(C, (x, y)) - N_{B2G}\,s_{b}(C, (x, y))$

(17e) $N_{G2R} = \dfrac{s'_{r}(C, (x, y)) - s'_{r}(0, (x, y))}{s'_{g}(C, (x, y)) - s'_{g}(0, (x, y))}$

(17f) $M_{G2R} = s_{r}(C, (x, y)) - N_{G2R}\,s_{g}(C, (x, y))$

*Obviously, N*_{B2R}, *M*_{B2R}, *N*_{B2G}, *M*_{B2G}, *N*_{G2R} *and M*_{G2R} *are all unknown constants. Hence, under a stable monitoring scenario, all of these constants can be estimated in advance from known* Δ*R*(*x*, *y*), Δ*G*(*x*, *y*) *and* Δ*B*(*x*, *y*). *Then, the stable linear relationships between the pixel differential values in the R, G and B channels can be derived*.

*This completes the analysis of Observation 3*.

Therefore, under the linear correction in Eq (16), the corrected differences between Δ*R*(*x*, *y*), Δ*G*(*x*, *y*) and Δ*B*(*x*, *y*) are very small in shadow areas. That is:

(18a) $\Delta R(x, y) - \left[ N_{B2R}\,\Delta B(x, y) + M_{B2R} \right] \approx 0$

(18b) $\Delta G(x, y) - \left[ N_{B2G}\,\Delta B(x, y) + M_{B2G} \right] \approx 0$

(18c) $\Delta R(x, y) - \left[ N_{G2R}\,\Delta G(x, y) + M_{G2R} \right] \approx 0$

Given any small threshold *T*, it is easy to verify that:

(19a) $\left| \Delta R(x, y) - \left[ N_{B2R}\,\Delta B(x, y) + M_{B2R} \right] \right| < T$

(19b) $\left| \Delta G(x, y) - \left[ N_{B2G}\,\Delta B(x, y) + M_{B2G} \right] \right| < T$

(19c) $\left| \Delta R(x, y) - \left[ N_{G2R}\,\Delta G(x, y) + M_{G2R} \right] \right| < T$

This is the RGBP-DC feature, which can be used to discover and remove the shadow areas.
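A minimal sketch of the resulting per-pixel test, assuming the six correction constants are already known. The constant values below are placeholders; the threshold 8 matches the setting used later in the Experiments Section.

```python
# Sketch of the RGBP-DC shadow test: a pixel whose channel differences
# obey all three corrected linear relations (within T) is treated as shadow.
def is_shadow(dR, dG, dB, params, T=8.0):
    """params = (N_B2R, M_B2R, N_B2G, M_B2G, N_G2R, M_G2R)."""
    n_b2r, m_b2r, n_b2g, m_b2g, n_g2r, m_g2r = params
    return (abs(dR - (n_b2r * dB + m_b2r)) < T
            and abs(dG - (n_b2g * dB + m_b2g)) < T
            and abs(dR - (n_g2r * dG + m_g2r)) < T)

params = (1.1, 2.0, 1.05, 1.0, 1.05, 1.0)  # hypothetical constants

print(is_shadow(46.0, 43.0, 40.0, params))   # → True  (consistent attenuation)
print(is_shadow(120.0, 10.0, 30.0, params))  # → False (object-like channels)
```

A shadow attenuates all three channels in a mutually consistent way, so all three residuals stay small; an object edit to the scene generally violates at least one relation.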

## Differential correction based shadow removal method

In this section, the proposed DC-SR method is described in detail. Firstly, based on ISTD dataset [3, 4], a set of parameters in Eq (18) is determined for surveillance environments under common daylight. Secondly, the structure of shadow removal method is designed and the algorithmic complexity is discussed.

### Parameter estimation according to the ISTD dataset

As seen in Eq (16), constants *N*_{B2R}, *M*_{B2R}, *N*_{B2G}, *M*_{B2G}, *N*_{G2R} and *M*_{G2R} can be learnt as the unknown parameters, given known Δ*R*(*x*, *y*), Δ*G*(*x*, *y*) and Δ*B*(*x*, *y*) under actual monitoring scenes.

A major surveillance scenario is monitoring during the day or under sunlight lamps, where the light source is sunlight. This paper uses the ISTD dataset, in which all of the images are taken under sunlight, to estimate the unknown parameters for such scenes. Specifically, as seen in Fig 1, there are three kinds of images in each triplet of the ISTD dataset: the shadow image, the shadow mask image and the shadow-free image. To obtain stable relationships between Δ*R*(*x*, *y*), Δ*G*(*x*, *y*) and Δ*B*(*x*, *y*), this paper derives the values of *N*_{B2R}, *M*_{B2R}, *N*_{B2G}, *M*_{B2G}, *N*_{G2R} and *M*_{G2R} from the statistics of all triplets in this dataset.

(a) shadow image. (b) shadow mask image. (c) shadow-free image.

First, for the *i*th triplet, $\Delta\bar{R}(i)$, $\Delta\bar{G}(i)$ and $\Delta\bar{B}(i)$, i.e., the means of all differences between the pixel values without and with shadow over the shadow area in the R, G and B channels, respectively, are calculated as:

(20a) $\Delta\bar{R}(i) = \dfrac{1}{N} \sum_{\mathbf{p} \in P_{s}} \left[ R_{sf}(i, \mathbf{p}) - R_{s}(i, \mathbf{p}) \right]$

(20b) $\Delta\bar{G}(i) = \dfrac{1}{N} \sum_{\mathbf{p} \in P_{s}} \left[ G_{sf}(i, \mathbf{p}) - G_{s}(i, \mathbf{p}) \right]$

(20c) $\Delta\bar{B}(i) = \dfrac{1}{N} \sum_{\mathbf{p} \in P_{s}} \left[ B_{sf}(i, \mathbf{p}) - B_{s}(i, \mathbf{p}) \right]$

where *P*_{s} is the set of shadow-area pixels of the images of the *i*th triplet, i.e., the pixels **p** whose value *M*(*i*, **p**) in the shadow mask image marks shadow, and *N* is the number of pixels in *P*_{s}. *R*_{sf}(*i*, **p**), *G*_{sf}(*i*, **p**), *B*_{sf}(*i*, **p**) and *R*_{s}(*i*, **p**), *G*_{s}(*i*, **p**), *B*_{s}(*i*, **p**) are the values of **p** in the R, G and B channels of the shadow-free image and the shadow image, respectively.

To simplify the expressions, this paper uses $\Delta\bar{R}$, $\Delta\bar{G}$ and $\Delta\bar{B}$ as common notations for the means of differences of any triplet. As seen in Eq (16), there are linear relationships between Δ*R*(*x*, *y*), Δ*G*(*x*, *y*) and Δ*B*(*x*, *y*). Obviously, $\Delta\bar{R}$, $\Delta\bar{G}$ and $\Delta\bar{B}$ will also obey linear relationships, and therefore obey the RGBP-DC feature. To simplify subsequent calculations, this paper analyses the relationships between $\Delta\bar{R}$, $\Delta\bar{G}$ and $\Delta\bar{B}$ to obtain the RGBP-DC feature, instead of those between Δ*R*(*x*, *y*), Δ*G*(*x*, *y*) and Δ*B*(*x*, *y*).

The relationships between $\Delta\bar{R}(i)$, $\Delta\bar{G}(i)$ and $\Delta\bar{B}(i)$ over all the triplets in the ISTD dataset are summarized in Fig 2. In this figure, the red points in the three sub-figures are the pairs $(\Delta\bar{B}(i), \Delta\bar{R}(i))$, $(\Delta\bar{B}(i), \Delta\bar{G}(i))$ and $(\Delta\bar{G}(i), \Delta\bar{R}(i))$ of the *i*th triplet. Obviously, linear relationships can be fitted, shown as the blue lines in Fig 2, with the functions:

(21a) $\Delta\bar{R} = N_{B2R}\,\Delta\bar{B} + M_{B2R}$

(21b) $\Delta\bar{G} = N_{B2G}\,\Delta\bar{B} + M_{B2G}$

(21c) $\Delta\bar{R} = N_{G2R}\,\Delta\bar{G} + M_{G2R}$

where the fitted values of the six constants are those of the blue lines in Fig 2. Hence, according to Eq (21), the RGBP-DC feature for common daylight monitoring is derived.
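The per-pair line fitting behind this step can be sketched with an ordinary least-squares fit. The data below are synthetic stand-ins for the ISTD per-triplet means, and the slope/intercept values are illustrative, not the paper's fitted constants:

```python
import numpy as np

# Synthetic per-triplet mean differences: an assumed linear relation
# (slope 1.12, intercept 3.0) plus measurement noise.
rng = np.random.default_rng(0)
dB_bar = rng.uniform(20, 120, size=200)                     # mean ΔB per triplet
dR_bar = 1.12 * dB_bar + 3.0 + rng.normal(0, 1.5, 200)      # mean ΔR per triplet

# Least-squares fit of ΔR ≈ N_B2R·ΔB + M_B2R, as for the blue lines in Fig 2.
n_b2r, m_b2r = np.polyfit(dB_bar, dR_bar, deg=1)
print(round(n_b2r, 2), round(m_b2r, 1))  # recovers ≈ (1.12, 3.0)
```

The same fit is repeated for the (ΔB, ΔG) and (ΔG, ΔR) pairs to obtain the remaining four constants.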

### The structure of DC-SR method

Generally speaking, the goal of shadow removal for monitoring is to eliminate the effect of shadows on object recognition. When objects are detected by cameras, the shadows are also detected as part of the objects, seriously affecting detection accuracy. Because the RGBP-DC feature does not hold in the actual object areas of the foreground, the shadow areas can be found and distinguished from the object areas by the RGBP-DC feature.

Specifically, based on the RGBP-DC feature, the structure of the proposed DC-SR method is described in Fig 3. As we can see in this figure, a background image *I*_{b} needs to be determined first, before monitoring. Then, given a foreground image *I*_{f}, the absolute value of the first difference of the complete image, ∣Δ*I*_{c}∣, is calculated as follows,

(22) $\left| \Delta I_{c} \right| = \left| I_{b} - I_{f} \right|$

Obviously, ∣Δ*I*_{c}∣ includes the R, G and B channels, i.e., ∣Δ*R*_{c}∣, ∣Δ*G*_{c}∣ and ∣Δ*B*_{c}∣. In addition, the grayscales of the foreground and background are calculated, and the mask of objects with shadow in the foreground image, denoted as *I*_{m}, is derived by thresholding the absolute differences of the grayscales. Then, the first-order differences of objects with shadow in the R, G and B channels are calculated as follows,

(23a) $\left| \Delta R \right| = \left| \Delta R_{c} \right| \cdot I_{m}$

(23b) $\left| \Delta G \right| = \left| \Delta G_{c} \right| \cdot I_{m}$

(23c) $\left| \Delta B \right| = \left| \Delta B_{c} \right| \cdot I_{m}$

According to the RGBP-DC feature, the second differences are calculated by the differential correction (DC) of Eq (21) as follows,

(24a) $\Delta\Delta R\&B = \left| \Delta R \right| - \left[ N_{B2R} \left| \Delta B \right| + M_{B2R} \right]$

(24b) $\Delta\Delta G\&B = \left| \Delta G \right| - \left[ N_{B2G} \left| \Delta B \right| + M_{B2G} \right]$

(24c) $\Delta\Delta R\&G = \left| \Delta R \right| - \left[ N_{G2R} \left| \Delta G \right| + M_{G2R} \right]$

A proper constant *T* is set in "Thresholding2" to distinguish the shadow as follows:

(25a) $\left| \Delta\Delta R\&B(x, y) \right| < T$

(25b) $\left| \Delta\Delta G\&B(x, y) \right| < T$

(25c) $\left| \Delta\Delta R\&G(x, y) \right| < T$

Then any pixel that satisfies (25) is considered to be a shadow pixel and removed. That is, in "Thresholding2", all pixels satisfying the inequalities in (25) are set to 0, and the other pixels in the mask are set to 255. The binary image of the object after shadow removal is thereby derived. The whole calculation process of this method is summarized in Algorithm 1. All the codes and results can be found at: https://github.com/ljx43031/DC-SR-method.

**Algorithm 1** DC-SR method

**Input**: Foreground image *I*_{f}, background image *I*_{b}, the values of *N*_{B2R}, *M*_{B2R}, *N*_{B2G}, *M*_{B2G}, *N*_{G2R} and *M*_{G2R}, threshold *T*.

**Output**: The binary image of objects without shadows.

1. ∣Δ*I*_{c}∣ is calculated as: ∣Δ*I*_{c}∣ = ∣*I*_{b} − *I*_{f}∣.

2. Grayscale *I*_{f} and *I*_{b} to get *I*_{fg} and *I*_{bg}.

3. Thresholding ∣*I*_{fg} − *I*_{bg}∣ → *I*_{m}

4. ∣Δ*I*_{c}[:, :, 2]∣ ⋅ *I*_{m} → ∣Δ*R*∣

5. ∣Δ*I*_{c}[:, :, 1]∣ ⋅ *I*_{m} → ∣Δ*G*∣

6. ∣Δ*I*_{c}[:, :, 0]∣ ⋅ *I*_{m} → ∣Δ*B*∣

7. ∣Δ*R*∣ − [*M*_{B2R} + *N*_{B2R}∣Δ*B*∣] → ΔΔ*R*&*B*

8. ∣Δ*G*∣ − [*M*_{B2G} + *N*_{B2G}∣Δ*B*∣] → ΔΔ*G*&*B*

9. ∣Δ*R*∣ − [*M*_{G2R} + *N*_{G2R}∣Δ*G*∣] → ΔΔ*R*&*G*

10. **For** each pixel *p*(*x*, *y*) in the mask *I*_{m}:

(a) **If** ∣ΔΔ*R*&*B*(*x*, *y*)∣ < *T* and ∣ΔΔ*G*&*B*(*x*, *y*)∣ < *T* and ∣ΔΔ*R*&*G*(*x*, *y*)∣ < *T*:

**The value of** *p*(*x*, *y*) **is set to 0** (shadow pixel, removed)

(b) **Else**:

**The value of** *p*(*x*, *y*) **is set to 255** (object pixel)

**End**
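The steps of Algorithm 1 can be sketched in vectorized NumPy form. This is my own illustration, not the authors' released code: the correction constants, mask threshold, and OpenCV-style BGR channel order (so `[:, :, 2]` is R) are all assumptions here.

```python
import numpy as np

def dc_sr(fg, bg, n_b2r, m_b2r, n_b2g, m_b2g, n_g2r, m_g2r,
          mask_thresh=30, T=8):
    """Vectorized sketch of Algorithm 1; fg/bg are HxWx3 BGR uint8 images."""
    d = np.abs(bg.astype(np.float64) - fg.astype(np.float64))  # step 1
    gray_f, gray_b = fg.mean(axis=2), bg.mean(axis=2)          # step 2
    i_m = np.abs(gray_f - gray_b) > mask_thresh                # step 3: mask
    dB, dG, dR = (d[:, :, k] * i_m for k in range(3))          # steps 4-6
    ddRB = np.abs(dR - (n_b2r * dB + m_b2r))                   # steps 7-9
    ddGB = np.abs(dG - (n_b2g * dB + m_b2g))
    ddRG = np.abs(dR - (n_g2r * dG + m_g2r))
    shadow = (ddRB < T) & (ddGB < T) & (ddRG < T)              # step 10
    # Object pixels (in mask, not shadow) → 255; shadow/background → 0.
    return np.where(i_m & ~shadow, 255, 0).astype(np.uint8)

# Tiny usage example on synthetic 2x2 images (placeholder constants):
bg = np.full((2, 2, 3), 200, np.uint8)
fg = bg.copy()
fg[0, 0] = (90, 85, 80)    # shadow-like pixel: consistent attenuation
fg[1, 1] = (10, 200, 30)   # object-like pixel: inconsistent channels
out = dc_sr(fg, bg, 1.05, 2.0, 1.05, 1.0, 1.0, 5.0)
```

Here `out[0, 0]` ends up 0 (shadow removed) while `out[1, 1]` is 255 (object kept); pixels equal to the background fall outside the mask and stay 0.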

### Time complexity analysis

As described in Algorithm 1, there are 10 steps in each shadow removal calculation. Assuming the image size is *N* × *M* × 3, step 1 contains *N* × *M* × 3 subtractions and absolute value calculations, hence its time complexity is **O**(*N* × *M* × 6). Step 2 grayscales *I*_{f} and *I*_{b}, which in fact averages the pixel values of *I*_{f} and *I*_{b}; this performs two additions and one division for each pixel of each image, so the time complexity is **O**(*N* × *M* × 6). Step 3 contains *N* × *M* subtractions and thresholding calculations, hence the time complexity is **O**(*N* × *M* × 2). Moreover, it can easily be seen that the time complexity is **O**(*N* × *M* × 3) for steps 4 to 6, and **O**(*N* × *M* × 9) for steps 7 to 9. Step 10 consists of the judgements for each pixel, whose time complexity is **O**(*N* × *M* × 3). Obviously, the total time complexity of this algorithm is **O**(*N* × *M* × 29). In other words, the time complexity of this method is linearly related to the number of pixels of the video frame.
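As a quick arithmetic check of this tally, the per-pixel operation counts attributed to each step sum to the constant 29 in the overall bound:

```python
# Per-pixel operation counts per step group, as tallied in the analysis.
ops_per_pixel = {"step 1": 6, "step 2": 6, "step 3": 2,
                 "steps 4-6": 3, "steps 7-9": 9, "step 10": 3}
total = sum(ops_per_pixel.values())
print(total)  # → 29, i.e., O(N × M × 29) overall
```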

## Experiments

In this section, our DC-SR method is evaluated in both outdoor and indoor (sunlight lamp) environments, in comparison with the RGB-PRC method and the Gray Levels Comparison (GLC) method [18]. Further, we test our DC-SR method under real-time monitoring to prove its reliability and stability. For a fair comparison, the thresholds used in all the aforementioned methods are fixed. Specifically, the threshold used in our DC-SR method is set to 8. According to [10, 18], the thresholds used in the RGB-PRC and GLC methods are set to 0.008 and 35, respectively.

### The implementation description

In this paper, we use a Hikvision 2-megapixel USB camera to take photos and an ordinary computer to run the program of the proposed method.

### Analysis of shadow removal performance in static scene

#### Outdoor environment.

We compare the shadow removal performances of the aforementioned three methods in an outdoor environment. The results are shown in Fig 4. In this figure, the first column shows the background image, the second column shows the foreground image, the third column shows the foreground image with the object circled in red lines, and the fourth to sixth columns show the shadow removal results of the GLC, RGB-PRC and DC-SR methods, respectively. In this bright outdoor environment, the backlit sides of the objects are very dark and thus very similar to the shadows in terms of the intensity of reflected light. Hence, methods such as GLC, which distinguish shadows by relying on the intensity of light reflections, will fail. This problem can be clearly seen in the fourth column of Fig 4. Moreover, as mentioned in the RGB Pixel Differential Correction Feature Section, the RGBP-R feature is not accurate enough to distinguish the shadow area from the object area in such bright light environments. Hence, as seen in the fifth column of Fig 4, the shadow removal performance of the RGB-PRC method degrades in these environments. That is, if the object is completely detected, the shadow cannot be perfectly removed, as in the image in the third row and fifth column; conversely, if the shadow is perfectly removed, the object cannot be completely detected, as in the image in the sixth row and fifth column. However, the shadow removal results in the sixth column of Fig 4 show that the proposed DC-SR method can accurately detect the object while removing the shadow well. Hence, our DC-SR method outperforms the other shadow removal methods in outdoor environments with bright light.

(a) Background. (b) Foreground. (c) Object circled in red. (d) GLC. (e) RGB-PRC. (f) DC-SR.

#### Indoor environment.

As seen in Fig 5, the performances of both GLC and RGB-PRC improve because the light intensity is much weaker than sunlight. But clearly, the proposed DC-SR method still provides the most accurate object detection results with similar shadow removal performance.

(a) Background. (b) Foreground. (c) Object circled in red. (d) GLC. (e) RGB-PRC. (f) DC-SR.

#### Evaluation metric.

To further verify the shadow removal effect of the DC-SR method, we propose the average of the object and shadow detection accuracies as the evaluation metric. Specifically, the essence of shadow removal is to distinguish the shadows from the objects. In other words, the objects need to be correctly detected while the shadows are removed well. Hence, we average the object and shadow detection accuracies to obtain a proper overall merit for shadow removal as follows:

(26) $Acc = \dfrac{1}{2} \left( \dfrac{N_{o}^{d}}{N_{o}^{a}} + \dfrac{N_{s}^{d}}{N_{s}^{a}} \right)$

where $N_{o}^{d}$ and $N_{s}^{d}$ are the numbers of pixels of the correctly detected object and shadow areas, respectively, and $N_{o}^{a}$ and $N_{s}^{a}$ are the numbers of pixels of the actual object and shadow areas, respectively. To use the metric in (26), the actual object and shadow areas need to be known first. Hence, we manually marked the contours of the object for each case, as seen in the third columns of Figs 4 and 5, to obtain the actual object area. Further, we subtract the actual object area from the difference image of the foreground and background to obtain the actual shadow area. The comparison results are shown in Table 1. The cases Outdoors 1 to 6 correspond to the rows of Fig 4 and the cases Indoors 1 to 3 correspond to the rows of Fig 5. In Table 1, we can see that, in each case, the proposed DC-SR method improves the average of the object and shadow detection accuracies by at least 10%, except for Indoors 3. But in fact, the average accuracy of the proposed method is still higher than those of the other methods in the Indoors 3 case. Hence, our DC-SR method outperforms the other state-of-the-art shadow removal methods.
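Under my reading of this metric (the mean of the object and shadow detection accuracies, each measured against manually marked ground truth), the computation can be sketched as follows; the arrays and the 255/0 output convention are illustrative assumptions:

```python
import numpy as np

def average_accuracy(pred, gt_object, gt_shadow):
    """pred: 255 for detected object pixels, 0 elsewhere (shadow removed).
    gt_object / gt_shadow: boolean masks of the actual areas."""
    obj_acc = np.sum((pred == 255) & gt_object) / np.sum(gt_object)
    shadow_acc = np.sum((pred == 0) & gt_shadow) / np.sum(gt_shadow)
    return (obj_acc + shadow_acc) / 2

# Toy 2x2 example: the object pixel is detected, one of the two
# shadow pixels is correctly suppressed.
gt_object = np.array([[True, False], [False, False]])
gt_shadow = np.array([[False, True], [True, False]])
pred = np.array([[255, 0], [255, 0]])
print(average_accuracy(pred, gt_object, gt_shadow))  # → 0.75
```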

### Performance testing for monitoring

To further verify the performance of our DC-SR method for monitoring, we test it under real-time monitoring. The results are shown in Figs 6 and 7. In the first rows of the two figures, the blue tracking boxes correctly frame the object area without the shadow area at all time steps. Moreover, in the second rows of the two figures, the white areas are the objects detected after shadow removal. Clearly, no shadow areas are included and the car is correctly detected.

(a) time step 1. (b) time step 2. (c) time step 3.

(a) time step 1. (b) time step 2. (c) time step 3.

Meanwhile, we test the computational time of all the aforementioned methods on the same Intel Core i7-8700 CPU at 3.2 GHz with 32 GB RAM. The proposed DC-SR method consumes 19.9 ms for each removal calculation, while the GLC and RGB-PRC methods consume 5.0 and 20.9 ms, respectively. Hence, the computational efficiency of the proposed DC-SR method meets the requirements of monitoring, which is further verified by the fact that no frame drops were observed in the real-time monitoring experiments. Therefore, our DC-SR method can achieve efficient and accurate shadow removal in real-time monitoring.

## Conclusion

In this paper, we propose a new differential correction based shadow removal (DC-SR) method built on the new RGB pixel differential correction (RGBP-DC) feature of shadow areas. In terms of shadow removal effect, the proposed RGBP-DC feature, which distinguishes shadow areas from objects well, is more suitable for shadow removal under both daylight and sunlight lamp environments. Experiments prove that our DC-SR method performs better than the state-of-the-art shadow removal methods for monitoring. Further, the results of the time complexity analysis and the algorithm tests in real-time monitoring show that our DC-SR method can remove shadows efficiently and accurately.

In fact, the performance of our DC-SR method is closely related to the parameters in (16). Although those parameters are set based on the ISTD dataset, which covers the main daylight environments and represents the most common relationship between shadow and shadow-free images, the performance of our method will still degrade in some special low-light or polarized-light environments. How to improve the adaptability of the method to those special environments is an important direction for future work.

## References

- 1. Yaghoobi Ershadi N, Menéndez JM, Jiménez D. Robust vehicle detection in different weather conditions: Using MIPM. PloS one. 2018;13(3):e0191355. pmid:29513664
- 2. Liu F, Zeng Z, Jiang R. A video-based real-time adaptive vehicle-counting system for urban roads. PloS one. 2017;12(11):e0186098. pmid:29135984
- 3. Wang J, Li X, Yang J. Stacked conditional generative adversarial networks for jointly learning shadow detection and shadow removal. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2018. p. 1788–1797.
- 4. Le H, Samaras D. Shadow removal via shadow image decomposition. In: Proceedings of the IEEE/CVF International Conference on Computer Vision; 2019. p. 8578–8587.
- 5. Fu L, Zhou C, Guo Q, Juefei-Xu F, Yu H, Feng W, et al. Auto-exposure fusion for single-image shadow removal. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2021. p. 10571–10580.
- 6. Qu L, Tian J, He S, Tang Y, Lau RW. DeshadowNet: A multi-context embedding deep network for shadow removal. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. p. 4067–4075.
- 7. Cun X, Pun CM, Shi C. Towards ghost-free shadow removal via dual hierarchical aggregation network and shadow matting GAN. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 34; 2020. p. 10680–10687.
- 8. Liu R, Ruichek Y, El Bagdouri M. Multispectral background subtraction with deep learning. Journal of Visual Communication and Image Representation. 2021;80:103267.
- 9. Vijayan M, Raguraman P, Mohan R. A Fully Residual Convolutional Neural Network for Background Subtraction. Pattern Recognition Letters. 2021;146:63–69.
- 10. Chen R, Li P, Huang Y. Moving Shadow Removal Algorithm Based on Multi-feature Fusion. Computer Science. 2018;45(6):291–295.
- 11. Zhang J, Guo X, Zhang C, Liu P. A vehicle detection and shadow elimination method based on greyscale information, edge information, and prior knowledge. Computers & Electrical Engineering. 2021;94:107366.
- 12. Amin B, Riaz MM, Ghafoor A. Automatic shadow detection and removal using image matting. Signal Processing. 2020;170:107415.
- 13. Murali S, Govindan V, Kalady S. Shadow removal from uniform-textured images using iterative thresholding of shearlet coefficients. Multimedia Tools and Applications. 2019;78(15):21167–21186.
- 14. Xu H, Hou X, Qin Y. Vehicle Shadow Elimination Method Based on HSV Color Feature. Journal of Xuzhou Institute of Technology (Natural Science Edition). 2016;31(02):5–8.
- 15. Park KH, Lee YS. Simple shadow removal using shadow depth map and illumination-invariant feature. The Journal of Supercomputing. 2021; p. 1–16.
- 16. Li Y, Cao K, Wang J, Mei C. A Vehicle Shadow Elimination Algorithm Based on Principal Component Analysis algorithm. Science Technology and Engineering. 2017;17(28):91–97.
- 17. Salvador E, Cavallaro A, Ebrahimi T. Cast shadow segmentation using invariant color features. Computer vision and image understanding. 2004;95(2):238–259.
- 18. Tang C, Ahmad MO, Wang C. An efficient method of cast shadow removal using multiple features. Signal, Image and Video Processing. 2013;7(4):695–703.
- 19. Jin X, Yue S, Liu L, Chen M, Zhao Y, Wang C. Research on CMOS image sensor hard reset circuit. Acta Electronica Sinica. 2014;42(1):182.
- 20. Tian H, Fowler B, Gamal AE. Analysis of temporal noise in CMOS photodiode active pixel sensor. IEEE Journal of Solid-State Circuits. 2001;36(1):92–101.