
SLMFNet: Enhancing land cover classification of remote sensing images through selective attentions and multi-level feature fusion

  • Xin Li,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Visualization, Writing – original draft

    Affiliation College of Computer and Information, Hohai University, Nanjing, Jiangsu, China

  • Hejing Zhao,

    Roles Data curation, Formal analysis, Resources

    Affiliations Water History Department, China Institute of Water Resources and Hydropower Research, Beijing, China, Research Center on Flood and Drought Disaster Reduction of Ministry of Water Resource, China Institute of Water Resources and Hydropower Research, Beijing, China

  • Dan Wu,

    Roles Conceptualization, Funding acquisition, Methodology, Writing – original draft

    wudan@hky.yrcc.gov.cn

    Affiliations Information Engineering Center, Yellow River Institute of Hydraulic Research, Yellow River Conservancy Commission of the Ministry of Water Resources, Zhengzhou, Henan, China, Key Laboratory of Yellow River Sediment Research, MWR (Ministry of Water Resources), Zhengzhou, Henan, China, Henan Engineering Research Center of Smart Water Conservancy, Yellow River Institute of Hydraulic Research, Zhengzhou, Henan, China

  • Qixing Liu,

    Roles Data curation, Resources, Visualization

    Affiliations Information Engineering Center, Yellow River Institute of Hydraulic Research, Yellow River Conservancy Commission of the Ministry of Water Resources, Zhengzhou, Henan, China, Key Laboratory of Yellow River Sediment Research, MWR (Ministry of Water Resources), Zhengzhou, Henan, China, Henan Engineering Research Center of Smart Water Conservancy, Yellow River Institute of Hydraulic Research, Zhengzhou, Henan, China

  • Rui Tang,

    Roles Data curation, Software

    Affiliation Department of Orthopedics, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China

  • Linyang Li,

    Roles Conceptualization, Resources, Writing – original draft

    Affiliation School of Geodesy and Geomatics, Wuhan University, Wuhan, Hubei, China

  • Zhennan Xu,

    Roles Data curation, Resources, Software, Validation, Writing – original draft

    Affiliation College of Computer and Information, Hohai University, Nanjing, Jiangsu, China

  • Xin Lyu

    Roles Conceptualization, Funding acquisition, Methodology, Resources, Software, Writing – original draft

    Affiliation College of Computer and Information, Hohai University, Nanjing, Jiangsu, China

Abstract

Land cover classification (LCC) is of paramount importance for assessing environmental changes in remote sensing images (RSIs) as it involves assigning categorical labels to ground objects. The growing availability of multi-source RSIs presents an opportunity for intelligent LCC through semantic segmentation, offering a comprehensive understanding of ground objects. Nonetheless, the heterogeneous appearances of terrains and objects contribute to significant intra-class variance and inter-class similarity at various scales, adding complexity to this task. In response, we introduce SLMFNet, an innovative encoder-decoder segmentation network that adeptly addresses this challenge. To mitigate the sparse and imbalanced distribution of RSIs, we incorporate selective attention modules (SAMs) aimed at enhancing the distinguishability of learned representations by integrating contextual affinities within spatial and channel domains through a compact number of matrix operations. Precisely, the selective position attention module (SPAM) employs spatial pyramid pooling (SPP) to resample feature anchors and compute contextual affinities. In tandem, the selective channel attention module (SCAM) concentrates on capturing channel-wise affinity. Initially, feature maps are aggregated into fewer channels, followed by the generation of pairwise channel attention maps between the aggregated channels and all channels. To harness fine-grained details across multiple scales, we introduce a multi-level feature fusion decoder with data-dependent upsampling (MLFD) to meticulously recover and merge feature maps at diverse scales using a trainable projection matrix. Empirical results on the ISPRS Potsdam and DeepGlobe datasets underscore the superior performance of SLMFNet compared to various state-of-the-art methods. Ablation studies affirm the efficacy and precision of SAMs in the proposed model.

1 Introduction

Land cover classification (LCC) in remote sensing images (RSIs) is a fundamental task that quantitatively identifies various object types, including grassland, roads, buildings, water, and forests [1, 2]. The advent of deep learning and semantic segmentation has enabled pixel-level label generation for RSIs in an end-to-end automatic manner, meeting the demand for accurately interpreted images in practical applications like water resource management [3–5], precision agriculture [6–8], and hazard assessment [9, 10]. However, accurate surface environment evaluation through segmentation methods remains a challenge due to easily confused objects and discrete elements.

Semantic segmentation inherently involves three sub-problems: object recognition, localization, and boundary delineation [11–13]. Effectively addressing all these sub-tasks is essential for creating a robust network. Traditional approaches often rely on expert-designed feature extractors that struggle to adapt to complex and diverse scenarios [14]. In contrast, data-driven technologies such as deep convolutional neural networks (DCNNs) have made significant advancements [15–18]. Various DCNN-based models have gained prominence in remote sensing image (RSI) classification tasks [19, 20], with semantic segmentation playing a vital role in achieving detailed understanding.

The fully convolutional network (FCN) marked a breakthrough in computer vision by replacing fully connected layers with convolution layers, allowing an end-to-end trainable neural network for dense pixel predictions [21]. To address transformation loss during training and inference in FCNs, the encoder-decoder architecture SegNet [22] was introduced to gradually recover the spatial dimensions of feature maps. Furthermore, U-Net [23], initially successful on medical images, was extended to remote sensing images (RSIs). FCN, SegNet, and U-Net have provided a solid foundation for subsequent research. However, they share an inherent limitation: their structural rigidity restricts the utilization of informative and discriminative contexts.

We conduct a retrospective analysis of subsequently designed networks and classify them into two categories. The first category involves expanding the receptive field through dilated convolutions. For example, the DeepLab series [24–26] emerged by adjusting dilation rates to encompass more neighboring pixels in the convolution units. However, this approach leads to blurred edges where a large number of error-prone pixels are grouped together, negatively impacting the overall segmentation accuracy.

In contrast, attention mechanisms (AMs) have been developed as an alternative solution to address the diverse and complex variety of scenes. AMs aggregate more informative context, making various contextual dependencies quantifiable and obtainable. By injecting attentive features, pixels in both salient and unremarkable regions are better predicted. For instance, SENet [27] was designed to recalibrate channel-wise feature weights by modeling channel interdependencies, while NLNet [28] classifies video objects by capturing self-attentive correlations. Additionally, DANet [29] was explored by designing and integrating position and channel attention modules. However, these methods incur considerable memory usage and processing time, which are significant concerns. From the above analysis, two key issues remain:

  1. RSIs encompass a diverse array of complex ground objects, resulting in a sparse and imbalanced distribution. This high intra-class variation and inter-class similarity pose challenges for conventional networks. While attention mechanisms enhance the network’s capabilities by capturing and injecting pairwise correlations of pixels and channels, they also introduce memory usage and processing time overhead. Additionally, treating all paired pixels and channels equally may introduce noise with unnecessary dependencies, hindering feature optimization. Therefore, there is a need to selectively leverage the pairwise correlations of pixels and channels to enhance representations effectively.
  2. Furthermore, feature aggregation during decoding is crucial for preserving fine-grained details and structural information. Conventional decoders often use bilinear upsampling to recover spatial dimensions, resulting in transformation loss. Moreover, insufficient fusion of multi-level representations suppresses favorable feature maps for dense prediction. Hence, the lossless aggregation of multi-level features in the decoder warrants further exploration.

To address the mentioned challenges, we propose two novel and efficient attention modules that selectively learn contextual affinities, resulting in attention maps with reduced computational overhead. The first module, named the Selective Position Attention Module (SPAM), leverages spatial pyramid pooling (SPP) [30] to resample feature anchors. The second module, the Selective Channel Attention Module (SCAM), aggregates channel-wise features and calculates affinities between each channel and the aggregated ones. Additionally, we introduce a multi-level feature fusion decoder that employs data-dependent upsampling (DUpsampling) [31] to recover and fuse features without loss. Our contributions can be summarized as follows:

  1. We propose SPAM, which selectively learns spatial context. Specifically, SPAM calculates contextual affinities using SPP to extract multi-scale feature anchors, reducing complexity while enhancing distinguishability, particularly for sparse and imbalanced RSIs.
  2. We propose SCAM to selectively learn channel-wise contexts. More concretely, SCAM aggregates channel-wise contextual information by reducing feature maps to fewer channels, capturing channel-attentive dependencies without information loss.
  3. We present the Multi-Level Feature Fusion Decoder (MLFD) for the aggregation of multi-level feature maps. MLFD utilizes DUpsampling to increase spatial dimensions with a learnable projection matrix, minimizing transformation loss while preserving spatial details.
  4. We construct SLMFNet, incorporating the above modules, and evaluate its performance on two datasets: ISPRS Potsdam and DeepGlobe [32], which involve aerial and satellite images. We conduct both quantitative and qualitative comparisons and validate the efficiency and effectiveness of MLFD through an ablation study.

The remainder of this paper is organized as follows: Section 2 introduces related works. Section 3 presents the proposed framework and the pipeline of sub-modules in detail. Section 4 offers a quantitative and qualitative evaluation of SLMFNet. Finally, our conclusions are presented in Section 5.

2 Related works

2.1 Land cover classification by semantic segmentation of RSIs

Motivated by significant advances and the proliferation of updated CNN-based architectures [33–35], numerous transferable models have been applied to remote sensing images (RSIs) with heuristic enhancements. Each pixel in an RSI carries semantic information, encompassing various elements of the landscape and prominent objects such as rivers, farmland, vehicles, buildings, and more. Additionally, RSIs are captured from high altitudes, presenting two major challenges that a robust RSI segmentation network must confront. Several attempts have been made to enhance segmentation accuracy in RSIs.

Commonly employed post-processing techniques, such as boundary smoothing [36], have been utilized. These methods consider implicit spatial correlations through the gradual merging of neighboring segments, leading to more precise border labeling. Other approaches utilize pre-generated superpixels [37] or connected conditional random fields (CRFs) [38] to refine segments. Color-infrared images are intrinsically used as pairwise potentials for distance evaluation between pixels.

In addition to post-processing, end-to-end trainable networks hold practical significance. Numerous methodologies have been proposed to enrich contextual information in learned representations. For example, HRNet [39], introduced by [40], replaces the original encoder to generate high-resolution feature maps, demonstrating superiority on ISPRS benchmarks. Similarly, [41] proposed a self-cascaded network using dilated convolutions to learn multi-scale representations in the final layer of the encoder. ResUNet-a [42] combines multiple useful modules and strategies, integrating hierarchically contextual dependencies with skip connections. However, the multi-scale feature extraction and fusion strategy in these approaches still offers room for improvement, and their computational complexity has been criticized.

Recent research has explored attention mechanisms (AMs) to creatively model long-range dependencies using matrix computations [43]. The computed attention map can be flexibly integrated into raw features, enhancing the distinguishability of easily confused objects.

Empirically, AMs function as selectors that enable the network to focus on essential words, regions, or relationships. The original application of AMs was in machine translation tasks [44], where they established global dependencies between input and output features. In the context of semantic segmentation tasks, AMs perform a similar role in emphasizing significant components. A fundamental development in this domain is SENet [27], which highlights channels of feature maps with higher weights after learning global pooling features. Extending the channel-wise SE operator to the spatial domain, CBAM [45] underscores meaningful sub-regions. On the other hand, the non-local neural network (NLNet) [28] concurrently learns multiple dimensions’ contexts for visual tasks. The self-attention concept also inspired OCNet [46], which extracts object-level contextual information to aid in object recognition. CCNet [47] recurrently stacks two crisscross AMs. Another notable attention-based network, DANet [29], was introduced in 2019. This approach enables AMs to adaptively couple position-wise dependencies and emphasizes intra-dependencies between channels. The parallel outputs are then summed up for the decoder stage. Although DANet remarkably captures and incorporates informative context, it introduces significant computational complexity.

Recently, many semantic segmentation networks for RSIs with AMs have been developed [48, 49]. For instance, Panboonyuen et al. [50] employed a channel attention block to enhance RSI segmentation accuracy and constructed a transferable learning model. Additionally, three representative methods are elaborated. The first one is SCAttNet [51], designed to learn an attention map to adaptively aggregate contextual information for each point in RSIs. Alongside local context analysis, LANet [52] bridges the gap between high-level and low-level features, and the designed patch AMs optimize the representations compatibly. Similarly, [53] introduced a hybrid multiple attention network for aerial image semantic segmentation, enabling the network to adaptively learn spatial, channel, and class correlations to enhance the distinguishability of learned representations. Concerning the imbalanced distributions of RSIs, Zhou et al. [12] presented a novel dynamic weighting method based on effective sample calculation for semantic segmentation in remote sensing, significantly improving minimal-class accuracy and recall in imbalanced datasets, as demonstrated in diverse applications like forest fire area segmentation and land-cover semantic segmentation using the Landsat8-OLI and LoveDA datasets. Likewise, Li et al. [54] proposed a novel SSCNet that integrates spectral and spatial information using a joint spectral–spatial attention module (JSSA), significantly enhancing semantic segmentation in RSIs, as demonstrated by superior performance on ISPRS Potsdam and LoveDA datasets and validated through comprehensive ablation studies.

In contrast to existing networks, our primary objective is to reduce computational complexity while maintaining segmentation accuracy. Building on the concept of using complementary context, DANet has proven to be a simple yet effective approach. Motivated by the resampling of spatial feature anchors and the aggregation of feature channels, we propose two lightweight AMs to learn contextual affinity and enhance the distinguishability of learned representations. This significantly reduces the number of matrix multiplications while effectively learning contextual information. In the remainder of this section, we introduce and analyze the DANet pipeline, including the position attention module (PAM) and channel attention module (CAM).

2.2 Revisiting dual attention modules

DANet [29] regards the various scales of stuff and objects, occlusions, and illumination changes as direct interferences that limit discriminability. Hence, PAM and CAM are designed to refine representations by enriching contextual information.

Fig 1 presents the pipeline of PAM. Given the input feature $F \in \mathbb{R}^{C \times H \times W}$, three convolved features $F_q, F_k, F_v \in \mathbb{R}^{C \times H \times W}$ are generated, keeping the same size as the input. In the top branch, the feature $F_q$ is reshaped and transposed, producing $F_q' \in \mathbb{R}^{N \times C}$, where N = H × W denotes the number of pixels, and $F_k$ is likewise flattened to $F_k' \in \mathbb{R}^{C \times N}$. By matrix multiplication followed by a softmax layer, the position attention map $A_p \in \mathbb{R}^{N \times N}$ is calculated as
$$A_p(i, j) = \frac{\exp\left(F_q'(i) \cdot F_k'(j)\right)}{\sum_{i=1}^{N} \exp\left(F_q'(i) \cdot F_k'(j)\right)}, \tag{1}$$
where $A_p(i, j)$ measures the inter-impacts between pixels. Essentially, a pair of pixels contributes more to the representations if their similarity is higher. Then the last two branches are used to produce position-wise attention-injected features. Formally,
$$F_p = \mu \sum_{i=1}^{N} A_p(i, j)\, F_v(i) + F, \tag{2}$$
where μ is a learnable coefficient which is initialized as 0.
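For concreteness, the following is a minimal PyTorch sketch of the PAM pipeline in Eqs (1) and (2); the module and layer names (`PAM`, `query`, `key`, `value`) are illustrative assumptions rather than DANet's released implementation.

```python
# A minimal PyTorch sketch of PAM as described by Eqs (1)-(2).
import torch
import torch.nn as nn

class PAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # three 1x1 convolutions produce the query/key/value features
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.mu = nn.Parameter(torch.zeros(1))  # learnable coefficient, initialized to 0

    def forward(self, f):
        b, c, h, w = f.shape
        n = h * w
        q = self.query(f).view(b, c, n).permute(0, 2, 1)  # (B, N, C), reshaped and transposed
        k = self.key(f).view(b, c, n)                     # (B, C, N), flattened
        attn = torch.softmax(torch.bmm(q, k), dim=-1)     # (B, N, N) position attention map, Eq (1)
        v = self.value(f).view(b, c, n)                   # (B, C, N)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.mu * out + f                          # residual injection, Eq (2)
```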

CAM has a similar multi-branch structure. The input feature $F \in \mathbb{R}^{C \times H \times W}$ is initially fed into the calculation of the attention map, as shown in Fig 2. The reshaped and transposed $F' \in \mathbb{R}^{N \times C}$ is left-multiplied with the reshaped $F \in \mathbb{R}^{C \times N}$, where N = H × W. Therefore, the channel attention map $A_c \in \mathbb{R}^{C \times C}$ is formed as
$$A_c(i, j) = \frac{\exp\left(F(i) \cdot F'(j)\right)}{\sum_{i=1}^{C} \exp\left(F(i) \cdot F'(j)\right)}, \tag{3}$$
where $A_c(i, j)$ accumulates the channel-wise inter-impacts. Then the output feature maps can be refined by
$$F_c = \gamma \sum_{i=1}^{C} A_c(i, j)\, F(i) + F, \tag{4}$$
where γ is a scale parameter. The element-wise summation of the channel-attentive features and the raw input features yields the enhanced representations with channel-wise relationships injected.
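Likewise, a compact sketch of CAM following Eqs (3) and (4); as in the description above, no convolution is applied before the channel affinity is computed, and the class name here is our own.

```python
# A minimal PyTorch sketch of CAM as described by Eqs (3)-(4).
import torch
import torch.nn as nn

class CAM(nn.Module):
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))  # scale parameter gamma

    def forward(self, f):
        b, c, h, w = f.shape
        flat = f.view(b, c, -1)                            # (B, C, N)
        affinity = torch.bmm(flat, flat.permute(0, 2, 1))  # (B, C, C) channel affinities
        attn = torch.softmax(affinity, dim=-1)             # channel attention map, Eq (3)
        out = torch.bmm(attn, flat).view(b, c, h, w)       # inject channel inter-impacts
        return self.gamma * out + f                        # Eq (4)
```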

In summary, DANet preferably captures and leverages the spatial and channel inter-dependencies, considering all positions and channels. Although the representations are refined significantly, a mass of computation is required to calculate the attention maps. Inherently, not all positions or channels are indispensable for boosting the network's discernibility. Moreover, the sparsity and imbalanced distribution of RSIs are exceptionally salient, making DANet inefficient.

3 The proposed method

3.1 Framework

As illustrated in Fig 3, SLMFNet strives to refine learned representations for RSI semantic segmentation. Although attention mechanisms have revealed a strong capability in modeling contextual information in the spatial and channel domains, redundant computations follow from the stereotype of handling all coupled pixels and channels equally. Our investigation shows that resampling informative and representative feature anchors and channels enables the AMs to collect satisfactory contextual cues. Therefore, SPAM and SCAM are designed and embedded at the end of the feature encoder. Subsequently, MLFD is formed to recover the spatial resolution losslessly, ensuring the reconstructed representations are undistorted.

3.2 Selective position attention module

As discussed above, the original PAM models the dependencies of all pairwise pixels. Moreover, the correlations are implemented by vectorial inner products, which occupy huge GPU memory and cost much more time when the number of pixels is massive. In our analysis, not all pairwise relationships contribute meaningfully to the completeness and uniqueness of a specific pixel's representation. Concerning the sparsity and variety of the objects covered in RSIs, resampling representative positions as feature anchors, which are then used to model the position-wise spatial attention map, paves an artful way to inject sufficient spatial dependencies for feature optimization. The pipeline is designed as shown in Fig 4.

Concretely, given the input feature map $F \in \mathbb{R}^{C \times H \times W}$, where C, H, W indicate the number of channels, height, and width, three parallel branches are presented, of which the middle branch is the key. After a 1 × 1 convolution layer, SPP at multiple scales is applied. The elements in the pooled features are viewed as feature anchors, also known as gathering centers. Then the pooled features are concatenated and flattened to vectors $P \in \mathbb{R}^{C' \times L}$, where C′ denotes the number of channels of the rebuilt feature maps, and L is the total number of feature anchors over all sizes of pooled features. Two fully connected layers are then used to build two independent representations of the feature anchors, $P_1 \in \mathbb{R}^{C' \times L}$ and $P_2 \in \mathbb{R}^{C' \times L}$. As is common, the position attention map is obtained by matrix multiplication and a softmax layer,
$$A_p(j, i) = \frac{\exp\left(F_1(j) \cdot P_1(i)\right)}{\sum_{i=1}^{L} \exp\left(F_1(j) \cdot P_1(i)\right)}, \tag{5}$$
where $A_p(j, i)$ measures the correlation between the j-th pixel and the i-th anchor, and $F_1 \in \mathbb{R}^{N \times C'}$, marked as the top branch in Fig 4, is the reshaped and transposed feature after a convolution with kernel size 1 × 1. Note that F1 keeps the same number of channels as P1. Afterward, the attention map $A_p \in \mathbb{R}^{N \times L}$ is transposed to $A_p^{\top} \in \mathbb{R}^{L \times N}$. Thereby, P3 is formed as
$$P_3 = P_2 A_p^{\top}, \tag{6}$$
where $P_2$ denotes the representation of the feature anchors and $A_p^{\top}$ derives from the attention map. Finally, the reshaped $P_3$ is element-wise summed with the input feature map F to output the refined representations $F_p$. The calculation can be expressed as follows,
$$F_p = \mu P_3 + F, \tag{7}$$
where μ is a learnable coefficient (initially set to 0.5) and $F_p \in \mathbb{R}^{C \times H \times W}$ is the refined representation of SPAM.
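To make the data flow concrete, here is a hedged PyTorch sketch of SPAM as we read Eqs (5)–(7); the reduced channel width C′, the pooling sizes, and the output 1 × 1 projection (needed to match the input channels before the residual sum) are our assumptions, not the authors' exact configuration.

```python
# A hedged PyTorch sketch of SPAM following Eqs (5)-(7).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPAM(nn.Module):
    def __init__(self, channels, reduced=64, pool_sizes=(1, 2, 3, 6)):
        super().__init__()
        self.pool_sizes = pool_sizes                                  # L = 1 + 4 + 9 + 36 = 50 anchors
        self.conv_mid = nn.Conv2d(channels, reduced, kernel_size=1)  # middle branch
        self.conv_top = nn.Conv2d(channels, reduced, kernel_size=1)  # top branch -> F1
        self.fc1 = nn.Linear(reduced, reduced)                       # anchor representation P1
        self.fc2 = nn.Linear(reduced, reduced)                       # anchor representation P2
        self.proj = nn.Conv2d(reduced, channels, kernel_size=1)      # back to C channels
        self.mu = nn.Parameter(torch.full((1,), 0.5))                # initialized to 0.5

    def forward(self, x):
        b, c, h, w = x.shape
        mid = self.conv_mid(x)
        # SPP: pool at each scale, flatten, and concatenate the feature anchors
        anchors = torch.cat(
            [F.adaptive_avg_pool2d(mid, s).view(b, -1, s * s) for s in self.pool_sizes],
            dim=2).permute(0, 2, 1)                                  # (B, L, C')
        p1, p2 = self.fc1(anchors), self.fc2(anchors)                # (B, L, C') each
        f1 = self.conv_top(x).view(b, -1, h * w).permute(0, 2, 1)    # (B, N, C')
        attn = torch.softmax(torch.bmm(f1, p1.permute(0, 2, 1)), dim=2)  # (B, N, L), Eq (5)
        p3 = torch.bmm(p2.permute(0, 2, 1), attn.permute(0, 2, 1))   # (B, C', N), Eq (6)
        out = self.proj(p3.view(b, -1, h, w))
        return self.mu * out + x                                     # Eq (7)
```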

To evaluate the complexity, we quantify the magnitude of computational cost. For PAM, the computational complexity is
$$O_{\mathrm{PAM}} = O\!\left(C' N^{2}\right), \tag{8}$$
where H, W, and C′ refer to the input feature's spatial size and channel number, and N = H × W represents the flattened tensor dimension. By contrast, SPAM lowers the complexity to $O(C' N L)$, where $L \ll N$. Taking four branches of spatial pyramid pooling with kernel sizes of 1 × 1, 2 × 2, 3 × 3, and 6 × 6, fifty spatial anchors are resampled (1 + 4 + 9 + 36 = 50). Given an input of H × W = 256 × 256, the complexity ratio between PAM and SPAM is
$$\frac{O_{\mathrm{PAM}}}{O_{\mathrm{SPAM}}} = \frac{N}{L} = \frac{256 \times 256}{50} \approx 1311. \tag{9}$$

Consequently, SPAM saves roughly a factor of 1311 in matrix multiplications.

Unlike the original PAM, SPAM first resamples feature anchors, gathering neighboring pixels by spatial pyramid pooling at various scales. Subsequently, the position attention map is calculated between every pixel and the feature anchors instead of between pairwise pixels, and the dependencies along the spatial domain are injected into the raw representations.

3.3 Selective channel attention module

As for CAM, the dependencies stem from channel-wise correlations. In practice, existing CAMs calculate the correlations between all channels, neglecting the informativeness of specific channels, and a computational burden follows. As illustrated in Fig 5, SCAM is devised to initially shrink the number of feature channels. In our view, the aggregated channels are representative and informative, and thus capable of providing satisfactory feature clues. The channel-wise dependencies are then collected between each channel and the shrunk ones.

Given an input feature $F \in \mathbb{R}^{C \times H \times W}$, where C, H, W indicate the number of channels, height, and width, a 1 × 1 convolution layer is first applied to F in the middle branch. The convolved features are then reshaped to generate $F_{s1} \in \mathbb{R}^{S \times N}$ and $F_{s2} \in \mathbb{R}^{S \times N}$, where S indicates the shrunk number of channels and $S \ll C$. Therefore, the selective channel attention map is computed between $F_{s1}$ and $C_1 \in \mathbb{R}^{N \times C}$ (the reshaped and transposed raw feature). Formally,
$$A_c(i, j) = \frac{\exp\left(F_{s1}(i) \cdot C_1(j)\right)}{\sum_{i=1}^{S} \exp\left(F_{s1}(i) \cdot C_1(j)\right)}, \tag{10}$$
where $F_{s1}(i)$ denotes the i-th representative channel of $F_{s1}$ and $C_1(j)$ denotes the j-th channel of the raw feature $C_1$. $A_c(i, j)$ quantifies the channel-pair impacts and constitutes the selective channel attention map. In essence, the boosted channel-wise representation is injected with the correlations to the resampled representative and informative channels. After capturing the channel attention map, the output is naturally formed as
$$F_c = \gamma\left(A_c^{\top} F_{s2}\right) + F, \tag{11}$$
where γ is a learnable coefficient (initially set to 0) and $F_c \in \mathbb{R}^{C \times H \times W}$ is the output of SCAM.
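A corresponding hedged sketch of SCAM (Eqs 10 and 11) follows; the shrunk channel count S is a placeholder value, and the two 1 × 1 convolutions realizing $F_{s1}$ and $F_{s2}$ are our reading of the middle branch.

```python
# A hedged PyTorch sketch of SCAM following Eqs (10)-(11).
import torch
import torch.nn as nn

class SCAM(nn.Module):
    def __init__(self, channels, shrunk=8):
        super().__init__()
        self.conv_s1 = nn.Conv2d(channels, shrunk, kernel_size=1)
        self.conv_s2 = nn.Conv2d(channels, shrunk, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # initialized to 0

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        fs1 = self.conv_s1(x).view(b, -1, n)               # (B, S, N)
        fs2 = self.conv_s2(x).view(b, -1, n)               # (B, S, N)
        raw = x.view(b, c, n).permute(0, 2, 1)             # (B, N, C), transposed raw feature C1
        attn = torch.softmax(torch.bmm(fs1, raw), dim=1)   # (B, S, C) attention map, Eq (10)
        out = torch.bmm(attn.permute(0, 2, 1), fs2)        # (B, C, N), Eq (11)
        return self.gamma * out.view(b, c, h, w) + x       # skip connection
```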

To evaluate the complexity, we quantify the magnitude of computational cost by counting matrix multiplications. By design, the shrunk channel number S is far smaller than the original number C. The ratio of the complexity comparison is perceived as
$$\frac{O_{\mathrm{CAM}}}{O_{\mathrm{SCAM}}} = \frac{O\!\left(C^{2} N\right)}{O\!\left(SCN\right)} = \frac{C}{S}. \tag{12}$$

As analyzed above, reducing the number of channels reduces the number of matrix multiplications, allowing the network to convey the intended information more efficiently.

Note that convolution layers are not embedded at the very beginning, so as to maintain the channel-wise relationships. Following the theoretical derivation, SCAM requires fewer computations to capture sufficient channel-wise dependencies. A simple yet practicable skip connection is then deployed to fuse the modeled channel correlations.

3.4 Multi-level feature fusion with data-dependent upsampling

On the question of recovering the learned features to the original spatial size, a sophisticated fusion strategy is needed for aggregating multi-level features, which positively affects the segmentation results. Two feature sources, the encoded features from the encoder phases and the decoded features from the decoder phases, contribute to enhancing distinguishability and preserving geo-objects' details during the gradually expanding procedure. However, the commonly used bilinear upsampling inevitably causes information loss. Hence, we propose MLFD, replacing bilinear upsampling with DUpsampling. Besides, the encoded features are fused with the decoded ones via skip connections.

As shown in Fig 6, the flows are presented. The input attentive feature maps derive from the output of the SAMs, marked in the red dotted box in Fig 3. Here, this input is denoted as $F_d(i)$, where i represents the order of the encoder block; in our network, i is a positive integer and pre-defined as 5. We re-define the original spatial size of the image as H × W (channels are irrespective). Then, the ratio of the spatial size of features at different scales to the original spatial size can be formed as
$$r_i = \frac{h_i \times w_i}{H \times W}, \tag{13}$$
where $h_i \times w_i$ represents the spatial size of $F_d(i)$, and H × W is the spatial size of the raw image. Therefore, a fusion process of neighboring features can be represented as
$$F_d(i-1) = F_e(i-1) \oplus f_d\!\left(F_d(i)\right), \tag{14}$$
where $F_d(i-1)$ is the fused feature of the two neighboring phases, $F_e(i-1)$ is the encoded feature from the corresponding encoder phase, ⊕ denotes element-wise summation, and $f_d(\cdot)$ represents data-dependent upsampling. For the sake of understanding, a brief explanation of DU is presented. [31] revealed that the ground-truth mask preserves enough mutually dependent structural information and can be compressed at an arbitrary ratio losslessly. As a consequence, they developed DU on the rationale of matrix projection. Explicitly, a learnable transformation matrix is devised as the projection coefficient for changing the spatial size, and the matrix is continually tuned by minimizing the following loss function,
$$\mathcal{L} = \left\| g_{i-1} - M_c\!\left(F_d(i)\right) \right\|^{2}, \tag{15}$$
where $M_c$ is the transformation matrix, $g_{i-1}$ represents the compressed ground-truth mask with the same spatial size as $F_d(i-1)$, and $M_c(F_d(i))$ denotes the data-dependent upsampling of $F_d(i)$.

Finally, the gradually recovered feature maps, also known as fused feature maps in Fig 6, reach the spatial size of H × W. Specifically, the DUpsampling procedure is seamlessly embedded into the decoder as a 1 × 1 convolution, without heavy computations.
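As a rough illustration, DUpsampling can be realized as a 1 × 1 convolution (the trainable projection matrix of [31]) that expands channels by a factor of r², followed by a pixel rearrangement that grows the spatial size by r; the channel numbers and the fusion helper below are illustrative assumptions, not the authors' exact decoder.

```python
# A hedged sketch of DUpsampling and one MLFD fusion step (Eq 14).
import torch
import torch.nn as nn

class DUpsampling(nn.Module):
    def __init__(self, in_channels, out_channels, ratio):
        super().__init__()
        # the trainable projection matrix, realized as a 1x1 convolution
        self.proj = nn.Conv2d(in_channels, out_channels * ratio * ratio, kernel_size=1)
        self.shuffle = nn.PixelShuffle(ratio)  # (C*r*r, H, W) -> (C, H*r, W*r)

    def forward(self, x):
        return self.shuffle(self.proj(x))

def fuse(f_deep, f_skip, du):
    """One MLFD fusion step (Eq 14): DUpsample the deeper decoder feature and
    element-wise sum it with the encoder feature of the neighboring stage."""
    return f_skip + du(f_deep)
```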

4 Experiments and discussions

4.1 Datasets

To evaluate the proposed method, we conduct the experiments on two benchmarks that derive from different platforms with different properties.

4.1.1 ISPRS Potsdam benchmark dataset.

The ISPRS Potsdam benchmark dataset was acquired from an aerial platform, with coverage of Potsdam, Germany. The associated ground truth masks are labeled with six land cover categories: impervious surfaces, buildings, low vegetation, trees, cars, and clutter. There are 24 images available with a spatial size of 6000 × 6000 pixels. The spatial resolution is about 5 cm. One sample is illustrated in Fig 7.

4.1.2 DeepGlobe land cover classification dataset.

The DeepGlobe land cover classification dataset offers sub-meter satellite imagery concerning three cities, Vegas, Potsdam, and Paris. Compared with aerial imagery, satellite imagery suffers more from the diversity and variety of land cover. There are 1146 images with a spatial size of 2448 × 2448 pixels and a spatial resolution of 0.5 m. The paired masks are annotated with seven categories: urban land, agriculture land, rangeland, forest land, water, barren land, and unknown. One sample is illustrated in Fig 8.

4.2 Implementation details

The proposed SLMFNet is implemented under the PyTorch framework. All models were run on a single Nvidia A40 GPU with 40GB of memory under a Linux OS. Table 1 lists the hyper-parameters. The backbone of the encoder is ResNet-101, marked in the green dotted box in Fig 3. In this study, we did not employ pretraining on additional data sources for ResNet-101. The settings are the same for both the Potsdam and DeepGlobe datasets.

The available data are split into sub-patches with a spatial size of 256 × 256. The data properties and partitions are presented in Table 2. The data partitions satisfy a ratio of 8:1:1, and the same data augmentation strategies are applied.
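A minimal sketch of the patching step is given below; since the overlap between neighboring sub-patches is not specified in the text, a non-overlapping stride equal to the patch size is assumed here.

```python
# Splitting a large tile into 256x256 sub-patches (non-overlapping stride assumed).
import numpy as np

def split_into_patches(image: np.ndarray, patch: int = 256):
    """Split an (H, W, C) image into a list of patch x patch sub-images."""
    h, w = image.shape[:2]
    return [image[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]
```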

Finally, the comparative methods are listed in Table 3. FCN, SegNet, U-Net, and DeepLab V3+ are pioneering segmentation networks. Attention-based networks are also involved, including NLNet and OCNet. For RSI segmentation, ResUNet-a, SCAttNet, HMANet, and LANet are reproduced. Besides, DANet, the prototype of our SLMFNet, is compared.

4.3 Numerical metrics

The mean intersection over union (mIoU) and overall accuracy (OA) are adopted as the evaluation metrics. The formulas are as follows,
$$\mathrm{IoU} = \frac{TP}{TP + FP + FN}, \tag{16}$$
$$\mathrm{OA} = \frac{TP + TN}{TP + FP + FN + TN}, \tag{17}$$
where TP, FP, FN, and TN represent the number of true positives, false positives, false negatives, and true negatives, respectively; mIoU averages the per-class IoU over all categories.
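The metrics can be computed from a class confusion matrix as in the sketch below, which follows the per-class TP/FP/FN definitions above; the helper name and the numerical-stability epsilon are our own.

```python
# Computing mIoU and OA from a confusion matrix, per Eqs (16)-(17).
import numpy as np

def segmentation_metrics(conf: np.ndarray):
    """conf[i, j] counts pixels of true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp            # predicted as the class but wrong
    fn = conf.sum(axis=1) - tp            # missed pixels of the class
    iou = tp / (tp + fp + fn + 1e-12)     # per-class IoU, Eq (16)
    oa = tp.sum() / conf.sum()            # overall accuracy, Eq (17)
    return iou.mean(), oa                 # mIoU and OA
```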

4.4 Comparison with state-of-the-art methods

4.4.1 Results of Potsdam dataset.

Since contextual information and multi-level feature fusion are important, SLMFNet is proposed to flexibly capture spatial and channel context and losslessly fuse multi-level features. This section collects the accuracy of the ISPRS Potsdam test set, experimentally evaluating SLMFNet.

Table 4 reports the results on the Potsdam test set, including class-wise and overall performance. First, the class-wise IoU and pixel accuracy are counted; then the mIoU and OA are calculated. In general, the results show that SLMFNet outperforms the comparative models by a remarkable margin. As discussed previously, the SAMs boost efficiency while remaining robust in providing context-enhanced features. Moreover, the MLFD decoder is devised to improve the learned representations for inference during decoding. Therefore, SLMFNet obtains considerable mIoU and OA, scoring more than 65% and 89%, respectively.

Table 4. The results of the Potsdam test set with class-wise performance in form of IoU/OA and overall performance with mIoU and OA, where bold indicates the best.

https://doi.org/10.1371/journal.pone.0301134.t004

DeepLab V3+ demonstrates a notable improvement, achieving an increase of approximately 3% in OA compared to U-Net by enlarging the local receptive field. This initial enhancement suggests the benefits of leveraging contextual information. Subsequently, attention-based models further advance this concept by enabling the network to capture long-range and short-range contextual information comprehensively. NLNet, as the pioneering self-attention model in computer vision, showcases a remarkable advancement, outperforming DeepLab V3+ by nearly 6% in OA. This substantial improvement highlights the effectiveness of self-attention mechanisms.

Following this trend, OCNet takes a step further by incorporating contextual object dependencies in addition to pixel-wise correlations, resulting in a modest accuracy boost. Similarly, DANet achieves accuracy levels on par with NLNet and OCNet, signifying its competitiveness. However, it is essential to note that specialized RSI-specific networks exhibit significant potential when considering the intricate and diverse properties of RSIs. This inherent complexity contributes to the outstanding accuracy achieved by models such as ResUNet-a, SCAttNet, HMANet, and LANet.

Moreover, variants of attention mechanisms, including SCAttNet, HMANet, and LANet, demonstrate superiority over the hybrid ResUNet-a. Specifically, HMANet stands out, delivering a performance increase of over 4% through the integration of multiple attention mechanisms. SLMFNet also leverages attention mechanisms, enabling the attentive learning of spatial and channel dependencies. This trait not only ensures performance at least on par with SCAttNet and HMANet but also enriches contextual cues. Furthermore, with the incorporation of MLFD, SLMFNet surpasses the 90% and 91% accuracy thresholds for the recognition of easy-to-distinguish and frequently occurring pixels.

Among all categories, SLMFNet achieves the highest accuracy in five out of six scenarios. For the segmentation of challenging objects like low vegetation and trees, SCAttNet and HMANet achieve accuracies of 89% and over 89%, respectively. In contrast, thanks to the fusion of multi-level features, SLMFNet consistently surpasses 90% and 91% accuracy levels for these categories. Similar trends are observed for objects like cars, which are often characterized by indistinct boundaries.

In addition to numerical indicators, visualization is another way to evaluate the performance appreciably. Fig 9 presents two random samples from the Potsdam test set. Closer inspection of the figure shows that SLMFNet segments the raw image with the highest consistency with the ground truth mask. Whether a pixel belongs to stuff or objects, the multi-attentive representations and the multi-level feature fusion allow it to be classified correctly. As a result, the dense predictions are superior overall. As clearly presented, conventional models such as FCN-8s, SegNet, and U-Net are susceptible to interference from intra-class variance and inter-class similarity. For example, some road parts are labeled as buildings, and the edges between low vegetation and trees are blurry. SLMFNet consolidates the discriminative and distinguishable representations by losslessly fusing multi-level features. Eventually, the uniformity and coherence of our results are the best.

Fig 9. Visual inspections of random samples from the Potsdam test set.

https://doi.org/10.1371/journal.pone.0301134.g009

In summary, SLMFNet takes full account of the imaging properties and visual characteristics of aerial imagery. The experimental results provide compelling evidence that selectively learning contextual affinity and then losslessly fusing multi-level features can significantly boost segmentation performance.

4.4.2 Results of DeepGlobe dataset.

Unlike aerial images, satellite imagery has a lower spatial resolution, covers a broader observation range, contains more diverse stuff and entities, and involves more complex imaging conditions. Hence, it is more challenging to produce pixel-level semantic masks of satellite imagery. Aiming to assess generalizability and stability, further experiments are conducted on the DeepGlobe dataset.

The results reported in Table 5 manifest an analogous tendency. Though the data properties change, the visual essence of stuff and objects is consistent. Predominantly, SLMFNet attains the highest scores except for the class-wise accuracy of forest land. This minor fluctuation is legitimate due to the random initialization of the network. Generally speaking, attention-based networks perform better than fundamental ones, and RSI-specific models provide finer labels, especially the models that design and integrate AMs.

Table 5. The results of the DeepGlobe test set with class-wise performance in form of IoU/OA and overall performance with mIoU and OA, where bold indicates the best.

https://doi.org/10.1371/journal.pone.0301134.t005

As reported by the measurable indicators, all the RSI-specific networks achieve dense predictions with more than 50% mIoU and 80% OA, while the fundamental networks drop below 50% and 76%. The land cover types covered by satellite sensors are comparatively coarse in spatial detail, a stark contrast to aerial images. Therefore, the utilization of informative context is essential for enhancing geo-objects' details, making the pixels easily identifiable.

Aside from numerical indicators, the visualizations in Fig 10 explicitly compare the models. Forest land comprises densely vegetated areas with diverse tree species, encompassing regions with abundant vegetation, including forests, woodlands, and jungles; notable features include variations in vegetation color, texture, and density. Barren land denotes regions with minimal or absent vegetation, typically in arid, desert, or rocky terrains; prominent features include the absence of visible vegetation, the prevalence of sandy or rocky textures, and a uniform appearance. Bodies of water, including lakes, rivers, ponds, and oceans, are characterized by their distinct blue coloration and the presence of ripples or reflections, depending on surface conditions. Due to this category diversity, the network-learned features should adapt to various ground objects, and simultaneously acquiring spatial and channel contextual affinities enhances the distinguishability of the representations. The raw images show that the objects and stuff are visually ambiguous, so segmentation is relatively difficult. For water areas, the models differ strikingly. RSI-specific methods perform well in delineation, among which the incorporation of AMs is profitable. Together with multi-level feature fusion, SLMFNet can segment stuff and objects with high certainty.

Fig 10. Visual inspections of random samples from the DeepGlobe test set.

https://doi.org/10.1371/journal.pone.0301134.g010

In conclusion, the numerical metrics and visual inspections confirm that SLMFNet succeeds in segmenting satellite imagery. SLMFNet enables the network to surmount the salient intra-class variance and inter-class similarity with informative context and multi-level representations.

4.5 Effects of selective attention modules

SAMs are devised to boost efficiency in capturing spatial and channel dependencies. To make a fair comparison, the MLFD is removed; instead, we adopt the standard symmetrical decoder to correspond with DANet, and we denote this version as SLMFNet v1. Intuitively, SLMFNet v1 employs the same decoder as DANet; the only difference is the AM. The following sub-sections discuss the variation trends of efficiency and accuracy.

4.5.1 Accuracy.

We first collect the mIoU and OA on the test sets. As shown in Table 6, SLMFNet v1 performs better than DANet on the DeepGlobe dataset while slightly worse on Potsdam. To the best of our knowledge, the Potsdam dataset consists of aerial imagery, which is visually more compact than DeepGlobe. Therefore, resampling spatial anchors causes certain losses, making the produced contextual affinity incomplete. Consequently, SLMFNet v1 drops the OA and mIoU by about 0.13% and 0.9%, which is practically negligible. In contrast, the DeepGlobe dataset is particularly sparse, so the resampled anchors are sufficient for capturing and injecting arbitrary-range correlations without inducing irrelevant information. In response, SLMFNet v1 increases the mIoU and OA by about 3.3% and 2.6%.

Table 6. Accuracy comparisons in form of mIoU/OA on test set.

https://doi.org/10.1371/journal.pone.0301134.t006

Secondly, we monitor the training loss and training mIoU. Figs 11 and 12 plot the change of training loss and mIoU. Visibly, SLMFNet v1 is in line with DANet. As for the training loss, at the 500th epoch DANet drops the loss to 0.0332 whilst SLMFNet v1 decreases to 0.0386. The mIoU on the training set shows the same characteristics: SLMFNet v1 eventually reaches 95.55%, while DANet is slightly higher at 95.84%. Unquestionably, the results are not significantly degraded. Figs 13 and 14 show the trends on DeepGlobe. Due to the highly sparse distribution, the selective AMs avoid importing irrelevant contextual information. As can be seen, SLMFNet v1 yields lower loss and higher mIoU than DANet; the mIoU rises from 90.80% to 94.66%.

Thus, we conclude that selectively learning contextual affinity hardly causes performance degradation. Specifically, for the RSI with high sparsity, SAMs work better by capturing and injecting dependable and closely associated affinity.

4.5.2 Efficiency.

In this part, the time cost is compared to evaluate efficiency. All models were run on a single Nvidia A40 GPU with 40GB of memory. The training time per epoch and the test time per sub-patch with a spatial size of 256 × 256 are collected in Table 7. The training time is averaged over 500 epochs, and the test time of a single image is averaged over the whole test set.

By reducing the matrix manipulations, the SAMs let the network spend less training and test time. For the Potsdam dataset, SLMFNet v1 costs 203 s per epoch on average, while DANet needs about 262 s. When inferring a single image of the same spatial size, SLMFNet v1 saves about 4.4 ms.

The time cost on DeepGlobe is comparatively higher because of the larger data volume. However, compared to DANet, the time cost per epoch is cut down by more than 200 seconds on average. Concerning the inference of a single image, SLMFNet v1 is faster by more than 5 ms.

Furthermore, we compare the parameter size and FLOPs. As depicted in Table 8, the SAM design results in a reduction of approximately 24% in parameters. In terms of FLOPs, SLMFNet v1 also demonstrates competitive performance compared to DANet. The reduction of redundant computations leads to a significant improvement in both parameter size and FLOPs.

Overall, the experimental results thoroughly corroborate the superiority of SAMs in boosting efficiency.

4.6 Effects of multi-level feature fusion with data-dependent upsampling

In this subsection, two counterparts are designed to validate the effects of MLFD. The first uses one-step bilinear upsampling with a ratio of 16 (corresponding to Fd(i)), termed SLMFNet with OneUP. The second replaces DUpsampling with bilinear upsampling while keeping the pipeline of Fig 6, termed SLMFNet with MultiUP.

As reported in Table 9, the comparative analysis of the SLMFNet model’s performance with its OneUP and MultiUP variants on the Potsdam and DeepGlobe datasets reveals critical insights into the efficacy of semantic segmentation techniques. The standard SLMFNet consistently outperforms its variants, indicating an inherent robustness in its architectural design. This superiority is particularly pronounced in the mIoU scores, a metric that signifies the model’s precision in classifying individual pixels into correct semantic categories. For instance, on the Potsdam dataset, the standard SLMFNet achieves a mIoU of 65.53%, compared to 61.38% and 63.60% for its OneUP and MultiUP counterparts, respectively. This trend is similarly observed in the DeepGlobe dataset, where the standard SLMFNet attains a mIoU of 58.75%, significantly higher than the other two variants. The OA metric follows a similar pattern, reinforcing the model’s general effectiveness across diverse geographical and environmental contexts.

Table 9. Results of different decoders in form of mIoU/OA.

https://doi.org/10.1371/journal.pone.0301134.t009

The results from the Potsdam and DeepGlobe datasets highlight the standard SLMFNet’s adaptability and efficiency in dealing with varying complexities inherent in different types of aerial and satellite imagery. The noticeable improvement in both mIoU and OA by the standard SLMFNet suggests that its internal mechanisms are better suited for capturing and differentiating between the nuanced features of diverse land cover categories. This is particularly crucial in remote sensing applications where accuracy in pixel classification directly impacts the practical utility of the segmentation results. The less pronounced performance of the OneUP and MultiUP variants could be attributed to possible limitations in their upscaling or feature integration processes, which may not capture the full spectrum of spatial and contextual information as effectively as the standard SLMFNet. These findings underscore the importance of a model’s internal architecture in determining its segmentation proficiency, especially in complex and varied environments such as those represented by the Potsdam and DeepGlobe datasets. Further research could delve into dissecting the specific architectural elements of the SLMFNet that contribute to its enhanced performance, offering valuable insights for future advancements in the field of semantic segmentation.

5 Conclusions

Semantic segmentation is vital for intelligent interpretation of RSIs in land cover classification. Acquiring contextual information is essential for obtaining distinctive representations. Previous research demonstrates the effectiveness of AMs in capturing contextual dependencies across domains. However, these attention-based models often treat both pixels and channels equally, leading to the introduction of irrelevant and intrusive associations. This is primarily due to the sparse distribution, high intra-class diversity, and inter-class similarity in RSIs, which substantially differ from natural images, ultimately increasing computational complexity.

To address these challenges, we introduce SLMFNet, a novel segmentation network that efficiently refines learned representations and seamlessly fuses multilevel features. We first employ SAMs to learn contextual affinities across spatial and channel domains with fewer matrix operations. Subsequently, a multilevel feature fusion decoder with learnable DUpsampling is designed to gradually merge and recover densely predicted feature maps. SLMFNet is subjected to extensive comparative analysis on two benchmarks, showcasing its competitive and compelling performance in both quantitative and qualitative assessments.

In conclusion, the advancements presented in this study have significant implications for practical applications in a variety of real-world scenarios. The enhanced accuracy and efficiency in semantic segmentation of remote sensing images offered by our approach can be instrumental in fields such as urban planning, environmental monitoring, disaster response, and agricultural management. For instance, in urban development, our method can aid in precise mapping and analysis of land use, supporting sustainable urban planning decisions. In the context of environmental monitoring, it can contribute to more accurate assessments of land cover changes, aiding in the conservation of natural resources. Additionally, in disaster management, the ability to rapidly and accurately segment images can be crucial in assessing damage and guiding rescue operations. In agricultural settings, our approach can assist in monitoring crop health and land conditions, promoting efficient and sustainable agricultural practices. These applications underscore the potential of our research to not only advance the academic understanding of remote sensing image processing but also to contribute tangible benefits to society by informing policy and decision-making processes. By bridging the gap between theoretical research and practical implementation, we aim to pave the way for future innovations that harness the full potential of semantic segmentation in addressing real-world challenges.

Nonetheless, three aspects necessitate further exploration and enhancement. Firstly, the manually set SPP ratios and compressed channel numbers can be adjusted by the network based on learned data, prompting additional research. Secondly, as SLMFNet demands ample well-annotated data for training, the creation of a cost-effective semi-supervised variant holds promise. Moreover, the availability of multi-modal RSIs could be considered to achieve accurate LCC, especially exploring models for few-shot learning paradigms. Looking ahead, we anticipate the development of an effective and accurate land cover classification approach with data adaptability.

References

  1. Wang H., Liu Y., Wang Y., Yao Y., and Wang C., "Land cover change in global drylands: A review," Science of The Total Environment, vol. 863, p. 160943, 2023.
  2. Wang J., Bretz M., Dewan M. A. A., and Delavar M. A., "Machine learning in modelling land-use and land cover-change (LULCC): Current status, challenges and prospects," Science of The Total Environment, vol. 822, p. 153559, 2022.
  3. Duan L. and Hu X., "Multiscale refinement network for water-body segmentation in high-resolution satellite imagery," IEEE Geoscience and Remote Sensing Letters, vol. 17, no. 4, pp. 686–690, 2019.
  4. Yuan K., Zhuang X., Schaefer G., Feng J., Guan L., and Fang H., "Deep-learning-based multispectral satellite image segmentation for water body detection," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 7422–7434, 2021.
  5. Zhang S., Yang P., Xia J., Wang W., Cai W., Chen N., et al., "Land use/land cover prediction and analysis of the middle reaches of the Yangtze River under different scenarios," Science of The Total Environment, vol. 833, p. 155238, 2022.
  6. You J., Liu W., and Lee J., "A DNN-based semantic segmentation for detecting weed and crop," Computers and Electronics in Agriculture, vol. 178, p. 105750, 2020.
  7. Fathololoumi S., Firozjaei M. K., Li H., and Biswas A., "Surface biophysical features fusion in remote sensing for improving land crop/cover classification accuracy," Science of The Total Environment, vol. 838, p. 156520, 2022.
  8. Bressan P. O., Junior J. M., Correa Martins J. A., de Melo M. J., Gonçalves D. N., Freitas D. M., et al., "Semantic segmentation with labeling uncertainty and class imbalance applied to vegetation mapping," International Journal of Applied Earth Observation and Geoinformation, vol. 108, p. 102690, 2022.
  9. Pi Y., Nath N. D., and Behzadan A. H., "Detection and semantic segmentation of disaster damage in UAV footage," Journal of Computing in Civil Engineering, vol. 35, no. 2, p. 04020063, 2021.
  10. Du B., Zhao Z., Hu X., Wu G., Han L., Sun L., et al., "Landslide susceptibility prediction based on image semantic segmentation," Computers & Geosciences, vol. 155, p. 104860, 2021.
  11. Ding H., Jiang X., Shuai B., Liu A. Q., and Wang G., "Semantic segmentation with context encoding and multi-path decoding," IEEE Transactions on Image Processing, vol. 29, pp. 3520–3533, 2020.
  12. Zhou Z., Zheng C., Liu X., Tian Y., Chen X., Chen X., et al., "A dynamic effective class balanced approach for remote sensing imagery semantic segmentation of imbalanced data," Remote Sensing, vol. 15, no. 7, p. 1768, 2023.
  13. Osco L. P., Marcato Junior J., Marques Ramos A. P., de Castro Jorge L. A., Fatholahi S. N., de Andrade Silva J., et al., "A review on deep learning in UAV remote sensing," International Journal of Applied Earth Observation and Geoinformation, vol. 102, p. 102456, 2021.
  14. Zhang L., Zhang L., and Du B., "Deep learning for remote sensing data: A technical tutorial on the state of the art," IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 22–40, 2016.
  15. Krizhevsky A., Sutskever I., and Hinton G. E., "ImageNet classification with deep convolutional neural networks," Communications of the ACM, vol. 60, no. 6, pp. 84–90, 2017.
  16. Simonyan K. and Zisserman A., "Very deep convolutional networks for large-scale image recognition," 3rd International Conference on Learning Representations, ICLR 2015, 2015.
  17. Szegedy C., Liu W., Jia Y., Sermanet P., Reed S., Anguelov D., et al., "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015, pp. 1–9.
  18. He K., Zhang X., Ren S., and Sun J., "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
  19. Gao H., Yang Y., Li C., Gao L., and Zhang B., "Multiscale residual network with mixed depthwise convolution for hyperspectral image classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 4, pp. 3396–3408, 2021.
  20. Gao H., Chen Z., and Xu F., "Adaptive spectral-spatial feature fusion network for hyperspectral image classification using limited training samples," International Journal of Applied Earth Observation and Geoinformation, vol. 107, p. 102687, 2022.
  21. Shelhamer E., Long J., and Darrell T., "Fully convolutional networks for semantic segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640–651, 2016.
  22. Badrinarayanan V., Kendall A., and Cipolla R., "SegNet: A deep convolutional encoder-decoder architecture for image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
  23. Ronneberger O., Fischer P., and Brox T., "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015, pp. 234–241.
  24. Chen L.-C., Papandreou G., Kokkinos I., Murphy K., and Yuille A. L., "Semantic image segmentation with deep convolutional nets and fully connected CRFs," 3rd International Conference on Learning Representations, ICLR 2015, 2015.
  25. Chen L.-C., Papandreou G., Kokkinos I., Murphy K., and Yuille A. L., "DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, 2017.
  26. Chen L.-C., Zhu Y., Papandreou G., Schroff F., and Adam H., "Encoder-decoder with atrous separable convolution for semantic image segmentation," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 801–818.
  27. Hu J., Shen L., and Sun G., "Squeeze-and-excitation networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132–7141.
  28. Wang X., Girshick R., Gupta A., and He K., "Non-local neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7794–7803.
  29. Fu J., Liu J., Tian H., Li Y., Bao Y., Fang Z., et al., "Dual attention network for scene segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3146–3154.
  30. Zhao H., Shi J., Qi X., Wang X., and Jia J., "Pyramid scene parsing network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2881–2890.
  31. Tian Z., He T., Shen C., and Yan Y., "Decoders matter for semantic segmentation: Data-dependent decoding enables flexible feature aggregation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3126–3135.
  32. Demir I., Koperski K., Lindenbaum D., Pang G., Huang J., Basu S., et al., "DeepGlobe 2018: A challenge to parse the earth through satellite images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 172–181.
  33. Li Z., Liu H., Zhang Z., Liu T., and Xiong N. N., "Learning knowledge graph embedding with heterogeneous relation attention networks," IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 8, pp. 3961–3973, 2022.
  34. Liu H., Wang X., Zhang W., Zhang Z., and Li Y.-F., "Infrared head pose estimation with multi-scales feature fusion on the IRHP database for human attention recognition," Neurocomputing, vol. 411, pp. 510–520, 2020.
  35. Liu T., Yang B., Liu H., Ju J., Tang J., Subramanian S., et al., "GMDL: Toward precise head pose estimation via Gaussian mixed distribution learning for students' attention understanding," Infrared Physics and Technology, vol. 122, p. 104099, 2022.
  36. Kemker R., Salvaggio C., and Kanan C., "Algorithms for semantic segmentation of multispectral remote sensing imagery using deep learning," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 145, pp. 60–77, 2018.
  37. Mi L. and Chen Z., "Superpixel-enhanced deep neural forest for remote sensing image semantic segmentation," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 159, pp. 140–152, 2020.
  38. Pan X., Gao L., Zhang B., Yang F., and Liao W., "High-resolution aerial imagery semantic labeling with dense pyramid network," Sensors, vol. 18, no. 11, p. 3774, 2018.
  39. Zhang J., Lin S., Ding L., and Bruzzone L., "Multi-scale context aggregation for semantic segmentation of remote sensing images," Remote Sensing, vol. 12, no. 4, p. 701, 2020.
  40. Wang J., Sun K., Cheng T., Jiang B., Deng C., Zhao Y., et al., "Deep high-resolution representation learning for visual recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 10, pp. 3349–3364, 2021.
  41. Liu Y., Fan B., Wang L., Bai J., Xiang S., and Pan C., "Semantic labeling in very high resolution images via a self-cascaded convolutional neural network," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 145, pp. 78–95, 2018.
  42. Diakogiannis F. I., Waldner F., Caccetta P., and Wu C., "ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 162, pp. 94–114, 2020.
  43. Borji A. and Itti L., "State-of-the-art in visual attention modeling," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 185–207, 2013.
  44. Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A. N., et al., "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, pp. 5998–6008, 2017.
  45. Woo S., Park J., Lee J.-Y., and Kweon I. S., "CBAM: Convolutional block attention module," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3–19.
  46. Yuan Y., Huang L., Guo J., Zhang C., Chen X., and Wang J., "OCNet: Object context for semantic segmentation," International Journal of Computer Vision, vol. 129, no. 8, pp. 2375–2398, 2021.
  47. Huang Z., Wang X., Wei Y., Huang L., Shi H., Liu W., et al., "CCNet: Criss-cross attention for semantic segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
  48. Li X., Xu F., Xia R., Lyu X., Gao H., and Tong Y., "Hybridizing cross-level contextual and attentive representations for remote sensing imagery semantic segmentation," Remote Sensing, vol. 13, no. 15, 2021.
  49. Li X., Li T., Chen Z., Zhang K., and Xia R., "Attentively learning edge distributions for semantic segmentation of remote sensing imagery," Remote Sensing, vol. 14, no. 1, 2022.
  50. Panboonyuen T., Jitkajornwanich K., Lawawirojwong S., Srestasathiern P., and Vateekul P., "Semantic segmentation on remotely sensed images using an enhanced global convolutional network with channel attention and domain specific transfer learning," Remote Sensing, vol. 11, no. 1, p. 83, 2019.
  51. Li H., Qiu K., Chen L., Mei X., Hong L., and Tao C., "SCAttNet: Semantic segmentation network with spatial and channel attention mechanism for high-resolution remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 18, no. 5, pp. 905–909, 2021.
  52. Ding L., Tang H., and Bruzzone L., "LANet: Local attention embedding to improve the semantic segmentation of remote sensing images," IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 1, pp. 426–435, 2021.
  53. Niu R., Sun X., Tian Y., Diao W., Chen K., and Fu K., "Hybrid multiple attention network for semantic segmentation in aerial images," IEEE Transactions on Geoscience and Remote Sensing, vol. 60, p. 3065112, 2022.
  54. Li X., Xu F., Yong X., Chen D., Xia R., Ye B., et al., "SSCNet: A spectrum-space collaborative network for semantic segmentation of remote sensing images," Remote Sensing, vol. 15, no. 23, p. 5610, 2023.