
MTSTR: Multi-task learning for low-resolution scene text recognition via dual attention mechanism and its application in logistics industry

  • Herui Heng ,

    Roles Conceptualization, Methodology, Writing – original draft

    hengherui@stu.shmtu.edu.cn

    Affiliation Institute of Logistics Science and Engineering, Shanghai Maritime University, Shanghai, China

  • Si Li,

    Roles Data curation, Methodology, Project administration, Resources, Writing – review & editing

    Affiliation Institute of Logistics, Yunda Express Co., Ltd, Shanghai, China

  • Peiji Li,

    Roles Formal analysis, Investigation, Resources, Validation, Writing – original draft

    Affiliation Institute of Logistics, Yunda Express Co., Ltd, Shanghai, China

  • Qianfeng Lin,

    Roles Data curation, Investigation, Supervision, Validation

    Affiliation Department of Computer Engineering, Korea Maritime and Ocean University, Busan, South Korea

  • Yufen Chen,

    Roles Conceptualization, Methodology, Resources, Software, Writing – original draft

    Affiliation Institute of Logistics, Yunda Express Co., Ltd, Shanghai, China

  • Lei Zhang

    Roles Data curation, Formal analysis, Investigation, Visualization, Writing – review & editing

    Affiliation School of Automation, Northwestern Polytechnical University, Shanxi, China

Abstract

Recognizing texts in images plays an important role in many applications, such as industrial intelligence, robot vision, automatic driving, command assistance, and scene understanding. Although great progress has been achieved in various fields, research on complex systems modeling using text recognition technology requires further attention. To address this, we propose a new end-to-end multi-task learning method, which includes a super-resolution branch (SRB) and a recognition branch. To effectively learn the semantic information of images, we utilize the feature pyramid network (FPN) to fuse high- and low-level semantic information. The feature map generated by the FPN is then delivered separately to the super-resolution branch and the recognition branch. We introduce a novel super-resolution branch, an SRB based on the proposed dual attention mechanism (DAM), designed to enhance the capability of learning low-resolution text features. The DAM incorporates residual channel attention to enhance channel dependencies and a character attention module to focus on context information. For the recognition branch, the feature map generated by the FPN is fed into an RNN sequence module, and an attention-based decoder is constructed to predict the results. To address the issue of low-resolution text recognition in numerous Chinese scenes, we propose a Chinese super-resolution dataset instead of relying on traditional down-sampling techniques to generate training data. Experiments demonstrate that the proposed method performs robustly on low-resolution text images and achieves competitive results on benchmark datasets.

Introduction

Complex systems have garnered significant attention, and effectively addressing complex system applications through information theory modeling can greatly enhance industrial intelligence. Logistics information is widely acknowledged as a pivotal factor in parcel transportation and essential for customer communication. China’s e-commerce industry has witnessed substantial growth over the past decade, driven by advancements in Internet technology, which has in turn fueled the expansion of the logistics sector. With the nation generating more than 100 million express parcels daily to meet domestic demand, the logistics industry has become a cornerstone of economic development. However, efficiently managing this immense daily parcel volume presents a significant challenge. Beyond automation technologies, Scene Text Recognition (STR) technology has emerged as a crucial solution. STR not only facilitates intelligent sorting in distribution centers but also enables rapid extraction of customer information during the final stage of distribution. By replacing labor-intensive manual processes, STR offers substantial cost savings for businesses contending with rising labor expenses. In distribution centers, STR technology captures destination codes from express parcels, enabling intelligent routing to centralized areas before final delivery. Although text recognition research has been applied in various domains, such as bank slip recognition, shopping receipt recognition, and passport recognition, limited attention has been directed toward the logistics field. This is because delivery sheet images captured by cameras are often bent, distorted, and low in resolution, as exemplified in Fig 1. As shown in Fig 1, the text in the express sheet image is in an extremely low-resolution condition, and some Chinese characters cannot be recognized because of the low resolution of the camera capture.

Fig 1. Cropped delivery sheet images in low-resolution condition.

The key information was detected in blue or red boxes by a text detector.

https://doi.org/10.1371/journal.pone.0294943.g001

Great progress has been made in scene text recognition, but the accuracy of low-resolution text recognition has not reached a satisfactory level due to various influencing factors. Many blurred and low-resolution images exist in natural scenes, and these pose serious challenges to text recognition.

The progress in scene text recognition (STR) has significantly benefited from the remarkable performance of Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). Generally, an STR model consists of four main components: rectification, feature extraction, sequence module, and decoder module. Prior research [1–4] has introduced enhanced models for recognizing curved text images, employing a rectification module to rectify text images and improve recognition capabilities. Furthermore, many STR approaches incorporate a transformer module or a gated recurrent convolution network [5] to address challenges, especially for curved text. Thin-plate splines (TPS) [6, 7], a variation of Spatial Transformer Networks (STN) [8], exhibit effective rectification performance. Reference [9] conducted a study comparing the performance of VGG [10], RCNN [3], and ResNet [11] as feature extraction modules, revealing that deep models can achieve superior results in STR. Concerning the sequence module, references [7, 12] applied bidirectional Long Short-Term Memory (BiLSTM) to enhance sequence modeling. Baek et al. [13] extended this comparison by adding or removing the BiLSTM module in their proposed method, illustrating that the inclusion of BiLSTM enhances text sequence recognition. Finally, the decoder module utilizes Connectionist Temporal Classification [12] as the decoding mechanism to predict the character sequence.

Recent studies [14–16] have made progress in text recognition performance on various benchmark datasets, but they have not explored the domain of low-resolution text recognition. Previous single image super-resolution approaches [17–20] were developed to generate high-resolution features; however, they were trained on low-resolution datasets created through simple down-sampling, which is unsuitable for real low-resolution text recognition. Furthermore, logistics delivery sheets, which contain numerous low-resolution Chinese characters, exhibit various degradations including blurring, missing strokes, sticking strokes, and motion blur. To address these challenges, we propose the Chinese Text Super-Resolution Dataset (CTSD), a paired dataset consisting of low-resolution and high-resolution samples specifically designed for the super-resolution task. Our literature review indicates that no Chinese paired super-resolution datasets have been proposed previously; we introduce ours in the Chinese super-resolution dataset explanation section.

In our study, we present a novel text recognition method and introduce an efficient super-resolution branch aimed at continuous improvement of recognition results. Our key contributions are as follows: (1) We create a new Chinese super-resolution dataset that includes commonly used Chinese characters, numbers, and English letters. The dataset comprises Chinese articles and desensitized Chinese address texts. (2) We propose a multi-task learning approach for scene text recognition, which includes a text recognition branch and a super-resolution branch. The proposed super-resolution branch, incorporating residual super-resolution units, effectively captures rich information from low-resolution features. It is utilized to generate super-resolution text features by contrastive learning between high-resolution and low-resolution features. (3) We design a dual attention mechanism as a significant component of our super-resolution branch (SRB). We introduce the residual channel attention (RCA) to comprehensively capture channel-wise dependencies and propose a character attention module to capture contextual information between pixels while preserving essential information for global understanding. (4) To model the inter-character relationships within the text sequence, we construct an attention-based decoder. Experimental results on low-resolution benchmark datasets demonstrate the superiority of our proposed method over recent efficient text recognizers, establishing it as an effective solution for recognizing low-resolution text images in complex logistics scenes.

Related work

From a problem-solving perspective, previous related works have focused on two directions: one is text recognition for curved text images in natural scenes, and the other is text recognition for low-resolution text images.

Text recognition methods for curved text images

Traditional optical character recognizers cannot meet complex recognition requirements, and dynamically changing text scenes have spurred the development of numerous STR methods. Researchers have turned to deep learning models to devise innovative solutions. In an effort to enhance the recognition of curved text images, Shi et al. [7] introduced an automatic rectification module to combat curved text. They proposed a spatial transformation network to convert images into a horizontal and more legible format, effectively rectifying various forms of curved text and improving curved text recognition. To streamline the complexity of the rectification module, Shi et al. [15] introduced a novel rectification network. They implemented a thin-plate spline transformer integrated with an attention mechanism, enabling direct control over curved text points and straightening them into a horizontal orientation. Distinguished from [7, 15], Zhan et al. [21] presented a repetitive rectification framework capable of continuous text region rectification, leading to enhanced text recognition performance; in contrast, references [7, 15] rectified text regions only once. Another significant advancement came from Lee et al. [22], who proposed adaptive 2D positional encoding to bolster feature extraction and address curved images. Their approach yielded remarkable results on public datasets, particularly excelling on curved datasets thanks to the inclusion of self-attention layers in its encoding component, which proved highly beneficial for curved text images. Despite these remarkable contributions to curved text recognition, none of these methods have effectively tackled the challenges posed by low-resolution text images.

Text recognition methods for low-resolution images

To tackle the challenge of low-resolution text images, especially given the limited use of semantic information in most current models for text recognition, researchers have proposed various innovative approaches. Yu et al. [23] introduced a novel Semantic Reasoning Network (SRN) to capture rich semantic context information. SRN effectively combines visual and semantic context information, segmenting characters individually and then aligning them horizontally. This method demonstrated improved performance on publicly available datasets. Inspired by the effectiveness of language models in supervising word order generation, Qiao et al. [24] incorporated a pre-trained language model to supervise encoding and decoding processes. Predictive semantic information training was supervised by a pre-trained word embedding model, making this approach more suitable for addressing low-resolution text recognition. Zhang et al. [25] adopted a text recognition framework with automatic search capabilities. This adaptable framework can be fine-tuned for different datasets, drawing inspiration from neural framework search. For recognizing unsupervised sequence data, Zhang et al. [14] proposed a sequence-to-sequence control adaptive network. They employed gate attention similarity units to adjust attentional information distribution, enhancing feature information. To augment the semantic feature, global context block [26] was introduced into text recognition. In contrast to RNN-based decoder [27], Li et al. [28] presented a transformer-based method, leveraging the self-attention mechanism for Optical Character Recognition (OCR). This approach proved highly effective in addressing low-resolution text images, thanks to the self-attention layers’ exceptional information-carrying capabilities. Wan et al. [29] developed an analytical framework tailored for low-quality images. Their semantic-based segmentation model adeptly utilizes visual features to generate improved results.

However, neither of these two kinds of methods explores the effectiveness of a super-resolution network for STR, which is what we focus on in this study. In contrast to these methods, we pay attention to the use of an efficient super-resolution module for handling low-resolution images.

Introduction of Chinese super-resolution dataset

Chinese super-resolution dataset annotation

Considering the insufficient availability of Chinese datasets for real scene text recognition, we are aware that while there are 26 English letters, there are more than 5000 commonly used Chinese characters in Chinese scenes. This presents significant challenges for Chinese scene text recognition. Additionally, it is crucial that the training images closely resemble real scene images, including blurred, missing-stroke, and motion-blurred images, to meet the requirements. Traditional single image super-resolution approaches were trained on datasets generated through down-sampling. However, down-sampling is a simplistic and inappropriate method for real low-resolution text recognition. To tackle these challenges, we have developed a novel Chinese super-resolution dataset specifically designed for Chinese scene text recognition.

The proposed Chinese Text Super-Resolution Dataset (CTSD) includes numbers, English letters, and more than 5000 commonly used Chinese characters. The corpus consists of Chinese articles and desensitized Chinese address text, such as specific communities and streets. To create the super-resolution images, we first extract four to ten Chinese characters from the corpus and then paste them onto background images to generate high-resolution images. Subsequently, to generate corresponding low-resolution images, we employ five different methods to transform the high-resolution images into a low-resolution condition. In total, we have generated 5.67 million high-resolution images, resulting in 5.67 million low-resolution images after the transformation process. We randomly divided the 5.67 million low-resolution images into two parts: 70,000 images for the fixed testing sets and the remaining images for the fixed training sets, as shown in Table 1.

Table 1. Statistics of the proposed Chinese Text Super-resolution Dataset (CTSD).

https://doi.org/10.1371/journal.pone.0294943.t001

Chinese super-resolution dataset explanation

First, we produced 5.67 million high-resolution images and then processed them into a low-resolution condition by designing processing functions, such as blur, stroke sticking, left-right motion blur, up-down motion blur, and missing stroke. We use the ‘random.randint()’ function from the Python language to generate a random integer p from 0 to 9. The random.randint(a, b) function uses a random seed to generate a number that is uniformly distributed within the range [a, b]; hence, by selecting the processing function corresponding to the generated random p, we can generate a variety of low-resolution text images with a uniform distribution. The principle is as follows:

I^{LR} = F_p(I^{HR}), with F_p ∈ {I, F_{gau}, F_{rlmb}, F_{udmb}, F_{sd}, F_{ss}}, p = randint(0, 9)    (1)

where each p determines one processing function. I denotes the identity: if p ∈ [0, 4], the generated text images will not be processed. F_{gau} denotes the blur function, F_{rlmb} denotes the right-and-left motion blur function, F_{udmb} denotes the up-and-down motion blur function, F_{sd} is the missing stroke function (the stroke is missing a part of it), and F_{ss} is the stroke sticking function, indicating that the stroke appears to be in bold.
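For illustration, the selection step described above could be sketched in Python as follows. This is a minimal sketch: the five degradation helpers are assumed to be implemented as in the subsections below, and the exact mapping from p values 5–9 to functions is our own illustrative choice.

import random

def degrade(image, funcs):
    """Randomly pick one processing function per Eq (1): identity for
    p in [0, 4], otherwise one of the five degradation functions."""
    p = random.randint(0, 9)      # uniformly distributed integer in [0, 9]
    if p <= 4:                    # p in [0, 4]: keep the high-resolution image
        return image
    return funcs[p - 5](image)    # p in [5, 9]: apply one degradation

# funcs is a list such as [f_gau, f_rlmb, f_udmb, f_sd, f_ss] (hypothetical
# names for the blur, motion-blur, missing-stroke, and stroke-sticking helpers)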

The motion blur of the image is expressed mathematically as follows:

dst(x, y) = \sum_{i, j} K(i, j) \cdot src(x + i, y + j)    (2)

where dst(x, y) is the processed motion blur image, src(x, y) denotes the original image, K is the motion blur kernel, and (x, y) are the pixel coordinates.

Details of processing functions

Blurred images.

Blurred images exist widely in real scenes, and many characters are difficult to identify. We use different kernels to blur the images randomly, and an example is shown in Fig 2(a). We use Gaussian blur to generate blurred images; a larger kernel size results in more significant image blurring. In our experiments, we first read the high-resolution images, apply each kernel to the original image to generate blurred images, and then calculate the RMSE (Root Mean Square Error) between each blurred image and the original image. In this study, we set the kernel size from 3 to 15, and the kernel size is odd. If the RMSE is greater than the threshold, it indicates that the image is too blurred and cannot be recognized. Overly blurred images would cause the model to focus too much on low-level feature information, which would have a negative impact on the recognition branch. In the process of creating our Chinese super-resolution dataset, we found that when the RMSE is greater than 6.5, text images become excessively blurred and difficult to distinguish. Therefore, we set the threshold to 6.5, ensuring that the RMSE of blurred images is less than 6.5.
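A rough sketch of this blur-and-check procedure in Python with OpenCV is given below. It assumes the images are NumPy arrays; returning the strongest admissible blur is our own illustrative choice, not necessarily the paper's exact policy.

import cv2
import numpy as np

def gaussian_blur_with_check(image, max_rmse=6.5):
    """Blur with odd Gaussian kernels from 3 to 15 and keep only results
    whose RMSE against the original stays below the threshold."""
    candidates = []
    for k in range(3, 16, 2):                       # odd kernel sizes 3, 5, ..., 15
        blurred = cv2.GaussianBlur(image, (k, k), 0)
        diff = blurred.astype(np.float64) - image.astype(np.float64)
        rmse = float(np.sqrt(np.mean(diff ** 2)))
        if rmse < max_rmse:                         # discard overly blurred results
            candidates.append((rmse, blurred))
    if not candidates:
        return image                                # fall back to the original
    return max(candidates, key=lambda c: c[0])[1]   # strongest admissible blur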

Fig 2. Examples of our proposed super-resolution Chinese dataset with paired low and high-resolution images.

The upper image in each pair is the low-resolution image produced by the corresponding function; the lower image is the high-resolution image.

https://doi.org/10.1371/journal.pone.0294943.g002

Stroke sticking.

Stroke sticking is handled by using image morphology processing methods, and an example is shown in Fig 2(b). First, we use the cv2.getStructuringElement function from OpenCV to create a structuring element matrix that defines the kernel shape and size. Then, we apply the created matrix to the input image using the erode function. The erosion operation moves the matrix across the image and retains the minimum pixel value in the covered area where the matrix shape overlaps with the image. This results in a gradual reduction or elimination of white areas in the image, achieving a stroke sticking effect.
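A minimal OpenCV sketch of this erosion step is shown below; the kernel shape, size, and iteration count are illustrative choices rather than the exact values used in the paper.

import cv2

def stroke_sticking(image, ksize=3, iterations=1):
    """Erode the image so white areas shrink and dark strokes thicken,
    making adjacent strokes appear stuck together."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    return cv2.erode(image, kernel, iterations=iterations)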

Up-and-down motion blur.

Up-and-down blur is produced by numerically setting the middle row of the kernel and using it as a new convolutional kernel to simulate the up-and-down motion of the images; an example is shown in Fig 2(c). We use the filter2D function from OpenCV to achieve up-and-down directional blurring of images. We set the kernel size from 5 to 11. For each kernel, we perform the following operation: the middle row of the kernel contains positive values, while the values in the other rows are set to 0. We also calculate the Root Mean Square Error (RMSE), and when it exceeds the threshold of 6.5, we increase the kernel values and recalculate. A code sketch covering both motion-blur directions is given after the next subsection.

Right-and-left motion blur.

Images are motion blurred when they are dynamically captured by cameras, which happens for a large number of images in real scenes. The right-and-left motion blur function blurs images in the right-left direction; these images are processed by numerically setting the middle column of the kernel and using it as a new convolution kernel. An example is shown in Fig 2(d). We use the filter2D function from OpenCV to achieve left-right directional blurring of images. We set the kernel size from 5 to 11. For each kernel, we perform the following operation: the middle column of the kernel contains positive values, while the values in the other columns are set to 0. We also calculate the Root Mean Square Error (RMSE), and when it exceeds the threshold of 6.5, we increase the kernel values and recalculate.
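Both motion-blur variants above can be sketched with a single filter2D helper. This sketch follows the kernel convention described in the text; the kernel size and normalization are illustrative, and the RMSE check is omitted for brevity.

import cv2
import numpy as np

def motion_blur(image, ksize=7, direction="up_down"):
    """Directional motion blur via filter2D: following the text, the middle
    row of the kernel is filled for up-and-down blur and the middle column
    for right-and-left blur; all other entries are zero."""
    kernel = np.zeros((ksize, ksize), dtype=np.float32)
    if direction == "up_down":
        kernel[ksize // 2, :] = 1.0      # positive values in the middle row
    else:                                # "right_left"
        kernel[:, ksize // 2] = 1.0      # positive values in the middle column
    kernel /= kernel.sum()               # normalize to preserve brightness
    return cv2.filter2D(image, -1, kernel)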

Missing stroke.

Missing stroke is the most challenging condition in real scenes. We simulate missing-stroke characters using morphology algorithms, such as dilation and erosion, to produce missing strokes in Chinese characters. An example is shown in Fig 2(e). To achieve a missing stroke effect, we first convert the input image to a gray-scale image. Then, we apply a dilation operation to gradually expand the text strokes, followed by an erosion operation to gradually reduce the strokes, thereby simulating the missing effect. This can be accomplished using OpenCV’s cv2.dilate for dilation followed by cv2.erode for erosion. Adjusting parameters such as the size of the structural element and the number of iterations in the dilation and erosion operations allows control over the intensity of the missing stroke effect.
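A minimal sketch of this dilate-then-erode pipeline is given below; the kernel size and iteration counts are illustrative, and depending on the text polarity (dark text on light background or vice versa) the two operations may need to be swapped.

import cv2
import numpy as np

def missing_stroke(image, ksize=3, iterations=2):
    """Convert to gray-scale, dilate, then erode; thin stroke fragments are
    lost in the process, producing a missing-stroke effect."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    kernel = np.ones((ksize, ksize), np.uint8)
    dilated = cv2.dilate(gray, kernel, iterations=iterations)
    return cv2.erode(dilated, kernel, iterations=iterations)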

Proposed approach

Overall framework

The framework of the proposed model is shown in Fig 3. Our proposed model includes the rectification module, the feature extraction module, the FPN, the text recognition branch, and the super-resolution branch. The text recognition branch contains a sequence module and a decoding module. The super-resolution branch contains super-resolution units, and each unit includes the dual attention module (DAM) and convolution layers. We design a super-resolution branch (SRB) so that the network can continuously learn the difference between low- and high-resolution features; the capability of text recognition is improved by enhancing the feature resolution. To model the inter-character relationships of the text sequence, we constructed an attention-based decoder in the recognition branch.

Rectification module

The framework of the rectification network is based on [7], which includes a localization module, a grid generator, and a sampling generator. The localization module is used to detect the control points in the image and output their locations. The grid generator calculates the mapping relationship of each point corresponding to the control points and generates the coordinate positions of the points {P_1, …, P_n}. The sampling generator samples the point positions to generate a rectified image. The text is first rectified into a horizontal layout before feature extraction, which yields good performance for text images with non-horizontal layouts.

Feature extraction and feature pyramid network

We used ResNet34 [11] to extract text image features. The last layer of the extraction module outputs a feature map whose shape is determined by W and H, where W denotes the width of the input image and H denotes its height. After fusing the high- and low-level semantic information with the FPN network, the fused feature map is fed into the recognition branch and the super-resolution branch, respectively.

SRB

We designed the super-resolution branch so that the network can continuously learn the difference between low- and high-resolution features during backward learning, as shown in Fig 4. The SRB contains the designed residual super-resolution units, which can grasp abundant information from low-resolution features. Each super-resolution unit includes a convolution block and the dual attention mechanism (DAM).

Fig 4. Architecture of our proposed super-resolution branch (SRB).

Each SR-unit includes the convolution layers and the proposed dual attention module (DAM). One of SR-UNIT is shown in red dotted box.

https://doi.org/10.1371/journal.pone.0294943.g004

We describe the dual attention mechanism as follows. As shown in Fig 5, it contains the residual channel attention (RCA) and the character attention module (CAM).

Fig 5. DAM module.

The left part is the residual channel attention; the right part is the character attention module.

https://doi.org/10.1371/journal.pone.0294943.g005

RCA.

We design the residual channel attention (RCA) as a significant part of our DAM, which aims to fully capture channel-wise dependencies. The structure of the RCA is shown in the left part of Fig 5. Here we briefly introduce the processing of the RCA.

Given the feature map X, let X = [X_1, …, X_C]; the channel-wise statistics h ∈ R^{C×1} can be acquired by global covariance pooling. The c-th element of h is computed as:

h_c = H_{gcp}(X_c)    (3)

where H_{gcp} represents global covariance pooling [30]. Compared with the commonly used first-order pooling, global covariance pooling explores feature distributions and captures feature statistics higher than first order to obtain a highly discriminant representation.

To fully exploit feature interdependencies from the aggregated information, a gating mechanism is applied as follows:

w = f(W_b δ(W_a h))    (4)

where W_a and W_b are the weight sets of convolution layers used to set the channel size of the feature to C/r and C, respectively, and f(⋅) and δ(⋅) denote the sigmoid and ReLU functions.

Then, we calculate the output μ of the RCA as follows:

μ = X ⊕ (w ⋅ X)    (5)

where X denotes the input of the RCA, w is obtained by Eq (4) and rescales X channel-wise, and ⊕ denotes element-wise addition.
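A simplified PyTorch-style sketch of such a residual channel attention block is given below. For brevity, global average pooling stands in for the global covariance pooling used in Eq (3), and the reduction ratio r = 16 is an assumed value; this is an illustration rather than the authors' exact module.

import torch
import torch.nn as nn

class RCA(nn.Module):
    """Simplified residual channel attention: pool channel statistics,
    gate them with two 1x1 convolutions (Eq 4), rescale the input
    channel-wise, and add the residual connection (Eq 5)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # stand-in for global covariance pooling
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),  # W_a: C -> C/r
            nn.ReLU(inplace=True),                          # delta
            nn.Conv2d(channels // reduction, channels, 1),  # W_b: C/r -> C
            nn.Sigmoid(),                                   # f
        )

    def forward(self, x):
        w = self.gate(self.pool(x))   # channel-wise weights w
        return x + x * w              # Eq (5): residual addition

# example: y = RCA(256)(torch.randn(2, 256, 16, 64))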

CAM.

As text in an image is organized as a sequence, it is crucial to consider the adjacent rows or columns of a character when computing correlations. This is particularly important for text sequences that are curved, as it allows us to capture contextual information effectively. To address this, we introduce a Character Attention Module (CAM) within our DAM. The CAM, whose structure is presented in the right part of Fig 5, enriches the local features X with context information, broadening the view of the context and selectively aggregating the context based on the spatial attention map.

First, given the 2D feature X, X ∈ R^{C×W×H}, the character attention module applies two convolutional layers with 1 × 1 filters on X to generate two feature maps M and N, respectively, {M, N} ∈ R^{C′×W×H}, where C′ is the number of channels of the feature maps. After obtaining the feature maps M and N, it generates the attention map H through a correlation operation, defined as follows:

φ_{i,j} = M_j^T N_{i,j}    (6)

where j represents each spatial position, N_j is extracted from N over the adjacent context columns or adjacent context rows around position j, N_{i,j} is the i-th element of N_j, i ∈ [1, …, n(H + W − n)], φ_{i,j} ∈ D is the degree of correlation between M_j and N_{i,j}, and D ∈ R^{n(H + W − n) × (W × H)}. A softmax is then applied on D along the channel dimension to compute the attention map H. Here n denotes the number of adjacent columns or rows, and we set n to 3 in this study. Meanwhile, X generates V through a convolutional layer with 1 × 1 filters for feature adaptation, and V_j is extracted from V over the adjacent context columns or adjacent context rows around position j. The context information is enriched by an aggregation operation, which is defined as follows:

γ_j = \sum_{i=1}^{n(H + W − n)} H_{i,j} V_{i,j}    (7)

where γ_j is the feature vector in γ at position j, γ ∈ R^{C×W×H}, and H_{i,j} is the attention map obtained by applying softmax on the correlation matrix D.

Hence, the output of the DAM is:

O = f(μ, γ)    (8)

where f denotes the fusion function, and μ and γ denote the outputs of the RCA and CAM, respectively.

Loss function of SRB.

In order to generate high-resolution text features, we designed L_{sr} for learning the difference between low- and high-resolution features during backward learning. We applied the L1 loss function for L_{sr} as follows:

L_{sr} = \frac{1}{N} \sum_{i=1}^{N} \| SR(I_i^{LR}) − I_i^{HR} \|_1    (9)

where L_{sr} denotes the super-resolution loss, SR(⋅) refers to the super-resolution branch, i is the index of an image in a batch, N is the batch size, and I^{LR} and I^{HR} are the corresponding low-resolution and high-resolution image features, respectively. In our experiments, we performed low-resolution processing on the raw text dataset as the input of our method; the raw text dataset is then regarded as the ground truth.
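As a sketch, the loss of Eq (9) can be written in PyTorch as follows (torch.nn.functional.l1_loss averages over the batch and feature dimensions by default, matching the batch-averaged form above):

import torch.nn.functional as F

def super_resolution_loss(sr_branch, lr_feats, hr_feats):
    """L1 difference between super-resolved low-resolution features and
    the corresponding high-resolution features (Eq 9)."""
    return F.l1_loss(sr_branch(lr_feats), hr_feats)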

Additionally, to address the potential issue of slow execution when using the L1 loss in the super-resolution branch, we employed multiple strategies. First, selecting more efficient optimization algorithms, such as Adam or RMSProp, can accelerate the computation involving the L1 loss. In addition, optimizing the model architecture can reduce the computational burden, increasing the batch size improves computational efficiency (e.g., setting the batch size to 64 or more), and employing parallel computing can distribute the computational load. Lastly, adjusting the hyperparameters of the L1 loss can strike a balance between computational efficiency and performance to address potential speed issues. In this study, the hyperparameter for the L1 loss is set to 0.05, as larger weights would lead to a sparser model but increase computational complexity. These strategies are combined and adjusted based on specific circumstances and requirements to enhance the computational efficiency of the model.

Recognition module

Sequence module.

We use a two-layer BiLSTM containing 256 cells to capture the long-range dependencies in the sequence module. The output of the BiLSTM module is a feature sequence h, h = (h_1, …, h_L), where L is the length of the feature sequence.

Decoder.

The decoder module converts the feature sequence into a character sequence. Reference [12] shows that STR is a sequence problem and employs the CTC mechanism [31] for decoding, which offers a differentiable mechanism that is insensitive to the horizontal positions of characters during end-to-end training, but it cannot model the inter-character relationships of the output and relies on external character language models. Here, we constructed an attention-based decoder as follows:

First, the attention score μ_{t,j} is calculated from the previous hidden state and the feature sequence as given in Eq (10), where σ is the sigmoid function and W_1, W_2, b_1, and b_2 are trainable parameters. Then, the attention score μ_{t,j} is normalized to obtain α_{t,j}:

α_{t,j} = \frac{\exp(μ_{t,j})}{\sum_{j'=1}^{L} \exp(μ_{t,j'})}    (11)

Based on the feature sequence h_j generated by the sequence module, we obtain the current context feature c_t:

c_t = \sum_{j=1}^{L} α_{t,j} h_j    (12)

Finally, we calculate the probability of the predicted result y_t as in Eqs (13) and (14):

p(y_t) = softmax(W_3 s_t + b_3)    (13)

s_t = f(s_{t−1}, y_{t−1}, c_t)    (14)

where f denotes the long short-term memory (LSTM) function, p is the probability vector produced by the softmax function at time step t, W_3 and b_3 are trainable parameters, and s_t is the hidden state. After T steps, the decoder module transforms the feature sequence h into a sequence of length T, whose output is (y_1, …, y_T).
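A compact PyTorch sketch of one such decoding step is given below. The additive scoring here (Linear–Tanh–Linear) is a simplification and not necessarily the exact parameterization of Eq (10); the hidden size and class count are placeholders.

import torch
import torch.nn as nn

class AttnDecoderStep(nn.Module):
    """One step of an attention-based decoder: score the encoder features,
    normalize with softmax (Eq 11), build the context vector (Eq 12),
    update the LSTM state, and predict the character distribution."""
    def __init__(self, hidden, num_classes):
        super().__init__()
        self.score = nn.Sequential(                     # simplified attention scoring
            nn.Linear(2 * hidden, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.rnn = nn.LSTMCell(hidden + num_classes, hidden)
        self.cls = nn.Linear(hidden, num_classes)       # W_3, b_3

    def forward(self, h, state, prev_y):
        # h: (B, L, hidden) feature sequence; prev_y: (B, num_classes) one-hot
        s, c = state
        e = self.score(torch.cat([h, s.unsqueeze(1).expand_as(h)], dim=-1))
        alpha = torch.softmax(e, dim=1)                 # attention weights alpha_{t,j}
        ctx = (alpha * h).sum(dim=1)                    # context feature c_t
        s, c = self.rnn(torch.cat([ctx, prev_y], dim=-1), (s, c))
        return torch.log_softmax(self.cls(s), dim=-1), (s, c)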

Loss function of the recognition.

We used the cross-entropy loss function as the objective function of the recognition branch:

L_{rec} = −\sum_{i} \sum_{j} y_{i,j} \log s_{i,j}    (15)

where i refers to the index of the sample in a batch, j refers to the index of the character in the label, y denotes the ground truth label, and s denotes the recognition result.

Training strategy and loss function

Generally, a super-resolution network requires paired low-resolution and high-resolution datasets to compute the loss function. However, public text datasets usually contain only one of the two. Therefore, we performed low-resolution processing on the raw text dataset to obtain the low-resolution training dataset.

The original datasets were processed under low-resolution conditions using the following principle:

I_i^{LR} = I_i, if i is even;  I_i^{LR} = F(I_i), F ∈ {F_{sd}, F_{mb}, F_{du}, F_{gau}}, if i is odd    (16)

where i refers to the index of the image in the training folder: if it is even, the image is not processed; if it is odd, one of the functions is used for low-resolution processing. F_{sd} represents the function responsible for introducing missing strokes in text, a common occurrence in real scenes that poses recognition challenges; to simulate missing-stroke characters, we employ morphology algorithms like dilation and erosion. F_{mb} corresponds to motion blur, often observed in images dynamically captured in real scenes; the motion blur function processes the middle row of the kernel numerically as a new convolutional kernel to replicate the up-and-down motion in images. F_{du} signifies dynamic down-sampling, an alternative to the commonly used fixed fourfold down-sampling method. F_{gau} denotes the blurring method, where different kernels are randomly applied to blur images, generating diverse types of blurred images relevant to real-world scenarios.

We designed the following loss function for the entire framework:

L = L_{rec} + λ L_{sr}    (17)

where L_{rec} and L_{sr} are the recognition and super-resolution loss functions, respectively, and λ is the hyperparameter weighting L_{sr}.
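As a one-line Python sketch of this joint objective (λ = 0.5 was the best-performing value in the ablation study reported below; the default here is only illustrative):

def total_loss(rec_loss, sr_loss, lam=0.5):
    """Joint objective of Eq (17): recognition loss plus weighted SR loss."""
    return rec_loss + lam * sr_loss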

Experimental results and ablation study

Commonly used evaluation datasets

To verify the effectiveness and robustness of the proposed model, we analyzed its performance on the following six public text datasets for evaluation:

  1. IIIT5K [32] includes 5000 images, among which 2000 are for training and 3000 are for testing. The text is collected from street view and digital images.
  2. SVT [33] has 647 images for testing. Many of the images in SVT are blurred and in low-resolution condition.
  3. IC13 [34] inherits some of its images from IC03 [35]. Non-alphanumeric images are removed. It includes 1015 testing words.
  4. IC15 [9] was captured with Google Glass and is one of the most complex datasets in recent years. Most of its images have various distortions and blurred conditions. It contains 2077 cropped words for evaluation and is the most challenging dataset, including a large number of low-resolution images.
  5. SVT-P [36] contains 645 cropped images from Google Street View that are randomly scrambled and have different perspectives. It is also a challenging dataset, most of whose images are in low-resolution condition.
  6. CUTE80 [37] has 288 images for evaluation. It concentrates on curved text recognition. Most of the images have complex backgrounds and perspective distortion.

Implementation details

Our method was trained on SynText [38], Synth90K [39], and SynthAdd [40] to compare its accuracy against other text recognizers. To ensure uniformity in our experiments, we set the batch size to 512. The images are resized to 64 × 256 and fed into the rectification module. We utilized the ADADELTA optimizer [41] for the recognition branch to minimize the objective and ADAM for the super-resolution branch.

Experiment and analysis

Comparisons on CTSD.

To demonstrate the performance of our proposed model on CTSD, we compared it with Aster [15], SEED [24], and DAN [42]. All of these models were trained and tested on CTSD, and the results are presented in Table 2. Data-Aug and SRB in Table 2 denote the data augmentation function and our proposed super-resolution branch, respectively.

Table 2. Performance of several methods on our proposed CTSD.

https://doi.org/10.1371/journal.pone.0294943.t002

As shown in Table 2, the experiments revealed that adding the super-resolution module improved the recognition rate for Chinese low-resolution images. The comparison between Aster and our model showed that our proposed model produced better recognition results, with a 6.0% improvement. Our model also performed better than DAN, with a 3.2% improvement on CTSD, as shown in Table 2. When we removed the SRB from our model, performance worsened: the accuracy decreased by 4.8%.

We conducted a comparison between DAN and our proposed model, focusing on the recognition of challenging Chinese images through training on CTSD. To expedite convergence, we utilized the pre-trained model from DAN's official repository. We tested several low-quality images from a real Chinese scene dataset provided by Baidu Company in China. As presented in Fig 6, our proposed method performed better than DAN on some low-resolution images; DAN wrongly recognized the characters shown in red font. This finding shows that our proposed method, with a super-resolution branch, can better learn the difference between low-resolution and high-resolution features.

Fig 6. The comparison of models on real Chinese scene dataset.

https://doi.org/10.1371/journal.pone.0294943.g006

We also selected some low-resolution data from CTSD to test the effectiveness of the super-resolution branch, and the results are presented in Fig 7. In each example, the lower image, generated by the super-resolution branch, is clearer and of higher resolution than the upper raw image. This finding shows that the SRB can effectively enhance low-resolution features.

Fig 7. The effectiveness of super-resolution branch on Chinese low-resolution images.

https://doi.org/10.1371/journal.pone.0294943.g007

Comparisons on public datasets and ablation study on SRB.

To validate the effectiveness of our proposed super-resolution module, we compared our model with Aster [15], SEED [24], and DAN [42]. The training datasets used were SynText, Synth90K, and SynthAdd, while the testing datasets were SVT, IC13, IC15 and SVTP. IC15 consists of highly blurred images, while SVTP contains a significant number of low-resolution images. As demonstrated in Table 3, our model equipped with the SRB exhibited superior performance compared to these methods on SVT, SVTP, IC15, and IC13. Additionally, when we conducted experiments by removing the SRB from our proposed method, the accuracy on IC15 and SVTP noticeably decreased. This confirms the valuable role of the SRB in our text recognizer, effectively capturing differences between low and high-resolution features, aligning with our intended objective.

We also selected some challenging images to evaluate the effect of the dual attention mechanism in the super-resolution branch; these images are low-resolution and blurred. The feature heat maps are generated after the last DAM in the super-resolution branch. As shown in Fig 8, for better comparison, we display the heat maps in different colors. Evidently, the attention features are more robustly concentrated on the text area after the DAM, as shown in Fig 8(b). The heat maps are clearer and carry more abundant text information compared with the original images, particularly when the image is low-resolution and blurred, such as “STATION” in Example 1, Fig 8(a), which cannot be recognized by the human eye, and “CRAFT” in Example 3, Fig 8(c). We also find that the DAM is effective for artistic characters, such as “welcome” in Example 4, Fig 8(d).

Fig 8. Comparisons between raw images and heat maps generated after DAM in super-resolution branch.

https://doi.org/10.1371/journal.pone.0294943.g008

Model analysis.

We conducted an experiment to analyze the model’s speed performance. For the sake of a fair comparison, we selected the NVIDIA TITAN Xp GPU as the experimental equipment, which is the same as in Wang et al. [47]. TextSR [20], which includes a GAN super-resolution branch, consumes 1.16 s per batch (128); the feature map passes through both the generator and discriminator networks, resulting in slow training. As shown in Table 4, with the same GPU equipment and batch size, our proposed model consumes 0.92 s per batch (128). Furthermore, we tested the inference speed, as shown in Table 4. Our proposed model achieved the second-best performance in terms of evaluation speed. This suggests that even though we have achieved better recognition performance on multiple datasets and our speed meets the requirements for logistics scene recognition, we still need to focus on further balancing the model’s speed and recognition accuracy in future studies.

Our model was trained for 160 thousand iterations on SynText [38], Synth90K [39], and SynthAdd [40]. As depicted in Fig 9, the two losses converge gradually, demonstrating the crucial roles played by both the SRB and the recognition module. On the one hand, the SRB reduces the disparity between low-resolution and high-resolution features; on the other hand, the recognition module keeps essential information for global understanding and learns the correct sequence of characters. After 150 thousand iterations, the two curves gradually stabilized.

Fig 9. The changing of recognition loss and super-resolution loss during training.

https://doi.org/10.1371/journal.pone.0294943.g009

Our model comprises two primary tasks: one involves the SRB for super-resolution learning, and the other is the recognition module for recognition learning. We employed distinct learning rates for these modules, as the learning rate is a crucial parameter in neural networks. Throughout our experiments, we observed that a learning rate of 1 performed well for the recognition module, whereas a learning rate of 0.0001 proved effective for the SRB. To delve deeper into this, we conducted multiple sets of experiments, varying learning rates from 0.001 to 0.0001 for the SRB and from 0.8 to 1.25 for the recognition module. Our evaluations encompassed challenging datasets, including SVTP, CUTE80, and IIIT5K, with results presented in Fig 10.

Fig 10. The 3D diagram displays the results of our model under different learning rates on three datasets.

The X-axis represents the learning rates of SRB, the Y-axis represents the learning rates of the recognition module, and the Z-axis represents the accuracy of the dataset.

https://doi.org/10.1371/journal.pone.0294943.g010

By observing the recognition accuracy on the SVTP, CUTE80, and IIIT5K datasets (Fig 10), we found that the model achieved the highest performance across all three datasets when using a learning rate of 0.0001 for the SRB and 1.25 for the recognition module. Consequently, we selected these values, 0.0001 for the SRB and 1.25 for the recognition module, as the optimal choices for our model.

Experiments on real express sheet images.

The rapid growth of the e-commerce industry has had a significant impact on the development of the logistics industry. We applied our technology to real express images to evaluate the effectiveness of our method.

We start by using our text detector to identify key information on express sheet images. To safeguard customer privacy, we obscure sensitive details like names and phone numbers. In Fig 11, despite slight image blurring, our model accurately predicts vital information, including sender and recipient details (phone numbers, names, addresses), showcasing its robustness.

Fig 11. The recognition of blurred image.

The areas inside the blue boxes on the left side are the detected text regions, and the right side shows the recognition results corresponding one-to-one to the text areas.

https://doi.org/10.1371/journal.pone.0294943.g011

To demonstrate our model’s ability with low-resolution express sheet images (as seen in Fig 12), we first employ the text detector to locate key regions. We then crop and input these regions into our text recognizer, as displayed in Fig 12. Despite challenges such as low resolution and operator interference, our model accurately predicts phone numbers and Chinese addresses for both sender and recipient. Although one character is missing, as indicated in the red box in the right part of Fig 12, our model performs well on challenging characters. These results demonstrate our model’s effectiveness in handling low-resolution express sheets in the logistics industry.

Fig 12. The recognition of low-resolution image.

Some sensitive information was mosaicked. The left side is the detected logistics sheet image, and the right side shows the corresponding recognition result for each text area.

https://doi.org/10.1371/journal.pone.0294943.g012

Ablation of the hyperparameter λ.

We study different values of the hyperparameter λ in Eq (17) to examine its influence on our proposed method. We set λ from 0.001 to 1.5 and discuss the effect of different values of λ on the recognition accuracy. We selected two public datasets, SVTP and IIIT5K, as the evaluation datasets. The results are shown in Fig 13; it can clearly be seen that the model reaches its best state when the value of λ reaches 0.5, and when the value of λ is greater than 1, the accuracy starts to decrease. This indicates that increasing the value of λ can effectively improve text recognition performance, and the super-resolution branch indeed improves the quality of low-resolution images. However, a larger λ value makes the shared hidden layers concentrate more on low-level image information, which has a negative impact on text recognition, especially when λ is greater than 1.

Fig 13. The result of our model under different value of λ.

The X coordinate represents the different values of λ, and the Y coordinate represents the accuracy on the datasets. (a) denotes the data of SVTP, and (b) denotes the data of IIIT5K.

https://doi.org/10.1371/journal.pone.0294943.g013

We also chose low-resolution and blurred images from the publicly available dataset CUTE80 to test the impact of different λ values on image enhancement. As shown in Fig 14, when λ equals 0.01, the performance is poor. When λ equals 0.1, the SRB can learn the features of the image, and when λ equals 0.5, the inferred image is obviously clearer and of higher resolution than the original image.

Comparisons with excellent methods.

As evident from Fig 15, our model outperforms SEED [24] in recognizing challenging scenes. Despite the low-resolution and curved nature of these images, our model learns richer feature dependencies and effectively models the inter-character relationships within the text sequence. In contrast, SEED incorrectly predicted characters, as highlighted in red font.

Fig 15. The comparison of models on low-resolution public images.

https://doi.org/10.1371/journal.pone.0294943.g015

Through a comparative analysis of our model with recent outstanding methods across four datasets, namely SVT, IC13, IC15, and SVTP, our proposed model demonstrates superior performance, particularly on two low-resolution datasets (SVTP and IC15). We attribute this achievement to the pivotal role played by our proposed SRB in recognizing low-resolution images and FPN in enhancing semantic information. The visual features extracted by the CNN complement global semantic information, while the attention-based decoder learns relationships between inner characters. Fig 16 illustrates the average accuracy metric across these publicly available datasets, consistently demonstrating our model’s superior performance. Furthermore, Table 5 underscores our model’s outstanding results, particularly on SVT, IC15, and SVTP, with exceptional performance on low-resolution datasets SVTP and IC15.

Table 5. Comparisons with excellent models on public datasets.

https://doi.org/10.1371/journal.pone.0294943.t005

Conclusion

We have developed a novel text recognition model that excels on publicly available datasets, particularly when dealing with low-resolution images. Our proposed super-resolution branch effectively enhances low-resolution features, enabling us to tackle the challenges presented by low-resolution logistics images. The application of our technology holds the potential to significantly elevate the intelligence level within the logistics industry. In future research, we intend to incorporate a transformer-based decoder, as observed in recent studies, and compare its performance with the attention-based decoder employed in this study. Furthermore, we will explore the implementation of a multi-scale stage attention mechanism which was widely applied in various low-level vision tasks, with the potential to yield further performance enhancements.

References

  1. Borisyuk F, Gordo A, Sivakumar V. Rosetta: Large Scale System for Text Detection and Recognition in Images. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. London, UK: ACM; 2018. p. 71–79.
  2. Cheng Z, Bai F, Xu Y, Zheng G, Pu S, Zhou S. Focusing Attention: Towards Accurate Text Recognition in Natural Images. In: 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE; 2017. p. 5086–5094.
  3. Lee CY, Osindero S. Recursive Recurrent Nets with Attention Modeling for OCR in the Wild. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE; 2016. p. 2231–2239.
  4. Liu W, Chen C, Wong KY. Char-Net: A Character-Aware Neural Network for Distorted Scene Text Recognition. Proceedings of the AAAI Conference on Artificial Intelligence. 2018;32(1).
  5. Wang J, Hu X. Gated Recurrent Convolution Neural Network for OCR. In: Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, et al., editors. Advances in Neural Information Processing Systems. vol. 30. Curran Associates, Inc.; 2017.
  6. Liu W, Chen C, Wong KY, Su Z, Han J. STAR-Net: A SpaTial Attention Residue Network for Scene Text Recognition. In: Proceedings of the British Machine Vision Conference 2016. York, UK: British Machine Vision Association; 2016.
  7. Shi B, Wang X, Lyu P, Yao C, Bai X. Robust Scene Text Recognition with Automatic Rectification. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE; 2016. p. 4168–4176.
  8. Jaderberg M, Simonyan K, Zisserman A, Kavukcuoglu K. Spatial Transformer Networks. In: Cortes C, Lawrence N, Lee D, Sugiyama M, Garnett R, editors. Advances in Neural Information Processing Systems. vol. 28. Curran Associates, Inc.; 2015.
  9. Karatzas D, Gomez-Bigorda L, Nicolaou A, Ghosh S, Bagdanov A, Iwamura M, et al. ICDAR 2015 competition on Robust Reading. In: 2015 13th International Conference on Document Analysis and Recognition (ICDAR). Tunis, Tunisia: IEEE; 2015. p. 1156–1160.
  10. Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint. 2014.
  11. He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE; 2016. p. 770–778.
  12. Shi B, Bai X, Yao C. An End-to-End Trainable Neural Network for Image-Based Sequence Recognition and Its Application to Scene Text Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2017;39(11):2298–2304. pmid:28055850
  13. Baek J, Kim G, Lee J, Park S, Han D, Yun S, et al. What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea: IEEE; 2019. p. 4714–4722.
  14. Zhang Y, Nie S, Liu W, Xu X, Zhang D, Shen HT. Sequence-To-Sequence Domain Adaptation Network for Robust Text Image Recognition. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE; 2019. p. 2735–2744.
  15. Shi B, Yang M, Wang X, Lyu P, Yao C, Bai X. ASTER: An Attentional Scene Text Recognizer with Flexible Rectification. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2019;41(9):2035–2048. pmid:29994467
  16. Liu Z, Li Y, Ren F, Goh WL, Yu H. SqueezedText: A Real-Time Scene Text Recognition by Binary Convolutional Encoder-Decoder Network. Proceedings of the AAAI Conference on Artificial Intelligence. 2018;32(1).
  17. Zhang Y, Tian Y, Kong Y, Zhong B, Fu Y. Residual Dense Network for Image Super-Resolution. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE; 2018. p. 2472–2481.
  18. Lim B, Son S, Kim H, Nah S, Lee KM. Enhanced Deep Residual Networks for Single Image Super-Resolution. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Honolulu, HI, USA: IEEE; 2017. p. 1132–1140.
  19. Wang X, Yu K, Wu S, Gu J, Liu Y, Dong C, et al. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In: Lecture Notes in Computer Science. Springer International Publishing; 2019. p. 63–79.
  20. Wang W, Xie E, Sun P, Wang W, Tian L, Shen C, et al. TextSR: Content-Aware Text Super-Resolution Guided by Recognition. arXiv preprint. 2019.
  21. Zhan F, Lu S. ESIR: End-To-End Scene Text Recognition via Iterative Image Rectification. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE; 2019. p. 2054–2063.
  22. Lee J, Park S, Baek J, Oh SJ, Kim S, Lee H. On Recognizing Texts of Arbitrary Shapes with 2D Self-Attention. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Seattle, WA, USA: IEEE; 2020. p. 2326–2335.
  23. Yu D, Li X, Zhang C, Liu T, Han J, Liu J, et al. Towards Accurate Scene Text Recognition With Semantic Reasoning Networks. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA, USA: IEEE; 2020. p. 12110–12119.
  24. Qiao Z, Zhou Y, Yang D, Zhou Y, Wang W. SEED: Semantics Enhanced Encoder-Decoder Framework for Scene Text Recognition. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA, USA: IEEE; 2020. p. 13525–13534.
  25. Zhang H, Yao Q, Yang M, Xu Y, Bai X. AutoSTR: Efficient Backbone Search for Scene Text Recognition. In: Computer Vision—ECCV 2020. Springer International Publishing; 2020. p. 751–767.
  26. Cao Y, Xu J, Lin S, Wei F, Hu H. GCNet: Non-Local Networks Meet Squeeze-Excitation Networks and Beyond. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). Seoul, Korea: IEEE; 2019. p. 1971–1980.
  27. Graves A, Liwicki M, Fernandez S, Bertolami R, Bunke H, Schmidhuber J. A Novel Connectionist System for Unconstrained Handwriting Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2009;31(5):855–868. pmid:19299860
  28. Li B, Tang X, Qi X, Chen Y, Xiao R. Hamming OCR: A Locality Sensitive Hashing Neural Network for Scene Text Recognition. arXiv preprint. 2020.
  29. Wan Z, Zhang J, Zhang L, Luo J, Yao C. On Vocabulary Reliance in Scene Text Recognition. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA, USA: IEEE; 2020. p. 11422–11431.
  30. Dai T, Cai J, Zhang Y, Xia ST, Zhang L. Second-Order Attention Network for Single Image Super-Resolution. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE; 2019. p. 11057–11066.
  31. Graves A, Fernández S, Gomez F, Schmidhuber J. Connectionist temporal classification. In: Proceedings of the 23rd International Conference on Machine Learning—ICML'06. Pittsburgh, PA, USA: ACM Press; 2006. p. 369–376.
  32. Mishra A, Alahari K, Jawahar C. Scene Text Recognition using Higher Order Language Priors. In: Proceedings of the British Machine Vision Conference 2012. British Machine Vision Association; 2012.
  33. Wang K, Babenko B, Belongie S. End-to-end scene text recognition. In: 2011 International Conference on Computer Vision. Barcelona, Spain: IEEE; 2011. p. 1457–1464.
  34. Karatzas D, Shafait F, Uchida S, Iwamura M, i Bigorda LG, Mestre SR, et al. ICDAR 2013 Robust Reading Competition. In: 2013 12th International Conference on Document Analysis and Recognition (ICDAR). Washington, DC, USA: IEEE; 2013. p. 1484–1493.
  35. Lucas SM, Panaretos A, Sosa L, Tang A, Wong S, Young R. ICDAR 2003 robust reading competitions. In: 2003 7th International Conference on Document Analysis and Recognition (ICDAR). Edinburgh, UK: IEEE Comput. Soc; 2003. p. 682–687.
  36. Phan TQ, Shivakumara P, Tian S, Tan CL. Recognizing Text with Perspective Distortion in Natural Scenes. In: 2013 IEEE International Conference on Computer Vision. Sydney, NSW, Australia: IEEE; 2013. p. 569–576.
  37. Risnumawan A, Shivakumara P, Chan CS, Tan CL. A robust arbitrary text detection system for natural scene images. Expert Systems with Applications. 2014;41(18):8027–8048.
  38. Gupta A, Vedaldi A, Zisserman A. Synthetic Data for Text Localisation in Natural Images. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE; 2016. p. 2315–2324.
  39. Jaderberg M, Simonyan K, Vedaldi A, Zisserman A. Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition. arXiv preprint. 2014.
  40. Li H, Wang P, Shen C, Zhang G. Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition. Proceedings of the AAAI Conference on Artificial Intelligence. 2019;33(1):8610–8617.
  41. Zeiler MD. ADADELTA: An Adaptive Learning Rate Method. arXiv preprint. 2012.
  42. Wang T, Zhu Y, Jin L, Luo C, Chen X, Wu Y, et al. Decoupled Attention Network for Text Recognition. Proceedings of the AAAI Conference on Artificial Intelligence. 2020;34(07):12216–12224.
  43. Liao M, Zhang J, Wan Z, Xie F, Liang J, Lyu P, et al. Scene Text Recognition from Two-Dimensional Perspective. Proceedings of the AAAI Conference on Artificial Intelligence. 2019;33(01):8714–8721.
  44. Xie Z, Huang Y, Zhu Y, Jin L, Liu Y, Xie L. Aggregation Cross-Entropy for Sequence Recognition. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE; 2019. p. 6531–6540.
  45. Luo C, Jin L, Sun Z. MORAN: A Multi-Object Rectified Attention Network for scene text recognition. Pattern Recognition. 2019;90:109–118.
  46. Wang Y, Lian Z. Exploring Font-independent Features for Scene Text Recognition. In: Proceedings of the 28th ACM International Conference on Multimedia. Seattle, WA, USA: ACM; 2020. p. 1900–1920.
  47. Wang C, Liu CL. Multi-branch guided attention network for irregular text recognition. Neurocomputing. 2021;425:278–289.
  48. Xiao Z, Nie Z, Song C, Chronopoulos AT. An extended attention mechanism for scene text recognition. Expert Systems with Applications. 2022;203:117377.