
Adaptive enhancement of shoulder x-ray images using tissue attenuation and type-II fuzzy sets

Abstract

Shoulder X-ray images typically have low contrast and high noise levels, making it challenging to distinguish and identify subtle anatomical structures. While existing image enhancement techniques are effective at improving contrast, they often neglect sharpness and may even amplify blur and noise. Such techniques can improve local detail contrast but fail to maintain overall image clarity and the distinction between target and background. To address these issues, we propose a novel image enhancement method that simultaneously improves both the contrast and sharpness of shoulder X-ray images. The method integrates an automatic tissue attenuation technique, which enhances image contrast by removing non-essential tissue components while preserving important tissues and bones. Additionally, we apply an improved type-II fuzzy set algorithm to further optimize image sharpness. By enhancing contrast and sharpness together, the method significantly improves image quality and detail distinguishability. On images from the MURA dataset, the proposed method achieved the best or second-best results across five no-reference image quality assessment metrics. In comparative studies, the method demonstrated significant performance advantages over 10 contemporary X-ray image enhancement algorithms, and ablation experiments confirmed the effectiveness of each module.

Introduction

Shoulder X-ray imaging plays a crucial role in the healthcare industry, widely used for disease diagnosis, prevention, and in medical research and education, significantly enhancing the accuracy of disease diagnoses and improving patient health outcomes. However, shoulder radiographs often come with several flaws, such as noise [1], low contrast [2], blurriness [3], and a limited dynamic range [4]. Noise may mask essential details critical for early disease detection. Low contrast complicates the differentiation of tissue types, potentially leading to misinterpretation [5]. Blurriness reduces the sharpness necessary for identifying organ boundaries or lesions, and a restricted dynamic range can cause loss of detail in both overly dark and bright areas [6]. These quality issues collectively risk diagnostic accuracy, potentially leading to delayed or inappropriate treatment decisions, thereby affecting patient outcomes [7]. Image enhancement techniques that reduce noise, blur, and artifacts can assist doctors in accurately identifying and interpreting details within X-ray images, thereby increasing diagnostic precision.

Methods based on tissue attenuation [8] have been extensively applied for X-ray image enhancement, reducing nonessential information and enhancing the contrast of key tissues, organs, and bones. This enables doctors to quickly recognize and analyze structures critical for diagnosis. Nevertheless, these methods often overlook the characteristics of low sharpness and high noise levels inherent in X-ray images. Given the low sharpness, crucial details may be lost, and the presence of significant noise levels can interfere with the performance of tissue attenuation-based enhancement algorithms, leading to suboptimal enhancement results. Overcoming these challenges may require the integration of multiple image processing techniques to effectively enhance the images.

Recently, significant advancements have been made in traditional and deep learning-based methods for X-ray image enhancement. For traditional enhancement methods, Koonsanit et al. [9] proposed an adaptive histogram method to enhance image detail, texture, and local contrast. Veluchamy et al. [10] proposed an adaptive gamma correction with weighted histogram distribution method to improve contrast while preserving natural color and richer details. Fu et al. [11] presented an X-ray image enhancement algorithm based on an improved Retinex-Net [12], which effectively enhanced visual clarity, contrast, and detail while reducing noise in X-ray images. For deep learning methods, Ma et al. [13] proposed a bi-directional GAN [14] with local structure and illumination constraints, improving medical image quality for various clinical tasks. Madmad et al. [15] described a method for enhancing X-ray images by separating them into local textures and smooth parts using convolutional neural networks trained with synthetic data, resulting in improved visualization. Zhong et al. [16] proposed a multi-scale attention generative adversarial network for enhancing medical images by addressing issues of illumination distribution, texture details, and artifact noise, achieving superior enhancement results. However, these advancements often overlook the substantial improvements that can be achieved by incorporating sharpness enhancement techniques.

We propose a dual-image enhancement strategy that simultaneously improves image contrast and sharpness, significantly enhancing overall image quality. First, an automatic tissue attenuation technique preserves essential tissues, organs, and bones while removing unnecessary components, thus enhancing contrast. Then, a sharpness enhancement algorithm based on type-II fuzzy sets is applied to improve image sharpness. The combination of these strategies leads to a significant improvement in X-ray image quality. Notably, our method outperforms many traditional X-ray image enhancement techniques, as shown in Fig 1, where it clearly displays skeletal structures and key tissues. In summary, our research makes four key contributions to the field of shoulder X-ray imaging:

  • We propose a tissue attenuation-based contrast enhancement technique that removes non-essential tissue components, enhancing image contrast while preserving important tissues and bones.
  • We propose a type-II fuzzy set-based sharpness enhancement algorithm that significantly improves shoulder X-ray image sharpness by emphasizing subtle, diagnostically critical details.
  • Our approach combines sharpness and contrast enhancement, significantly boosting image quality on both fronts.
  • We validate our method on the shoulder subset of the MURA dataset for X-ray image enhancement, evaluating it with five no-reference image quality assessment metrics; it achieved the best or second-best performance compared with 10 image enhancement methods.
Fig 1. Our proposed enhancement technique exhibits enhanced performance in elevating X-ray image quality relative to established methodologies, including LCA [47], FCE [44], FCCE [43], and ECE [41].

https://doi.org/10.1371/journal.pone.0316585.g001

This study presents a comprehensive method aimed at addressing issues of low contrast and poor sharpness in shoulder X-ray images. The method combines multiple processing strategies to enhance both contrast and sharpness, thereby improving support for medical diagnosis. The paper begins by introducing the importance of shoulder X-ray imaging and discussing the challenges of image enhancement under low sharpness conditions. It then reviews traditional methods and deep learning techniques used in X-ray image processing, providing a theoretical foundation for the proposed approach. The methods section details the application of tissue attenuation techniques and type-II fuzzy sets in image enhancement. The experiments section presents qualitative and quantitative experiments that validate the effectiveness of the method, and an ablation study further confirms the contribution of each module. The discussion section summarizes the current research and addresses its limitations. Finally, the conclusion summarizes the research findings, emphasizes the improvement in medical imaging quality, and looks ahead to future research directions. A complete list of abbreviations used in the paper is provided in Table A1 in S1 Appendix.

Related work

Recent advancements in medical imaging, particularly in X-ray image enhancement, have been driven by the fusion of traditional image processing techniques and deep learning algorithms. Traditional methods, including histogram equalization, retinex theory, gamma correction, and spatial filtering, have been fundamental in adjusting image pixel values to improve contrast and clarity. This has enabled medical practitioners to make more accurate diagnoses by enhancing image visualization and highlighting critical details. On the other hand, deep learning approaches such as convolutional neural networks [17] and generative adversarial networks [18] have revolutionized the field by automatically extracting and utilizing complex patterns from large datasets to produce clearer, higher-contrast images. In summary, image enhancement techniques are crucial for downstream tasks in computer vision. Through effective image enhancement and preprocessing, low-quality medical images can be improved, helping professionals make critical decisions based on the images [19].

Traditional medical image enhancement methods

In the realm of medical imaging, particularly X-ray image enhancement, traditional enhancement methods have played an important role in addressing various challenges associated with image quality. A range of techniques has been developed to tackle issues such as excessive brightness, foggy appearance due to dense tissue, non-uniform luminance, low contrast, and the preservation of natural colors and details. For example, Huang et al. [8] addressed images with excessive brightness and a foggy appearance caused by dense tissue by decomposing X-ray images into tissue and detail components. They enhanced visual contrast through adaptive tissue attenuation and dynamic range stretching and implemented an ensemble framework to fuse images from dark and bright regions; their results effectively highlighted organs, bone structures, and significant small details. Few methods are suitable for DICOM [20] images, which often suffer from non-uniform luminance and low contrast. Zhao et al. [21] introduced a luminance-level modulation and gradient modulation technique, compressing the luminance range to improve visual quality and enhancing image details to increase contrast while avoiding staircase artifacts, outperforming current leading methods. However, its reliance on luminance and gradient modulation can make it difficult to preserve fine details in images with highly variable intensity distributions, requiring careful tuning for optimal performance in different clinical scenarios. Tao et al. [22] designed a retinex-based image enhancement framework that uses region covariance filters at different scales to estimate the illumination and adopts contrast-limited adaptive histogram equalization, a non-local means filter, and a guided filter to enhance contrast, eliminate noise, and increase the details of the original images, respectively. To address insufficient contrast in cardiac MRI videos, Jabbar et al. [23] proposed a method based on fuzzy image technology. Through steps such as fuzzification, membership function adjustment, and defuzzification, the method effectively enhances video contrast, demonstrating improved performance compared to other approaches.

Recent traditional image enhancement methods for X-rays have evolved to better address the multifaceted challenges of medical imaging. Yadav et al. [24] utilized an entropy curve and homomorphic filtering to improve image contrast and detail without amplifying noise and artifacts, mitigating the typical under- or over-enhancement in dark or bright areas. Nevertheless, the complexity of the filtering process may present challenges for real-time application in clinical environments. Jabbar et al. [25] proposed a local fuzzy inference technique to enhance the contrast and visual quality of musculoskeletal ultrasound images, demonstrating improved performance and efficiency, with potential applications as a preprocessing step for tasks like image segmentation and 3D reconstruction. To address noise, improper exposure, and obscured details in digital radiography images, Liu et al. [26] proposed a method using wavelet multiscale decomposition with Shannon–Cosine wavelets. This approach segmented images across frequencies, enhancing diagnostic information while suppressing noise through region-specific attenuation coefficients, effectively improving image clarity and robustness. To deal with low contrast in medical images, Khan et al. [27] proposed the Fast Local Laplacian Filter, which selectively enhances low-contrast areas while preserving edges and fine details, improving visual quality and reducing noise levels to aid in accurate diagnosis.

However, image enhancement based on traditional methods faced several challenges. These methods often led to a trade-off between detail enhancement and noise reduction, failing to uniformly improve image quality across different regions. They relied heavily on manual adjustments, making the process time-consuming and inconsistent. Traditional approaches also struggled with the diverse and complex nature of medical images, which contain a wide range of contrasts and details critical for diagnosis. Their one-size-fits-all strategy frequently overlooked the nuanced differences between tissues and conditions, potentially masking important diagnostic information. This highlighted the need for more advanced, adaptable solutions capable of addressing the unique challenges of X-ray image enhancement. The various traditional methods for X-ray image enhancement discussed in this section are summarized in Table 1.

Table 1. Summary of traditional image enhancement methods for medical images and their applications.

https://doi.org/10.1371/journal.pone.0316585.t001

Deep learning medical image enhancement methods

Building on the foundation set by traditional image enhancement techniques, deep learning strategies have introduced a paradigm shift in medical imaging, particularly by enhancing image quality with a focus on detail preservation and overcoming some limitations of traditional methods. Confronting the challenge of limited paired samples, Yu et al. [28] developed a fuzzy self-guided structure retention generative adversarial network. This network comprises a self-guided structure retention module and an illumination distribution correction module: the former preserves essential structural information in nerve fibers, while the latter harmonizes illumination distribution for clearer medical structure visualization, producing enhanced images with uniform illumination and rich texture. However, the model may encounter challenges with generalization to datasets beyond nerve fiber images, as its architecture is tailored to specific structural features, potentially limiting its broader applicability. Addressing the quality of fundus images, Wu et al. [29] introduced a semi-supervised GAN with anatomical structure preservation to navigate around the limitations of specific prior knowledge requirements and generalizability issues plaguing existing enhancement methods. Zhong et al. [16] identified that maintaining texture details and preventing boundary artifact noise is a significant limitation of current enhancement techniques. They proposed the multi-scale attention generative adversarial network, which is designed specifically for medical images and performs well with unpaired datasets, demonstrating notable improvements in image analysis and segmentation tasks. Nevertheless, reliance on unpaired datasets, while advantageous, could also introduce variability in outcomes, making consistent performance across different medical imaging modalities more difficult to achieve. Ma et al. [13] presented the novel structure and illumination constrained GAN, categorizing images by quality and applying constraints that balance overall quality enhancement with detail preservation. Furthermore, Qiu et al. [30] developed an image enhancement method combining curvelet transformations, frequency band broadening, and CNNs, focusing on edge detail and noise reduction in medical images. Their approach, involving artifact mitigation and resolution enhancement, resulted in improved diagnostic quality across multiple imaging modalities. While deep learning methods in medical image enhancement boast significant advantages, such as the ability to learn complex patterns and improve image quality autonomously, their drawbacks include heavy dependence on the availability and quality of datasets and a lack of interpretability. These methods require large, well-annotated datasets to train effectively, which can be a challenge in medical settings due to privacy concerns and the labor-intensive nature of annotation.

Additionally, deep learning models are often described as "black boxes" because their decision-making processes are not easily understandable, raising concerns about reliability and trustworthiness in critical medical applications. Phan et al. [31] introduced a novel two-stage approach for enhancing the quality and privacy of X-ray medical images. The first stage utilizes generative adversarial networks to effectively denoise the images, enhancing visibility of critical anatomical structures by eliminating noise and artifacts. Subsequently, the method integrates number-theoretic transform polynomial multiplication to accelerate the encryption and decryption processes, ensuring privacy protection without relying on Kyber encryption [32]. Madmad et al. [15] presented a new method for improving X-ray images by separating them into local textures and smooth areas using a dual-branch convolutional neural network. The approach focused on distinguishing the detailed textures from the overall smooth shapes in the images. Trained on synthetic data, this CNN effectively decomposed images into their two key components. The technique emphasized enhancing the texture details, resulting in images that outperformed those produced by traditional methods for high dynamic range visualization, such as tone-mapping algorithms.

However, deep learning for medical image enhancement, such as X-ray imaging, faced key challenges, including a reliance on extensive, annotated datasets that were difficult to obtain due to privacy and expertise requirements. The lack of interpretability in black-box models raised trust issues among healthcare professionals. These methods struggled with generalization, high computational demands, security risks, and biases in training data, highlighting the need for ethical and effective deep learning in medical image enhancement. The deep learning-based methods for medical image enhancement covered in this section are outlined in Table 2.

Table 2. Summary of deep learning-based image enhancement methods for medical images and their applications.

https://doi.org/10.1371/journal.pone.0316585.t002

Method

To address the inherent challenges of X-ray imaging, namely the low contrast and low sharpness that complicate diagnosis, we propose a two-fold enhancement approach. First, our method leverages tissue attenuation to mitigate the interference of non-critical tissue components that may otherwise overlap with vital organs or tissues, obscuring crucial details and leading to diagnostic ambiguities. By selectively attenuating certain tissue elements, we preserve and accentuate fine details, thereby enriching image contrast. Concurrently, the sharpness of X-ray images plays a critical role, as it underpins the ability of healthcare professionals to discern subtle yet significant anomalies within internal structures. To ameliorate sharpness issues, we incorporate an algorithm rooted in type-II fuzzy sets for sharpness enhancement, effectively increasing the perceptibility of the imagery. This comprehensive methodology forms the bedrock of our proposed method, integrating contrast enhancement with sharpness improvement to support more effective diagnostic imaging.

This section comprises three parts. First, we present a method to enhance the contrast of low-dynamic-range shoulder X-ray images. We then propose a method for improving the sharpness of medical images. Finally, we introduce the overall structure of the algorithm: the overall flowchart of the proposed method is shown in Fig 2, and Fig 3 shows the detailed algorithm flowchart. To summarize, Algorithm 1 outlines the key implementation steps of our ensemble framework for X-ray enhancement.

Algorithm 1. Pseudocode of the shoulder X-ray image enhancement procedure.

Algorithm 1 X-ray image enhancement procedure
Input: Raw X-ray image I(y)
Output: Enhanced X-ray image L(y)
0: Compute the normalized image Inor(y) using Eq (1)
1: Calculate the local maximum G(y) and local minimum T(y) based on Eqs (2) and (3)
2: For each pixel p in Inor(y):
3:   Compute the removable factor β(p) using Eq (4)
4:   Calculate the removable component R(p) based on Eq (5)
5:   Adjust brightness consistency using the factor ψ(p) from Eq (6)
6:   Generate the contrast-enhanced image E(p) by Eq (7)
7: End
8: Apply a type-II fuzzy set on E(y), yielding f(y) using Eq (8)
9: Calculate the mean μ and standard deviation σ by Eqs (9) and (10)
10: For each region r in f(y):
11:   Calculate the Hamacher T-Conorm upper limit u(r) by Eq (11)
12:   Compute the lower limit w(r) by Eq (12)
13:   Apply the T-Conorm to enhance sharpness, yielding T(r) by Eq (13)
14:   Apply gamma correction for clarity, producing L(r) using Eq (14)
15: End
16: Return the enhanced X-ray image L(y)
Fig 2. The flowchart of the proposed method outlines a two-stage image enhancement algorithm.

The first stage focuses on contrast enhancement. The second stage aims at sharpness enhancement.

https://doi.org/10.1371/journal.pone.0316585.g002

Fig 3. Algorithm flow chart.

The input image is processed through stages, yielding an enhanced final output.

https://doi.org/10.1371/journal.pone.0316585.g003

Contrast enhancement module based on tissue attenuation

To resolve the low contrast issue previously mentioned, we propose a method inspired by tissue attenuation techniques [8] that selectively removes specific tissue components, retaining the detail components and ultimately enhancing image contrast. First, we perform normalization on the input image. The purpose of this step is to standardize the data scale, reduce the dependency on the input range, and improve the model's ability to generalize to new data. The normalization function is defined as: (1)

In this context, y represents the spatial domain index, uniquely identifying the position of each pixel. Imax and I(y) respectively denote the highest grayscale value of the entire image and the grayscale value at a local window in the input image. D(y) represents the detail component, while R(y) signifies the removable component. The calculation of local maximum and minimum is based on a windowed range, where Loy represents the windowed area determined by the position of pixel y. G(y) denotes the local maximum, as defined by Eq (2), while the local minimum T(y) is determined by Eq (3). Notably, T(y) represents the highest attenuation content within tissues, encompassing both tissue and fat components.

(2)(3)
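The normalization and local-extrema steps can be sketched in NumPy. The exact forms of Eqs (1)-(3) are not reproduced in this excerpt, so this sketch makes two explicit assumptions: normalization divides by the global maximum Imax, and the local extrema are taken over a square window Lo_y centred on each pixel (the window size and edge-replicated padding are illustrative choices, not the paper's):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def normalize(img):
    # Eq (1) sketch (assumption): scale grayscale values into [0, 1]
    # by the global maximum Imax.
    return img.astype(float) / img.max()

def local_extrema(img, win=15):
    # Eqs (2)-(3) sketch: local maximum G(y) and local minimum T(y)
    # over a square window Lo_y centred on each pixel; window size and
    # edge-replicated padding are illustrative assumptions.
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    windows = sliding_window_view(padded, (win, win))
    return windows.max(axis=(-1, -2)), windows.min(axis=(-1, -2))
```

T(y), the local minimum, is the quantity later interpreted as the highest-attenuation tissue content, so its window size directly controls how coarsely tissue regions are estimated.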

The removal factor is used to determine the proportion of tissue components to be eliminated. It does not require a manual setting but can be ascertained solely using the local maxima and minima. Local maximum and minimum values from Eqs (2) and (3) are utilized to generate the removable factor β(y) as Eq (4), where var(T(y)) is the variance of the local minimum. This approach enables the identification of the most suitable value for the removable factor based on the image, thereby preserving the essential tissue components within the detailed component.

(4)

The removable component R(y) is jointly determined by the highest attenuation tissue content T(y) and the removal factor β(y), with the objective of eliminating non-essential tissue components while retaining critical tissue elements. The calculation formula for the removable component is as follows: (5)

The parametric adjustment for brightness consistency, denoted by ψ(y) [33] and computed using Eq (6), provides automatic brightness control. Brightness control is a crucial factor in adjusting the visual effect of images during contrast enhancement: it helps maintain the natural appearance of the image while improving the visibility of details and the dynamic range of the image.

(6)

The contrast-enhanced X-ray image E(y) is calculated using Eq (7), representing the image after contrast enhancement. Here Inor(y) represents the normalized image, while G(y) and R(y) denote the local maximum value component and the removal component of the image, respectively. Additionally, ψ(y) serves as the brightness control parameter.

(7)

Type-II fuzzy set sharpness enhancement module

To address the issue of insufficient sharpness in X-ray images discussed earlier, we employ a sharpness enhancement algorithm based on type-II fuzzy sets, which effectively improves the sharpness of X-ray images. First, we perform a fuzzification operation. f(y) represents the image after normalization, where E(y) denotes the grayscale value of the pixel at spatial index y, and max and min represent the maximum and minimum values of the given image, respectively. The normalization method is given in Eq (8): (8)

We then compute the upper bound u(y) and lower bound w(y) of the type-II fuzzy membership function. To do so, it is first necessary to calculate the mean μ and standard deviation σ of the normalized image, following the formulas provided in Eqs (9) and (10). Inspired by a robust optimization method [34] that utilizes mean and variance, we apply similar measures to quantify the distribution of the normalized image. The mean and standard deviation are calculated as follows: (9) (10)

Here, fi corresponds to the specific positional element in f(y), and n represents the number of elements fi. The Hamacher T-Conorm upper limit u(y), based on the gamma correction method proposed by Kallel et al. [35] can be found in Eq (11), where α is a hyperparameter used to adjust the level of contrast enhancement, and falls within the range of 0 < α < 1. It is particularly noteworthy that when α > 0.6, the enhancement of sharpness becomes relatively pronounced.

(11)

The method for calculating the lower limit is based on the contrast stretching technique from [36] and does not require the introduction of any hyperparameters. The formula for calculating the lower bound is as follows: (12)

To compute the sharpness-enhanced image, one must first calculate the Hamacher t-conorm, as follows: (13)

Subsequently, T(y) is passed through a transformer-based gamma correction method [37] to obtain the enhanced image L(y), with the formula below: (14)
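The type-II fuzzy stage can be sketched end to end under explicit assumptions. The Hamacher t-conorm below is the standard γ = 0 member of the Hamacher family, S(a, b) = (a + b − 2ab)/(1 − ab); the upper and lower membership bounds f**alpha and f**(1/alpha) are common stand-ins in type-II fuzzy enhancement, not the exact forms the paper derives from [35] and [36], and the μ/σ terms of Eqs (9)-(10) are omitted:

```python
import numpy as np

def hamacher_t_conorm(a, b):
    # Standard Hamacher t-conorm (gamma = 0 member of the family):
    # S(a, b) = (a + b - 2ab) / (1 - ab), with S(1, 1) defined as 1.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    denom = 1.0 - a * b
    safe = np.where(np.isclose(denom, 0.0), 1.0, denom)
    return np.where(np.isclose(denom, 0.0), 1.0,
                    (a + b - 2.0 * a * b) / safe)

def type2_fuzzy_sharpen(E, alpha=0.7, gamma=0.8):
    # Sketch of the type-II fuzzy sharpness stage (Eqs 8-14).
    # ASSUMPTION: f**alpha and f**(1/alpha) stand in for the paper's
    # upper/lower bounds u(y) and w(y); gamma is an illustrative value.
    f = (E - E.min()) / (E.max() - E.min() + 1e-12)  # Eq (8): fuzzify
    u = f ** alpha            # assumed upper membership bound (Eq 11)
    w = f ** (1.0 / alpha)    # assumed lower membership bound (Eq 12)
    T = hamacher_t_conorm(u, w)   # Eq (13): combine the two bounds
    return T ** gamma             # Eq (14): gamma correction for clarity
```

Because u ≥ f ≥ w for f in [0, 1] and 0 < alpha < 1, the t-conorm pushes mid-tone memberships upward while keeping the output in [0, 1], which is what produces the perceived sharpening.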

Experimental results

In this section, we describe the experimental setup used to evaluate the efficacy of our proposed method, including the datasets used, the benchmark methods compared against, and the no-reference image quality assessment metrics utilized. Our method is compared against 10 leading and conventional X-ray image enhancement techniques to illustrate the advantages of our proposed strategy. Moreover, through ablation studies, we validate the effectiveness of our proposed modules. The experimental results show that our proposed method performs strongly in terms of accuracy and efficiency, achieving the best or second-best results on five evaluation metrics. The ablation study further underscores that the sharpness enhancement module based on type-II fuzzy sets, coupled with the contrast enhancement algorithm grounded in tissue attenuation, significantly advances the clarity and detail representation of X-ray images. Finally, we also conducted generalization tests to validate the effectiveness and robustness of the proposed method on X-ray images of different body parts and across various scenarios.

Experimental settings

In this section, we describe the settings of our experiments in detail. Specifically, we conduct a comprehensive validation of our method on the MURA [38] dataset using five no-reference image quality evaluation metrics, comparing it against 10 traditional X-ray image enhancement methods. Additionally, we conducted ablation experiments and generalization tests. Below, we detail the MURA dataset used in this study, the comparative methods selected, and the no-reference image quality evaluation metrics employed for assessment.

Dataset.

The MURA dataset, developed by Stanford University, is an extensive library of musculoskeletal radiographic images, containing 40,561 images manually labeled as normal or abnormal by professional radiologists. This dataset is not only vast in scale, covering a wide range of musculoskeletal conditions, but also supports automated analysis, anomaly detection, and research in medical image processing with its high-quality and diverse data. The public availability of MURA has encouraged participation from researchers worldwide, fostering the development of medical image processing technologies. The decision to use the MURA dataset for X-ray image enhancement research is based on the abundance of precisely annotated images it offers, ensuring data reliability and practicality. Additionally, the variety of musculoskeletal conditions represented in the dataset adds to its diversity, providing a rich resource for research and an ideal platform for evaluating the robustness of our proposed method. On this basis, we randomly selected 12 images from the shoulder subset of the MURA dataset as our experimental data, aiming to deepen the understanding and exploration of image enhancement techniques in the medical field.

Compared methods.

In our comparative experiments, we utilized the MURA dataset as a benchmark to conduct a comprehensive evaluation of 10 traditional X-ray image enhancement techniques. These selected methods represent a variety of different techniques within the traditional medical image enhancement domain, including CECI [39], CLAHE [40], ECE [41], EGIF [42], FCCE [43], FCE [44], GC [45], HLIPSCS [46], LCA [47], RCEA [48]. The objective of this comparison was to delve into a detailed analysis and assessment of these effective methods in improving the quality of shoulder X-ray images, thereby offering deeper insights and evaluations to the field of medical image processing.

No-reference image quality assessment metrics.

The quality of shoulder X-ray images primarily depends on factors such as contrast, dynamic range, spatial resolution, noise, and artifacts [49], all of which influence the clarity and diagnostic utility of the images. To effectively evaluate X-ray image enhancement techniques, we utilized multiple no-reference image quality assessment metrics that comprehensively quantify these aspects of visual quality. Among them, the Blind Image Quality Model Evaluator (BIQME) [50] focuses on the naturalness of images, with high scores indicating that the image is closer to a natural visual experience, demonstrating the effectiveness of image processing technologies in maintaining natural colors and textures. The Fog Aware Density Evaluator (FADE) [51] specializes in assessing the clarity and visibility of images, where lower scores signify a low haze density, thus indicating clearer images. The Average Gradient (AG) [52] measures the detail contrast of an image, with higher scores meaning better visibility and detail contrast. Information Entropy (IE) [53] evaluates the richness of information in an image, where an increase in score reflects an increase in image details and information content. The MA [54] metric provides a comprehensive assessment of the visual quality of images produced by the algorithm, with higher scores indicating higher visual quality and better alignment with human visual perception standards. These metrics collectively consider multiple dimensions such as the naturalness, clarity, contrast, and color richness of images, providing a comprehensive set of quantitative evaluation methods for shoulder X-ray image enhancement techniques. This includes both the prominence of image details and the overall visual experience, making the evaluation more objective and comprehensive.
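Of these five metrics, BIQME, FADE, and MA are learned models and are not reproduced here, but AG and IE have closed-form definitions that are easy to compute directly. The sketch below uses one common formulation of each (definitions vary slightly across the literature):

```python
import numpy as np

def average_gradient(img):
    # Average Gradient (AG): mean magnitude of the horizontal and
    # vertical first differences; higher values indicate stronger
    # detail contrast. One common formulation among several in use.
    g = img.astype(float)
    gx = np.diff(g, axis=1)[:-1, :]
    gy = np.diff(g, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def information_entropy(img):
    # Information Entropy (IE): Shannon entropy (in bits) of the 8-bit
    # grayscale histogram; higher values indicate richer content.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

As a sanity check, a constant image scores zero on both metrics, while an image split evenly between two gray levels has exactly one bit of entropy.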

Qualitative and quantitative comparisons

Qualitative comparisons.

Fig 4 presents a qualitative comparison of the proposed method with 10 other methods. The (a) series denotes the input images. Turning to the (b) series, the images appear flat in contrast. This lack of contrast may flatten subtle grayscale differences, suppressing detail recognition. This is especially detrimental where tissues of similar density must be differentiated, potentially leading to insufficient diagnosis of certain disease states. In the (c) series, despite some edge enhancement that might benefit the delineation of structures, the effects of over-enhancement are evident, producing non-physiological edges and unnatural texture contrasts that may mislead diagnosis. In the (d) series, the processing appears to over-emphasize contrast between light and dark areas, resulting in overexposed or excessively suppressed regions. This not only introduces visual noise but may also mask subtle pathological changes. The black blotches observed in the (f) and (g) series may indicate algorithmic deficiencies in processing low-light or low-signal areas; information in these regions might be crucial yet is obscured by an incorrect enhancement strategy, affecting the overall readability of the image. The foggy effect in the (h) series further illustrates the complexity of the problem. Such fogging could be due to excessive smoothing by denoising algorithms, which reduces image contrast and clarity and makes diagnosis more difficult. In the (i) series, the balanced treatment of brightness and contrast may aim to reduce these problems, yet areas of blurring from over-processing remain, which is unacceptable in diagnostic situations requiring precise measurements. In the (j) series, detail richness is preserved, but managing brightness and darkness remains critical.
Overly bright areas in the image may lead to information loss, while overly dark areas may conceal potential pathological features. The enhancement shown in the (k) series may not have adequately balanced contrast enhancement with detail preservation, resulting in over-smoothing or increased noise in some images. Taking the (l) series as an example, we can observe that our algorithm enhances contrast and sharpness and improves the visualization of anatomical structures, particularly at the boundaries between bone and soft tissue, while avoiding degradations such as foggy blurring or artifact generation. This improvement is achieved by optimizing image contrast and sharpness while preserving the natural textures and shadows of the tissues, which is vital for identifying potential subtle pathologies. Finally, initial feedback from a radiologist indicates that the method significantly enhances the visibility of fine structures and anatomical details; the improvement in contrast and edge delineation, particularly in the lung and rib areas, contributes to more accurate diagnosis of subtle abnormalities.

thumbnail
Fig 4. Comparison of X-ray enhancement results between the proposed method and 10 other traditional X-ray enhancement techniques.

The images are arranged from left to right, with Image 1 to Image 10 representing the results of the respective traditional enhancement techniques. (a) Input, (b) CECI [39], (c) CLAHE [40], (d) ECE [41], (e) EGIF [42], (f) FCCE [43], (g) FCE [44], (h) GC [45], (i) HLIPSCS [46], (j) LCA [47], (k) RCEA [48], and (l) proposed method.

https://doi.org/10.1371/journal.pone.0316585.g004

Quantitative comparisons.

In our comprehensive evaluation of X-ray image enhancement, contrast optimization, and noise reduction, a series of quantitative metrics were employed. Specifically, applying BIQME, FADE, AG, IE, and MA as five evaluation standards on the MURA dataset, we compared the performance of 10 different image processing methods. The analysis revealed that our proposed strategy achieved the top performance in four of the five evaluation metrics and second place in the remaining metric on the selected dataset; further validation on a larger dataset is necessary to confirm its overall effectiveness. These results indicate strong capabilities in enhancing image quality, adjusting contrast, and reducing noise. In the comparative experiments, the performance of our method against the other 10 methods on the BIQME metric, shown in Tables 3 and 8, indicates that our method ranked best on eight of the recorded images and obtained the best overall score. On the FADE metric, our method achieved the best results across all twelve images shown in Table 4 and ranked first in the overall average score in Table 8. Under the AG metric, our approach also performed best for the 12 images displayed in Table 5 and led the overall average scores in Table 8. In the IE evaluation, our method secured the best scores for three of the twelve evaluated images, with the remaining nine ranked second or very close to second in Table 6, and it ranked second in the overall score summary of Table 8. On the MA metric, our algorithm achieved the best or second-best results on 10 evaluated images in Table 7 and ranked first in the overall score summary of Table 8.
Additionally, Table 9 presents the computational complexity analysis, comparing the execution times of various algorithms. The proposed method, while delivering enhanced shoulder X-ray image quality, has a relatively higher computational cost, which may limit its real-time applicability. Future work will focus on optimizing the algorithm to improve efficiency without compromising enhancement performance.

thumbnail
Table 3. Comparative assessment of BIQME scores across diverse methods for distinct images. The best result is bold, and the second-best result is underlined.

https://doi.org/10.1371/journal.pone.0316585.t003

thumbnail
Table 4. Comparative assessment of FADE scores across diverse methods for distinct images. The best result is bold, and the second-best result is underlined.

https://doi.org/10.1371/journal.pone.0316585.t004

thumbnail
Table 5. Comparative assessment of AG scores across diverse methods for distinct images. The best result is bold, and the second-best result is underlined.

https://doi.org/10.1371/journal.pone.0316585.t005

thumbnail
Table 6. Comparative assessment of IE scores across diverse methods for distinct images. The best result is bold, and the second-best result is underlined.

https://doi.org/10.1371/journal.pone.0316585.t006

thumbnail
Table 7. Comparative assessment of MA scores across diverse methods for distinct images. The best result is bold, and the second-best result is underlined.

https://doi.org/10.1371/journal.pone.0316585.t007

thumbnail
Table 8. Comparison of average metric value between different methods. The best result is bold, and the second-best result is underlined.

https://doi.org/10.1371/journal.pone.0316585.t008

thumbnail
Table 9. Comparison of execution time for different X-ray image enhancement methods.

https://doi.org/10.1371/journal.pone.0316585.t009

Ablation study

This section provides a detailed analysis of the effectiveness of the contrast and sharpness enhancement modules of the proposed algorithm on the MURA dataset through an ablation study. By removing the sharpness enhancement module, we assess its overall impact on the approach, illustrating its significant role in algorithm performance. By likewise removing the contrast enhancement module, we further dissect the intricacies of the system, revealing the integral part this module plays alongside sharpness enhancement in boosting the algorithm's efficacy.

Firstly, a qualitative comparative experiment was conducted. In Fig 5, "(a) Input" refers to the original image, characterized by a relatively blurry appearance and low contrast, where key skeletal and tissue regions are not sufficiently highlighted, potentially limiting precise image analysis. "(b) -w/o REM" shows the results after removing the sharpness enhancement module: contrast and detail are significantly improved over the original, making bone information and key tissues clearer, yet compared to the full model this enhancement still falls short in overall image sharpness. "(c) -w/o CEM" illustrates the method after removing the contrast enhancement module; it is visually apparent that images generated without CEM lack the clarity needed to distinguish bones from tissues effectively. "(d) Full Model" represents our complete method, which yields an all-encompassing enhancement: not only are the details of bones and key tissues more pronounced, but the overall sharpness and clarity of the image are also significantly improved, greatly enhancing the observer's ability to discern details. This is especially critical when conducting a detailed interpretation of images. Table 10 presents the results of the ablation experiment conducted on the MURA dataset, where key modules were removed to compare the performance of the various model configurations using the five no-reference evaluation metrics, aiming to reveal the specific contribution of each independent module to overall model performance. Specifically, "-w/o REM" denotes the model without the sharpness enhancement module, while "-w/o CEM" denotes the model without the contrast enhancement module.
It is evident from Table 10 that the sharpness and contrast enhancement modules are both crucial to model performance. For instance, the full model scores lowest on the FADE metric, signifying superior performance in enhancing image quality; at the same time, the BIQME, AG, IE, and MA metrics all score highest for the full model, further validating the importance of each module. Moreover, the markedly high AG score highlights the advantage of the full model over the ablated variants lacking either key module, underscoring the synergy between the modules, especially in enhancing image clarity and contrast.
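The ablation protocol above can be sketched as a small harness that toggles each module and scores every configuration. The `cem` and `rem` functions here are hypothetical toy stand-ins (a global contrast stretch and an unsharp mask) for the paper's contrast and sharpness modules, and the scoring uses intensity standard deviation rather than the five metrics:

```python
import numpy as np

def cem(img):
    """Toy global contrast stretch standing in for the contrast enhancement module."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9) * 255.0

def rem(img):
    """Toy unsharp mask standing in for the sharpness enhancement module."""
    img = img.astype(np.float64)
    blur = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return np.clip(img + 0.5 * (img - blur), 0, 255)

def run_ablation(img):
    """Score each module configuration with a simple detail proxy (intensity std)."""
    configs = {
        "full": lambda x: rem(cem(x)),
        "-w/o REM": cem,   # contrast only
        "-w/o CEM": rem,   # sharpness only
    }
    return {name: float(fn(img).std()) for name, fn in configs.items()}
```

In the paper's setting, the same loop would apply the actual CEM/REM modules and replace the scoring function with BIQME, FADE, AG, IE, and MA.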

thumbnail
Fig 5.

Comparative visualization reveals: (a) input low-contrast images; (b) -w/o REM results with enhanced contrast but no sharpness improvement; (c) -w/o CEM outcomes with unclear tissue differentiation; (d) Full Model enhancements showing marked sharpness and clarity improvements.

https://doi.org/10.1371/journal.pone.0316585.g005

thumbnail
Table 10. The results of the ablation experiments of the different modules.

https://doi.org/10.1371/journal.pone.0316585.t010

Generalization test

The generalization test for the shoulder X-ray image enhancement algorithm evaluates its robustness and ability to generalize across various types of X-ray images, extending beyond shoulder X-rays to other anatomical structures. This is critical for assessing the algorithm's applicability and reliability in real-world clinical scenarios, where X-ray images of different body parts are frequently analyzed. For this evaluation, the proposed method was applied to a broader range of X-ray images randomly sampled from the MURA dataset, specifically images of the humerus, forearm, wrist, and fingers. As depicted in Fig 6, the top row presents the original X-ray images, while the bottom row displays the enhanced images produced by the algorithm. The results demonstrate that the proposed method improves the visibility of critical bone structures and effectively enhances image contrast and sharpness, even in complex anatomical regions. This suggests potential for broad clinical utility, providing reliable and consistent enhancement across various X-ray imaging scenarios and thereby aiding medical professionals in more precise interpretations.

thumbnail
Fig 6. Generalization test of the proposed method on X-ray images of various body parts, including the humerus, forearm, wrist, and fingers, taken from the MURA dataset.

https://doi.org/10.1371/journal.pone.0316585.g006

Discussion

In this study, we proposed a novel enhancement method for shoulder X-ray images, improving both sharpness and contrast by integrating tissue attenuation techniques with a type-II fuzzy set-based algorithm. Our method significantly enhances image quality, particularly for low-contrast and low-sharpness images, preserving details and highlighting critical features, which improves diagnostic and clinical utility. When compared with 10 traditional enhancement methods on a subset of the MURA dataset, our method showed superior results, especially in challenging shoulder X-ray scenarios: it enhances contrast and sharpness, making bone structures clearer and aiding the detection of potential lesions.
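To make the type-II fuzzy step concrete, the sketch below shows a generic formulation in the spirit of classical type-II fuzzy contrast enhancement, where a type-I membership function is bracketed by lower and upper memberships (powers of mu) and the two are fused. This is an illustrative sketch with an assumed simple average fusion and parameter `alpha`, not the improved algorithm proposed in the paper:

```python
import numpy as np

def type2_fuzzy_enhance(img, alpha=0.85):
    """Generic type-II fuzzy contrast enhancement (illustrative sketch).

    A type-I membership mu is bracketed by an upper membership mu**alpha
    (alpha < 1 raises membership) and a lower membership mu**(1/alpha)
    (which lowers it); their fusion models the uncertainty in the
    fuzzification itself."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    mu = (img - lo) / (hi - lo + 1e-9)     # type-I membership in [0, 1]
    mu_upper = mu ** alpha                  # upper bound of the footprint
    mu_lower = mu ** (1.0 / alpha)          # lower bound of the footprint
    fused = (mu_upper + mu_lower) / 2.0     # assumed simple average fusion
    return np.clip(np.rint(fused * 255.0), 0, 255).astype(np.uint8)
```

Because mu**alpha boosts mid-range memberships more than extremes, the fused mapping stretches mid-gray bone/soft-tissue transitions while leaving the darkest and brightest pixels anchored, which is the intuition behind using the footprint of uncertainty for sharpening low-contrast radiographs.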

However, the method has limitations. It may not perform well on extremely blurry or noisy images, and its computational cost increases with larger datasets, limiting the scalability for real-time or large-scale applications. Additionally, the algorithm’s performance is sensitive to parameter settings, requiring fine-tuning for optimal results. Automating parameter selection or developing adaptive methods would be crucial for consistent performance across various imaging scenarios.

Conclusion

In this study, we proposed a comprehensive enhancement framework for shoulder X-ray images, which improves image contrast through tissue attenuation and enhances sharpness using a type-II fuzzy set-based approach. We tested the method on shoulder X-ray images from the MURA dataset against 10 traditional enhancement techniques, where it achieved the best or second-best results across five no-reference quality metrics. Additionally, the generalization tests confirmed its effectiveness on X-ray images of other body parts, highlighting its broad applicability. The method improves shoulder X-ray image quality, particularly for low-contrast, detail-poor images, aiding radiologists in more accurate analyses and better diagnostic accuracy. Future work could focus on optimizing the algorithm for improved efficiency and exploring its potential in more complex medical imaging scenarios. Our research aims to enhance diagnostic accuracy, reduce follow-up rates, and accelerate clinical decision-making.

Supporting information

S1 Dataset. Partial MURA dataset for experimental evaluation.

https://doi.org/10.1371/journal.pone.0316585.s001

(ZIP)

References

1. Goyal Bhawna, Agrawal Sunil, and Sohi BS. Noise issues prevailing in various types of medical images. Biomedical & Pharmacology Journal, 11(3):1227, 2018.
2. Wu Hao-Tian, Huang Qi, Cheung Yiu-ming, Xu Lingling, and Tang Shaohua. Reversible contrast enhancement for medical images with background segmentation. IET Image Processing, 14(2):327–336, 2020.
3. Zohair Al-Ameen and Ghazali Sulong. Deblurring computed tomography medical images using a novel amended Landweber algorithm. Interdisciplinary Sciences: Computational Life Sciences, 7(3):319–325, 2015.
4. Munadi K., Muchtar K., Maulina N., & Pradhan B. (2020). Image enhancement for tuberculosis detection using deep learning. IEEE Access, 8, 217897–217907.
5. Geng M., Meng X., Yu J., Zhu L., Jin L., Jiang Z., … & Lu Y. (2021). Content-noise complementary learning for medical image denoising. IEEE Transactions on Medical Imaging, 41(2), 407–419.
6. Caseneuve G., Valova I., LeBlanc N., & Thibodeau M. (2021). Chest X-ray image preprocessing for disease classification. Procedia Computer Science, 192, 658–665.
7. Rahman T., Khandakar A., Qiblawey Y., Tahir A., Kiranyaz S., Kashem S. B. A., … & Chowdhury M. E. (2021). Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Computers in Biology and Medicine, 132, 104319. pmid:33799220
8. Huang Ching-Chun and Nguyen Manh-Hung. X-ray enhancement based on component attenuation, contrast adjustment, and image fusion. IEEE Transactions on Image Processing, 28(1):127–141, 2018. pmid:30130186
9. Koonsanit Kitti, Thongvigitmanee Saowapak, Pongnapang Napapong, and Thajchayapong Pairash. Image enhancement on digital X-ray images using N-CLAHE. In 2017 10th Biomedical Engineering International Conference (BMEICON), pages 1–4. IEEE, 2017.
10. Veluchamy Magudeeswaran and Subramani Bharath. Image contrast and color enhancement using adaptive gamma correction and histogram equalization. Optik, 183:329–337, 2019.
11. Fu Kailun, Wang Jue, and Li Bo. X-ray image enhancement based on improved Retinex-Net. In 2022 7th International Conference on Automation, Control and Robotics Engineering (CACRE), pages 194–198. IEEE, 2022.
12. Wei Chen, Wang Wenjing, Yang Wenhan, and Liu Jiaying. Deep Retinex decomposition for low-light enhancement. arXiv preprint arXiv:1808.04560, 2018.
13. Ma Yuhui, Liu Jiang, Liu Yonghuai, Fu Huazhu, Hu Yan, Cheng Jun, et al. Structure and illumination constrained GAN for medical image enhancement. IEEE Transactions on Medical Imaging, 40(12):3955–3967, 2021. pmid:34339369
14. Goodfellow Ian, Pouget-Abadie Jean, Mirza Mehdi, Xu Bing, Warde-Farley David, Ozair Sherjil, Courville Aaron, and Bengio Yoshua. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020.
15. Madmad Tahani, Delinte Nicolas, and De Vleeschouwer Christophe. CNN-based morphological decomposition of X-ray images for details and defects contrast enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2170–2180, 2021.
16. Zhong Guojin, Ding Weiping, Chen Long, Wang Yingxu, and Yu Yu-Feng. Multi-scale attention generative adversarial network for medical image enhancement. IEEE Transactions on Emerging Topics in Computational Intelligence, 2023.
17. Guo R., Xu Y., Tompkins A., Pagnucco M., & Song Y. (2024). Multi-degradation-adaptation network for fundus image enhancement with degradation representation learning. Medical Image Analysis, 97, 103273. pmid:39029157
18. Ma Y., Liu J., Liu Y., Fu H., Hu Y., Cheng J., … & Zhao Y. (2021). Structure and illumination constrained GAN for medical image enhancement. IEEE Transactions on Medical Imaging, 40(12), 3955–3967. pmid:34339369
19. Ren W., Bashkandi A. H., Jahanshahi J. A., AlHamad A. Q. M., Javaheri D., & Mohammadi M. (2023). Brain tumor diagnosis using a step-by-step methodology based on courtship learning-based water strider algorithm. Biomedical Signal Processing and Control, 83, 104614.
20. Mustra Mario, Delac Kresimir, and Grgic Mislav. Overview of the DICOM standard. In 2008 50th International Symposium ELMAR, volume 1, pages 39–44. IEEE, 2008.
21. Zhao Chenyi, Wang Zeqi, Li Huanyu, Wu Xiaoyang, Qiao Shuang, and Sun Jianing. A new approach for medical image enhancement based on luminance-level modulation and gradient modulation. Biomedical Signal Processing and Control, 48:189–196, 2019.
22. Tao Fuyu, Yang Xiaomin, Wu Wei, Liu Kai, Zhou Zhili, and Liu Yiguang. Retinex-based image enhancement framework by using region covariance filter. Soft Computing, 22:1399–1420, 2018.
23. Jabbar S. I., & Aladi A. Q. (2019, October). Automated contrast enhancement of MRI video imaging. In 2019 IEEE 13th International Conference on Application of Information and Communication Technologies (AICT) (pp. 1–5). IEEE.
24. Yadav Priyanshu Singh, Gupta Bhupendra, and Lamba Subir Singh. A new approach of contrast enhancement for medical images based on entropy curve. Biomedical Signal Processing and Control, 88:105625, 2024.
25. Jabbar S. I., Aladi A. Q., Day C., & Chadwick E. (2021). A new method of contrast enhancement of musculoskeletal ultrasound imaging based on fuzzy inference technique. Biomedical Physics & Engineering Express, 7(5), 055003. pmid:34161931
26. Liu Meng, Mei Shuli, Liu Pengfei, Gasimov Yusif, and Cattani Carlo. A new X-ray medical-image-enhancement method based on multiscale Shannon–cosine wavelet. Entropy, 24(12):1754, 2022. pmid:36554159
27. Khan S. S., Khan M., & Alharbi Y. (2023). Fast local Laplacian filter based on modified Laplacian through bilateral filter for coronary angiography medical imaging enhancement. Algorithms, 16(12), 531.
28. Yu Yu-Feng, Zhong Guojin, Zhou Yi, and Chen Long. FS-GAN: Fuzzy self-guided structure retention generative adversarial network for medical image enhancement. Information Sciences, 642:119114, 2023.
29. Wu Hao-Tian, Cao Xin, Gao Ying, Zheng Kaihan, Huang Jiwu, Hu Jiankun, and Tian Zhihong. Fundus image enhancement via semi-supervised GAN and anatomical structure preservation. IEEE Transactions on Emerging Topics in Computational Intelligence, 2023.
30. Qiu Tao, Wen Chang, Xie Kai, Wen Fang-Qing, Sheng Guan-Qun, and Tang Xin-Gong. Efficient medical image enhancement based on CNN-FBB model. IET Image Processing, 13(10):1736–1744, 2019.
31. Phan Quoc Bao, Nguyen Linh, Nguyen Tuy Tan, and Nguyen Dinh C. Privacy-preserving X-ray image enhancement: A GAN-cybersecurity-based approach. In 2024 IEEE International Conference on Consumer Electronics (ICCE), pages 1–4. IEEE, 2024.
32. Azouaoui Melissa, Bronchain Olivier, Hoffmann Clément, Kuzovkova Yulia, Schneider Tobias, and Standaert François-Xavier. Systematic study of decryption and re-encryption leakage: the case of Kyber. In International Workshop on Constructive Side-Channel Analysis and Secure Design, pages 236–256. Springer, 2022.
33. Kumar Sonu and Bhandari Ashish Kumar. Automatic tissue attenuation-based contrast enhancement of low-dynamic X-ray images. IEEE Transactions on Radiation and Plasma Medical Sciences, 6(5):574–582, 2021.
34. Zou Y., Dai X., Li W., & Sun Y. (2015). Robust design optimisation for inductive power transfer systems from topology collection based on an evolutionary multi-objective algorithm. IET Power Electronics, 8(9), 1767–1776.
35. Kallel Fathi, Sahnoun Mouna, Ben Hamida Ahmed, and Chtourou Khalil. CT scan contrast enhancement using singular value decomposition and adaptive gamma correction. Signal, Image and Video Processing, 12:905–913, 2018.
36. Asokan Anju, Popescu Daniela E, Anitha J, and Hemanth D Jude. Bat algorithm based non-linear contrast stretching for satellite image enhancement. Geosciences, 10(2):78, 2020.
37. Huang Zhenghua, Fang Hao, Li Qian, Li Zhengtao, Zhang Tianxu, Sang Nong, and Li Yongjiu. Optical remote sensing image enhancement with weak structure preservation via spatially adaptive gamma correction. Infrared Physics & Technology, 94:38–47, 2018.
38. Rajpurkar Pranav, Irvin Jeremy, Bagul Aarti, Ding Daisy, Duan Tony, Mehta Hershel, Yang Brandon, Zhu Kaylie, Laird Dillon, Ball Robyn L, et al. MURA: Large dataset for abnormality detection in musculoskeletal radiographs. arXiv preprint arXiv:1712.06957, 2017.
39. Zohair Al-Ameen. Contrast enhancement for color images using an adjustable contrast stretching technique. International Journal of Computing, 17(2):74–80, 2018.
40. Zuiderveld Karel. Contrast limited adaptive histogram equalization. In Graphics Gems IV, pages 474–485. 1994.
41. Huang Shih-Chia, Cheng Fan-Chieh, and Chiu Yi-Sheng. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Transactions on Image Processing, 22(3):1032–1041, 2012. pmid:23144035
42. Lu Zongwei, Long Bangyuan, Li Kang, and Lu Fajin. Effective guided image filtering for contrast enhancement. IEEE Signal Processing Letters, 25(10):1585–1589, 2018.
43. Parihar Anil Singh, Verma Om Prakash, and Khanna Chintan. Fuzzy-contextual contrast enhancement. IEEE Transactions on Image Processing, 26(4):1810–1819, 2017. pmid:28186893
44. Kumar Reman and Bhandari Ashish Kumar. Fuzzified contrast enhancement for nearly invisible images. IEEE Transactions on Circuits and Systems for Video Technology, 32(5):2802–2813, 2021.
45. Poynton Charles. Digital Video and HD: Algorithms and Interfaces. Elsevier, 2012.
46. Zohair Al-Ameen, Zainab Khalid Younis, and Shamil Al-Ameen. HLIPSCS: A rapid and efficient algorithm for image contrast enhancement. International Journal of Computing and Digital System, 2021.
47. Zohair Al-Ameen and Awni Hasan Zaman. A low-complexity algorithm for contrast enhancement of digital images. International Journal of Image, Graphics and Signal Processing, 14(2):60, 2018.
48. Albakri Asmaa Y and Al-Ameen Zohair. Rapid contrast enhancement algorithm for natural contrast-distorted color images. AL-Rafidain Journal of Computer Sciences and Mathematics, 15(2):73–90, 2021.
49. Tompe A., & Sargar K. (2020). X-ray image quality assurance.
50. Gu Ke, Tao Dacheng, Qiao Jun-Fei, and Lin Weisi. Learning a no-reference quality assessment model of enhanced images with big data. IEEE Transactions on Neural Networks and Learning Systems, 29(4):1301–1313, 2017. pmid:28287984
51. Choi Lark Kwon, You Jaehee, and Bovik Alan Conrad. Referenceless prediction of perceptual fog density and perceptual image defogging. IEEE Transactions on Image Processing, 24(11):3888–3901, 2015.
52. Liu Risheng, Ma Long, Zhang Jiaao, Fan Xin, and Luo Zhongxuan. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10561–10570, 2021.
53. Núñez JA, Cincotta PM, and Wachlin FC. Information entropy: an indicator of chaos. In Chaos in Gravitational N-Body Systems: Proceedings of a Workshop held at La Plata (Argentina), July 31–August 3, 1995, pages 43–53. Springer, 1996.
54. Ma Chao, Yang Chih-Yuan, Yang Xiaokang, and Yang Ming-Hsuan. Learning a no-reference quality metric for single-image super-resolution. Computer Vision and Image Understanding, 158:1–16, 2017.