Abstract
This research study focuses on calculating five entropy measures (Shannon, Rényi, Havrda-Charvát, Arimoto, and Tsallis) for the Burr XII distribution, utilizing progressive Type-II censoring. The study derives maximum likelihood estimators for each entropy measure and constructs two-sided confidence intervals. A comprehensive simulation study evaluates the performance of these estimators across various sample sizes and parameter settings. The results demonstrate that the proposed methods achieve low bias and variance under different censoring schemes, with coverage probabilities consistently close to the nominal level. Additionally, an application to the Wisconsin Breast Cancer Database highlights the practical utility of the entropy estimators in distinguishing between benign and malignant cases. Among the measures evaluated, the Rényi and Havrda-Charvát entropy measures exhibited the most robust performance in both the simulation and the real-life data analysis.
Citation: Helu A, Samawi H (2025) Assessing uncertainty: A study of entropy measures for Burr XII distribution under progressive Type-II censoring. PLoS One 20(8): e0329086. https://doi.org/10.1371/journal.pone.0329086
Editor: Omar El Deeb, The University of Warwick, UNITED KINGDOM OF GREAT BRITAIN AND NORTHERN IRELAND
Received: March 26, 2025; Accepted: July 10, 2025; Published: August 8, 2025
Copyright: © 2025 Helu, Samawi. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The Wisconsin Breast Cancer (Diagnostic) Dataset (WBCD) is publicly available. Researchers can download it from the UCI Machine Learning Repository at this link: (https://archive.ics.uci.edu/dataset/17/breast+cancer+wisconsin+diagnostic). The dataset has all the features we mentioned in our analysis (like “perimeter-worst”) and includes “Diagnosis” (benign or malignant) as the main point of reference.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
Entropy, introduced by Shannon in 1948, is a fundamental concept for quantifying uncertainty in random variables. It measures the average information content of a variable, where higher entropy indicates greater uncertainty and a wider spread in the probability distribution. Conversely, lower entropy suggests a more concentrated distribution with reduced uncertainty. The concept of entropy has been widely applied across various scientific disciplines. For example, [1] explored its significance in the insurance industry, particularly in evaluating risk and the severity of extreme events, where greater entropy correlates with increased variability and potential losses. In reliability studies, [2–5] highlighted the relevance of entropy in assessing the uncertainty of failure distributions, noting that higher entropy is often associated with less reliable outcomes. Furthermore, entropy-based methodologies have been employed in fields such as neurobiology, statistics, cryptography, quantum computing, linguistics, and bioinformatics, as reported by [6–8]. These applications underscore the importance of entropy in both theoretical and applied research. Recent studies have also demonstrated the utility of entropy estimators in neuroscience and biomedical diagnosis, particularly in analyzing electrophysiological signals such as EEG data [9,10]. These applications highlight the versatility of entropy-based methods across diverse fields. However, the present study focuses on the methodological development and evaluation of entropy measures for lifetime data analysis under progressive Type-II censoring, particularly within the context of reliability modeling and failure time uncertainty.
1.1 The Burr XII distribution and its applications
The Burr XII distribution is an extremely useful model, especially when dealing with non-monotonic failure rates, such as unimodal or bathtub-shaped failure rates, which are widespread in reliability and biological research. Unlike the Weibull distribution, which is often the first choice for analyzing monotonic failure rates and can accommodate both negatively and positively skewed density shapes, the Burr XII distribution provides more flexibility in modeling non-monotonic failure rates.
The Burr XII model is crucial in reliability engineering as it predicts how long a system or component may last. In particular, this is true when only partial data is available as a result of early terminations of tests. Various medical outcomes can be modeled using this framework, commonly used in survival analysis. Finance uses the Burr XII distribution to analyze extreme events, such as financial crises, thereby making it an essential resource for managing risk. Likewise, environmental scientists use the Burr XII distribution to study complex patterns of nature, such as rainfall patterns, which are crucial for the effective management of natural resources. Moreover, the cumulative distribution function and the reliability function of the Burr XII distribution have closed forms, allowing for simplified percentile and likelihood calculations under censored data. For more on Burr XII and its applications, see [11–13].
Let X be a random variable from the Burr XII(α, β) distribution. The probability density function (pdf) and cumulative distribution function (cdf) of X are given by:

f(x; α, β) = αβ x^(β−1) (1 + x^β)^(−(α+1)), x > 0, (1)

F(x; α, β) = 1 − (1 + x^β)^(−α), x > 0, (2)

where α > 0 and β > 0 are the shape parameters. The corresponding survival function S(t) and hazard function h(t) are:

S(t) = (1 + t^β)^(−α), h(t) = αβ t^(β−1) / (1 + t^β).
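As a quick sanity check on these functional forms, here is a minimal stdlib-only Python sketch (the function names are our own) that evaluates the pdf, cdf, survival, and hazard functions and verifies the identities S(t) = 1 − F(t) and h(t) = f(t)/S(t):

```python
def burr_pdf(x, a, b):
    """Burr XII pdf: f(x) = a*b*x^(b-1) * (1 + x^b)^-(a+1), x > 0."""
    return a * b * x ** (b - 1) * (1.0 + x ** b) ** (-(a + 1))

def burr_cdf(x, a, b):
    """Burr XII cdf: F(x) = 1 - (1 + x^b)^-a."""
    return 1.0 - (1.0 + x ** b) ** (-a)

def burr_survival(t, a, b):
    """Survival function: S(t) = (1 + t^b)^-a."""
    return (1.0 + t ** b) ** (-a)

def burr_hazard(t, a, b):
    """Hazard function: h(t) = a*b*t^(b-1) / (1 + t^b)."""
    return a * b * t ** (b - 1) / (1.0 + t ** b)

# sanity checks: S(t) = 1 - F(t) and h(t) = f(t)/S(t)
a, b, t = 2.0, 1.5, 0.8
assert abs(burr_survival(t, a, b) - (1.0 - burr_cdf(t, a, b))) < 1e-12
assert abs(burr_hazard(t, a, b) - burr_pdf(t, a, b) / burr_survival(t, a, b)) < 1e-12
```

The closed-form survival function is what makes percentile and likelihood calculations tractable under censoring.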
Figs 1 and 2 illustrate the hazard function of the Burr XII distribution with different parameters. As shown, the hazard function is non-monotonic and can accommodate various shapes. According to [14,15], due to the shapes of h(t), the Burr XII distribution is widely used in quality control, reliability analysis, biological and life test studies, as well as in economics and industrial tests.
1.2 Entropy measures of the Burr XII distribution
Shannon entropy: Let X be a random variable with the pdf given in Eq (1). The Shannon entropy of the Burr XII distribution, defined as S = −∫₀^∞ f(x) ln f(x) dx, is:

S = −ln(αβ) + (1 − 1/β)(γ + ψ(α)) + 1 + 1/α, (6)

where ψ(·) is the digamma function and γ = −ψ(1) ≈ 0.5772 is the Euler-Mascheroni constant.
Shannon entropy is one of the earliest and most commonly used entropy measures. This measure has proven effective in the study of communication systems. However, one significant disadvantage of the Shannon measure, particularly in the continuous case, is that it may be negative for certain probability distributions, complicating its interpretation as a measure of uncertainty. Various generalizations have been proposed to address the limitations of Shannon entropy.
Rényi entropy: [16] introduced a generalized entropy by extending the concepts of uncertainty and randomness. The Rényi entropy, which generalizes Shannon's entropy, is parameterized by a single parameter, p. As p approaches unity, it converges to the familiar Shannon entropy. A notable property of Rényi entropy is that in algorithms requiring entropy maximization, Rényi's entropy can be substituted directly for Shannon's, as both entropies reach their maximum under the same conditions [17]. The Rényi entropy is calculated using the following formula:

Ren(p) = (1/(1 − p)) ln ∫₀^∞ f^p(x) dx, p > 0, p ≠ 1, (7)

where p is a parameter that leads to a positive entropy. The Rényi entropy is known as the quadratic entropy when p = 2. For the Burr XII distribution, evaluating the integral gives

Ren(p) = (1/(1 − p)) [p ln(αβ) − ln β + ln Γ(a_p) + ln Γ(pα + (p − 1)/β) − ln Γ(p(α + 1))], with a_p = p − (p − 1)/β, (8)

where Γ(·) is the complete gamma function. Eq (8) exists if and only if a_p > 0 and pα + (p − 1)/β > 0, which is always satisfied if p > 1 and β ≥ 1, a condition considered in the subsequent simulation study.
Havrda and Charvát entropy: [18] proposed an extension of Rényi's entropy, known as Havrda and Charvát (HC) entropy, which is defined as:

HC(p) = (1/(2^(1−p) − 1)) [∫₀^∞ f^p(x) dx − 1], p > 0, p ≠ 1.

HC entropy is often used in the context of fuzzy set theory and information retrieval, offering robustness in cases with incomplete or uncertain information.
Arimoto (A) entropy: [19] suggested another generalization of Shannon entropy, defined as:

A(p) = (p/(p − 1)) [1 − (∫₀^∞ f^p(x) dx)^(1/p)], p > 0, p ≠ 1.
Tsallis (T) entropy: [20] generalized Shannon entropy and defined it as:

T(p) = (1/(p − 1)) [1 − ∫₀^∞ f^p(x) dx], p > 0, p ≠ 1.
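As p → 1, the Rényi, Arimoto, and Tsallis measures all reduce to Shannon entropy, while Havrda-Charvát reduces to Shannon entropy in bits (S/ln 2); these limits provide a convenient numerical cross-check. The stdlib-only Python sketch below (helper names are ours; the integrals are computed by a midpoint rule on the probability scale u = F(x), an implementation choice of this illustration) evaluates all five measures for a given (α, β):

```python
import math

def burr_pdf(x, a, b):
    return a * b * x ** (b - 1) * (1.0 + x ** b) ** (-(a + 1))

def burr_ppf(u, a, b):
    # inverse cdf: x = ((1-u)^(-1/a) - 1)^(1/b)
    return ((1.0 - u) ** (-1.0 / a) - 1.0) ** (1.0 / b)

def integral_fp(p, a, b, n=50000):
    # ∫ f^p dx = E[f(X)^(p-1)], midpoint rule on the u = F(x) scale
    return sum(burr_pdf(burr_ppf((i + 0.5) / n, a, b), a, b) ** (p - 1)
               for i in range(n)) / n

def shannon(a, b, n=50000):
    # S = -E[ln f(X)]
    return -sum(math.log(burr_pdf(burr_ppf((i + 0.5) / n, a, b), a, b))
                for i in range(n)) / n

def renyi(p, a, b):
    return math.log(integral_fp(p, a, b)) / (1.0 - p)

def havrda_charvat(p, a, b):
    return (integral_fp(p, a, b) - 1.0) / (2.0 ** (1.0 - p) - 1.0)

def arimoto(p, a, b):
    return p / (p - 1.0) * (1.0 - integral_fp(p, a, b) ** (1.0 / p))

def tsallis(p, a, b):
    return (1.0 - integral_fp(p, a, b)) / (p - 1.0)

# limiting behavior as p -> 1
a, b = 2.0, 1.5
S = shannon(a, b)
for H in (renyi, arimoto, tsallis):
    assert abs(H(1.01, a, b) - S) < 0.05
# Havrda-Charvat converges to Shannon entropy in bits, S / ln 2
assert abs(havrda_charvat(1.01, a, b) - S / math.log(2.0)) < 0.05
```

These numerical values can also be checked against the closed forms given above for the Burr XII distribution.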
Several researchers have studied entropy estimates for different life distributions. [11,21] used progressive censoring to investigate entropy in the Burr XII distribution based on ranked set sampling. [22] addressed entropy estimates for the Rayleigh distribution using doubly generalized Type-II hybrid censoring. [23] considered entropy estimators for the inverse Lomax distribution via a multiple censored scheme. [24,25] used non-informative priors to estimate the Shannon entropy of the Lomax distribution. Additionally, [26] evaluated the performance of maximum likelihood and Bayesian models under progressively censored samples. [27] applied Bayesian methods to Shannon entropy for the Burr XII distribution using progressive Type-II censored data. [28,29] evaluated the accuracy of estimators using entropy measures for the Log-Logistic distribution. [30,31] also examined the Shannon entropy of the inverse Weibull distribution under progressive first-failure censoring, comparing credible intervals to asymptotic intervals. [32] used Monte Carlo simulations to illustrate Shannon's entropy estimates for progressively censored Maxwell distributions. In this study, we explore these entropy measures within the context of the Burr XII distribution, which is known for its versatility in modeling non-monotonic failure rates.
1.3 Progressive Type-II censoring scheme
In reliability studies, manufacturers seek to understand the failure time distributions of their products to ensure they meet high-quality standards and have a long lifespan. This understanding is typically gained through life-testing experiments. However, such experiments can be challenging due to time constraints and associated costs, especially when the experiment must be stopped before all items fail. The data obtained from such prematurely ended experiments are known as censored samples. Censoring is a practical technique used in life-testing experiments to save time and money, although it can lead to losing potentially essential data.
In the context of life-testing experiments, two widely recognized approaches are Type-I and Type-II censoring. With Type-I censoring, the experiment is terminated once a specific time has passed. In contrast, Type-II censoring involves ending the experiment only after a certain number of failures. However, these conventional schemes do not allow for the removal of surviving items during the experiment other than at the final termination point.
To address these limitations, progressive Type-II censoring allows for the intermediate removal of surviving units throughout the experiment, providing a more flexible and practical approach. With this type of censoring, n independent and identically distributed items are simultaneously placed in a life-testing experiment, and only m (<n) failures are fully observed. The experiment progresses through m stages: after the first failure occurs, a predetermined number of surviving units, R1, are randomly selected from the remaining n−1 units, leaving n−1−R1 surviving items. When the second item fails, the number of surviving units becomes n−2−R1, and another sample of size R2 is randomly selected and removed from the remaining units. This process continues until m failures are observed, and all the remaining surviving units are removed from the experiment. It is assumed that the lifetimes of these n units are independent and identically distributed with a common distribution function F(x). Further, n, m, and the censoring scheme (R1, R2, …, Rm) are all predetermined. If R1 = R2 = ⋯ = R(m−1) = 0, then Rm = n−m, corresponding to Type-II censoring. If R1 = R2 = ⋯ = Rm = 0, then m = n, which represents the complete data set.
This method is particularly useful when units need to be removed from the test due to practical considerations, such as reallocating them for other purposes or observing degradation in a different context. Progressive Type-II censoring, a generalized form of the traditional Type-II censoring method, empowers researchers to adjust it to suit various experimental needs. If all Ri values are set to zero, the scheme reduces to standard Type-II censoring, where the experiment continues until all m failures are observed without any intermediate removals.
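The removal process described above can be simulated with the uniform-transformation algorithm of Balakrishnan and Sandhu; a minimal stdlib-only Python sketch (the helper names are ours) is:

```python
import random

def burr_ppf(u, a, b):
    # inverse Burr XII cdf: x = ((1-u)^(-1/a) - 1)^(1/b)
    return ((1.0 - u) ** (-1.0 / a) - 1.0) ** (1.0 / b)

def progressive_sample(n, R, a, b, rng=random):
    """Progressively Type-II censored Burr XII sample via the
    Balakrishnan-Sandhu uniform-transformation algorithm."""
    m = len(R)
    assert n == m + sum(R), "scheme must satisfy n = m + sum(R)"
    w = [rng.random() for _ in range(m)]
    # v[i] = w[i]^(1 / (i+1 + R_m + ... + R_{m-i})), 0-based i
    v = [w[i] ** (1.0 / (i + 1 + sum(R[m - 1 - i:]))) for i in range(m)]
    u, prod = [], 1.0
    for i in range(m):
        prod *= v[m - 1 - i]
        u.append(1.0 - prod)          # u is increasing in (0, 1)
    return [burr_ppf(ui, a, b) for ui in u]

# example: n = 20 units, observe m = 10 failures, removing two survivors
# at each of the first five failure times
sample = progressive_sample(20, [2] * 5 + [0] * 5, 2.0, 1.5, rng=random.Random(3))
assert len(sample) == 10
assert all(x0 < x1 for x0, x1 in zip(sample, sample[1:]))
```

The returned failure times are ordered, mirroring the progressively censored order statistics X(1:m:n) < ⋯ < X(m:m:n).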
The flexibility and practicality of progressive Type-II censoring have led to increased interest in this method, especially with the availability of high-speed computing, which facilitates extensive simulation studies and more efficient data collection. For a more detailed discussion of progressive censoring schemes, [32] provides a comprehensive overview.
This is the first study, to our knowledge, to specifically investigate these five entropy estimators within the context of the Burr XII distribution using progressively Type-II censored data. Our findings, including analytical expressions for each entropy measure, maximum likelihood estimators (MLEs), two-sided approximate confidence intervals for all five entropy indices, and numerical comparisons, provide valuable insights into the most effective entropy estimator.
This paper is composed of six sections. Sect 2 discusses the maximum likelihood estimation of the Burr XII distribution parameters under progressive Type-II censoring, as well as the derivation of maximum likelihood estimators for the five entropy measures: Shannon, Rényi, Havrda-Charvát, Arimoto, and Tsallis. The delta method is employed in Sect 3 to derive asymptotic confidence intervals for each of the five entropy measures. Sect 4 comprehensively compares the entropy estimators through a simulation study, analyzing their performance in terms of bias, variance, coverage probability, and confidence interval length under different censoring schemes. Sect 5 applies the proposed methods to real-life data from the Wisconsin Breast Cancer Database (WBCD), using the perimeter-worst biomarker to assess the uncertainty between benign and malignant patient groups. Finally, Sect 6 concludes the paper by summarizing the key findings and emphasizing the practical applicability of the entropy measures.
2 Maximum likelihood estimation
The maximum likelihood estimation (MLE) method is often regarded as one of the most powerful and widely accepted approaches for drawing statistical inferences due to its consistency, sufficiency, invariance, and asymptotic efficiency. While MLE can be computationally intensive in some cases, advances in computational power and software have made it more accessible and feasible for complex models and large data sets.
In this section, we focus on the estimation of the parameters of the Burr XII distribution under the progressive Type-II censoring scheme. We begin by deriving the MLEs for the shape parameters α and β, which are essential for the subsequent analysis of the entropy measures. These estimators form the foundation for calculating the entropy measures and constructing their associated confidence intervals, ensuring that the characteristics of the censoring scheme are accurately reflected in the results.
2.1 Model description
Suppose that n independent units are placed in a life-testing experiment and that their lifetimes follow the Burr XII distribution with parameters α and β, with the pdf and cdf as shown in Eqs (1) and (2). The corresponding numbers of units removed from the test are denoted R1, R2, …, Rm. Let X(1:m:n) < X(2:m:n) < ⋯ < X(m:m:n) denote the above-mentioned m progressively Type-II censored order statistics. For simplicity of notation, we will write Xi to represent X(i:m:n). The likelihood function based on the progressively Type-II censored sample (see [32]) is given by:

L(α, β) = C ∏_{i=1}^m f(x_i) [1 − F(x_i)]^(R_i),

where C = n(n − R1 − 1)(n − R1 − R2 − 2) ⋯ (n − R1 − ⋯ − R(m−1) − m + 1).
It is usually easier to maximize the logarithm of the likelihood function rather than the likelihood function itself. The log-likelihood function (up to an additive constant) is:

ℓ(α, β) = m ln(αβ) + (β − 1) Σ_{i=1}^m ln x_i − Σ_{i=1}^m [α(R_i + 1) + 1] ln(1 + x_i^β). (16)
The MLEs of the parameters α and β can be obtained by differentiating the log-likelihood function (16) with respect to α and β and equating the normal equations to 0 as follows:

∂ℓ/∂α = m/α − Σ_{i=1}^m (R_i + 1) ln(1 + x_i^β) = 0, (17)

∂ℓ/∂β = m/β + Σ_{i=1}^m ln x_i − Σ_{i=1}^m [α(R_i + 1) + 1] (x_i^β ln x_i)/(1 + x_i^β) = 0. (18)

Note that Eqs (17) and (18) lead to a one-dimensional optimization problem. In fact, from (17) we have

α̂(β) = m / Σ_{i=1}^m (R_i + 1) ln(1 + x_i^β). (19)

When (19) is plugged into (18), the system reduces to a one-dimensional nonlinear normal equation in β:

m/β + Σ_{i=1}^m ln x_i − Σ_{i=1}^m [α̂(β)(R_i + 1) + 1] (x_i^β ln x_i)/(1 + x_i^β) = 0. (20)

We use the Newton-Raphson algorithm to solve (20) for the MLE of β. Inserting this value into (19), we obtain the MLE of α.
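This profile-likelihood procedure can be sketched in Python (stdlib only; the slope used in the Newton step is computed numerically, a simplification of this sketch relative to an analytical derivative):

```python
import math

def burr_profile_mle(x, R, beta0=1.0, tol=1e-10, max_iter=100):
    """MLEs of (alpha, beta) under progressive Type-II censoring:
    Newton-Raphson on the one-dimensional profile equation for beta,
    then alpha recovered in closed form."""
    m = len(x)

    def alpha_hat(beta):
        # alpha = m / sum((R_i + 1) * ln(1 + x_i^beta))
        return m / sum((r + 1) * math.log(1.0 + xi ** beta) for xi, r in zip(x, R))

    def score(beta):
        # profile score: dlogL/dbeta evaluated at alpha = alpha_hat(beta)
        a = alpha_hat(beta)
        s = m / beta
        for xi, r in zip(x, R):
            t = xi ** beta
            s += math.log(xi) * (1.0 - (a * (r + 1) + 1.0) * t / (1.0 + t))
        return s

    beta = beta0
    for _ in range(max_iter):
        h = 1e-6 * max(1.0, abs(beta))
        slope = (score(beta + h) - score(beta - h)) / (2.0 * h)  # numerical slope
        step = score(beta) / slope
        while beta - step <= 0.0:      # damping: keep beta positive
            step *= 0.5
        beta -= step
        if abs(step) < tol:
            break
    return alpha_hat(beta), beta
```

With β̂ in hand, α̂ follows in closed form, which is why the optimization is only one-dimensional.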
Now that α̂ and β̂ have been estimated, we utilize the invariance property of MLEs to obtain the MLEs for the entropy measures. Consequently, the MLE of S, denoted by Ŝ, is obtained by inserting α̂ and β̂ in Eq (6) as follows:

Ŝ = −ln(α̂β̂) + (1 − 1/β̂)(γ + ψ(α̂)) + 1 + 1/α̂.

The other entropy estimators, denoted by R̂en(p), ĤC(p), Â(p), and T̂(p), are obtained in a similar fashion by substituting α̂ and β̂ into the corresponding closed-form expressions.
3 Confidence intervals for entropy measures
The asymptotic confidence intervals (CIs) for entropy measures are derived to quantify the uncertainty of the estimates obtained from the Burr XII distribution under progressive Type-II censoring. These intervals are based on the normal approximation of the maximum likelihood estimators.
To construct these confidence intervals, we apply the delta method. This approach, combined with the observed Fisher information matrix, allows us to estimate the variance and covariance of the entropy measures, providing the necessary components for the CI calculation. For a detailed explanation of the delta method and its applications, see [33].
The asymptotic confidence interval for each entropy measure is obtained by:

Ĥ ± z_(δ/2) √(V̂(Ĥ)),

where Ĥ is the estimated entropy measure, z_(δ/2) denotes the upper (δ/2)100th percentile of the standard normal distribution, and V̂(Ĥ) represents the estimated variance of Ĥ derived through the delta method.
3.1 Delta method and Fisher information matrix
The variance of each entropy estimator is computed using the delta method, which approximates the variance as:

V̂(Ĥ) ≈ gᵀ I⁻¹(α̂, β̂) g,

where g = (∂H/∂α, ∂H/∂β)ᵀ is the gradient of the entropy measure H evaluated at (α̂, β̂), and I⁻¹(α̂, β̂) is the inverse of the observed Fisher information matrix at the estimated parameters α̂ and β̂. For the computation of the Fisher information matrix, the second derivatives of the log-likelihood function with respect to the parameters α and β are required. These derivatives have been obtained using Mathematica 13 and are detailed in the appendix (see Appendix A).
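The mechanics of this computation can be sketched generically: given any entropy measure H(α, β) and the log-likelihood, approximate the gradient and the observed information numerically and form gᵀI⁻¹g. The stdlib-only Python sketch below uses fully numerical derivatives, our own simplification (the paper uses analytical derivatives obtained with Mathematica):

```python
import math

def delta_var(entropy, loglik, a_hat, b_hat, h=1e-4):
    """Delta-method variance g' I^{-1} g, with a numerical gradient of the
    entropy measure and a numerical observed information (negative Hessian
    of the log-likelihood), both evaluated at (a_hat, b_hat)."""
    ga = (entropy(a_hat + h, b_hat) - entropy(a_hat - h, b_hat)) / (2.0 * h)
    gb = (entropy(a_hat, b_hat + h) - entropy(a_hat, b_hat - h)) / (2.0 * h)

    def d2(i, j):
        # central second difference of loglik in directions i, j (0 = a, 1 = b)
        da = [h if i == 0 else 0.0, h if i == 1 else 0.0]
        db = [h if j == 0 else 0.0, h if j == 1 else 0.0]
        return (loglik(a_hat + da[0] + db[0], b_hat + da[1] + db[1])
                - loglik(a_hat + da[0] - db[0], b_hat + da[1] - db[1])
                - loglik(a_hat - da[0] + db[0], b_hat - da[1] + db[1])
                + loglik(a_hat - da[0] - db[0], b_hat - da[1] - db[1])) / (4.0 * h * h)

    # observed information I = -Hessian; invert the 2x2 matrix explicitly
    i00, i01, i11 = -d2(0, 0), -d2(0, 1), -d2(1, 1)
    det = i00 * i11 - i01 * i01
    inv00, inv01, inv11 = i11 / det, -i01 / det, i00 / det
    return ga * ga * inv00 + 2.0 * ga * gb * inv01 + gb * gb * inv11

def entropy_ci(est, var, z=1.959964):
    """Two-sided 95% asymptotic CI: est +/- z * sqrt(var)."""
    half = z * math.sqrt(var)
    return est - half, est + half
```

Any of the five entropy measures can be passed in as the `entropy` argument.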
3.2 Confidence intervals for specific entropy measures
- Shannon Entropy (S): The variance of the Shannon entropy estimator, V̂(Ŝ), is derived from the Fisher information matrix and the partial derivatives of the Burr XII distribution. The confidence interval is then given by:

Ŝ ± z_(δ/2) √(V̂(Ŝ)). (26)

- Rényi Entropy (Ren): The confidence interval for Rényi entropy follows a similar approach. The variance, V̂(R̂en), is computed using the corresponding partial derivatives and the Fisher information matrix. The confidence interval is:

R̂en ± z_(δ/2) √(V̂(R̂en)). (27)

- Havrda-Charvát Entropy (HC): For Havrda-Charvát entropy, the variance V̂(ĤC) is obtained similarly, and the confidence interval is:

ĤC ± z_(δ/2) √(V̂(ĤC)). (28)

- Arimoto Entropy (A): The Arimoto entropy variance V̂(Â) is derived, leading to the confidence interval:

Â ± z_(δ/2) √(V̂(Â)). (29)

- Tsallis Entropy (T): Finally, for Tsallis entropy, the variance V̂(T̂) and the confidence interval are computed as:

T̂ ± z_(δ/2) √(V̂(T̂)). (30)
These confidence intervals provide a rigorous means to quantify the uncertainty associated with the entropy estimators under the progressively Type-II censoring scheme. For detailed mathematical derivations and the specific forms of the second derivatives used in the Fisher information matrix, refer to Appendix A.
4 Simulation study
This simulation study is designed to rigorously evaluate the performance of the maximum likelihood estimators of the five proposed entropy measures. The estimators are computed from various sets of progressive Type-II censored samples generated from the Burr XII distribution with parameters α and β, following the methodology described by [32].

The maximum likelihood estimates of the parameters α and β are obtained using Eqs (19) and (20). These estimates are then employed to calculate the five entropy measures: Rényi, Shannon, Havrda-Charvát, Arimoto, and Tsallis. To assess the performance of these estimators, we evaluate their absolute bias (Bias), asymptotic variance (Var), and the coverage probability and length of their 95% confidence intervals. The simulation study consists of 1000 replications. For the Burr XII distribution, two sets of values for (α, β) are considered. Three censoring schemes (Cs) are used in the simulation:
- Scheme 1 (Sc1): R2 = R3 = ⋯ = Rm = 1, and R1 = n − 2m + 1,
- Scheme 2 (Sc2): R2 = R3 = ⋯ = Rm = 2, and R1 = n − 3m + 2,
- Scheme 3 (Sc3): R2 = R3 = ⋯ = Rm = 0, and R1 = n − m.
The sample size used is n = 100, with several choices of the effective sample size m. The analysis is conducted for four values of the parameter p.
Results of the simulation are summarized in Tables 1–4. Due to space limitations, only some of the simulation results are presented. The remaining results exhibit similar patterns.
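For illustration, a scaled-down version of one simulation cell can be sketched as follows (complete sampling with all R_i = 0, bisection in place of Newton-Raphson, and far fewer replications than the study's 1000 — all simplifications of this sketch):

```python
import math, random

def sim_bias_shannon(alpha, beta, n=150, reps=60, seed=7):
    """Monte Carlo sketch of the Bias of the plug-in Shannon estimator
    under complete sampling (all R_i = 0, so m = n)."""
    rng = random.Random(seed)
    psi = lambda z, h=1e-5: (math.lgamma(z + h) - math.lgamma(z - h)) / (2.0 * h)
    g = 0.5772156649015329  # Euler-Mascheroni constant
    true_s = (-math.log(alpha * beta)
              + (1.0 - 1.0 / beta) * (g + psi(alpha)) + 1.0 + 1.0 / alpha)
    est = []
    for _ in range(reps):
        xs = [((1.0 - rng.random()) ** (-1.0 / alpha) - 1.0) ** (1.0 / beta)
              for _ in range(n)]
        a_of = lambda b: n / sum(math.log(1.0 + x ** b) for x in xs)
        def score(b):   # profile score in beta, alpha substituted in closed form
            a = a_of(b)
            return n / b + sum(math.log(x) * (1.0 - (a + 1.0) * x ** b / (1.0 + x ** b))
                               for x in xs)
        lo, hi = 0.05, 20.0   # bracket: score is positive at lo, negative at hi
        for _ in range(40):
            mid = 0.5 * (lo + hi)
            if score(mid) > 0.0:
                lo = mid
            else:
                hi = mid
        b_hat = 0.5 * (lo + hi)
        a_hat = a_of(b_hat)
        est.append(-math.log(a_hat * b_hat)
                   + (1.0 - 1.0 / b_hat) * (g + psi(a_hat)) + 1.0 + 1.0 / a_hat)
    return sum(est) / reps - true_s, true_s

bias, s_true = sim_bias_shannon(2.0, 1.5)
assert abs(bias) < 0.15 and s_true > 0.0
```

Replacing the complete-sample generator with a progressively censored one, and repeating over the three schemes and several m, reproduces the structure of the full study.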
4.1 Data analysis and comparison study
It is evident from Tables 1–4 that the Bias of the five entropy estimators consistently decreases as the effective sample size m increases across all schemes, particularly for the Rényi and Havrda-Charvát measures. The Tsallis and Arimoto estimators also demonstrate enhanced Bias reduction with increasing m. However, the Tsallis estimator displays higher Bias at smaller m and smaller p values, rendering it less dependable under those conditions.
The variance (Var) and confidence interval length (L) decrease as m increases. The Rényi estimator generally has the smallest variance across the different values of p, followed closely by Havrda-Charvát and Arimoto. Tsallis shows higher variance, especially at smaller m and smaller p, but improves as m increases. Shannon is unaffected by changes in p and consistently shows small Bias, variance, and interval length across all schemes.
Regarding coverage probability (Cov), most estimators approach the nominal level of 0.95 as m increases. Rényi and Havrda-Charvát consistently achieve coverage close to 0.95 across all scenarios, whereas Tsallis tends to perform poorly at smaller sample sizes and smaller values of p. Overall, the Rényi and Havrda-Charvát estimators offer the best balance of minimal Bias, low variance, and appropriate coverage probability, making them the most reliable across different scenarios.
These results suggest that the sample size and the parameter p must be considered when choosing an entropy measure for practical use. Shannon entropy is a strong choice due to its stability. However, Tsallis and Havrda-Charvát can also be reliable when used under favorable conditions (such as larger m, an appropriate p, and a favorable scheme).
5 Real life data analysis
For the diagnosis of breast cancer, we utilized summary features from digitized images of a fine needle aspirate (FNA) of breast masses, which serve as biomarkers. This section applies the proposed entropy measures to assess the uncertainty between benign and malignant patients using data from the Wisconsin Breast Cancer Database (WBCD), created by the University of Wisconsin [34]. The dataset consists of 569 observations and 30 features, with the variable "Diagnosis" serving as the gold standard, where B = benign and M = malignant. Among the 30 features, we selected the perimeter-worst biomarker due to its superior diagnostic performance. This biomarker demonstrates high sensitivity (0.920) and specificity (0.919), with a Youden Index of 0.839, outperforming the other biomarkers in differentiating between benign and malignant cases.
The legitimacy of the Burr XII model is assessed separately for the benign and malignant groups, with the parameters estimated by maximum likelihood within each group, using Kolmogorov-Smirnov (K-S), Anderson-Darling (A-D), and chi-squared tests. The results, presented in Table 5 at a significance level of 0.05, provide strong evidence that the Burr XII model fits both datasets well.
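The K-S distance used in such a goodness-of-fit check can be computed directly from the fitted Burr XII cdf; a minimal stdlib-only sketch (the helper name is ours) is:

```python
def ks_statistic(data, a, b):
    """Kolmogorov-Smirnov distance between the empirical cdf of `data`
    and the fitted Burr XII cdf F(x) = 1 - (1 + x^b)^-a."""
    xs = sorted(data)
    n = len(xs)
    fitted = [1.0 - (1.0 + x ** b) ** (-a) for x in xs]
    d_plus = max((i + 1) / n - f for i, f in enumerate(fitted))
    d_minus = max(f - i / n for i, f in enumerate(fitted))
    return max(d_plus, d_minus)

# data placed exactly at Burr XII(2, 1.5) midpoint quantiles:
# the K-S distance is then exactly 1/(2n)
n = 100
grid = [((1.0 - (i + 0.5) / n) ** (-0.5) - 1.0) ** (1.0 / 1.5) for i in range(n)]
assert abs(ks_statistic(grid, 2.0, 1.5) - 0.5 / n) < 1e-9
```

Comparing the observed distance to the K-S critical value at the 0.05 level gives the test reported in Table 5.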
Additionally, the fitted pdfs and Q-Q plots for the benign and malignant datasets, shown in Figs 3 and 4 (benign group) and Figs 5 and 6 (malignant group), respectively, further confirm the Burr XII distribution as a suitable model for both datasets.
In this analysis, we apply the three different censoring schemes (Sc) described in Sect 4 to estimate the entropy measures for both the benign and malignant groups. The sample sizes for the entropy estimates were m1 = 119 for the benign group and m2 = 90 for the malignant group. We calculated the entropy measures and presented the results, including MLEs, Bias, asymptotic length (L), and asymptotic variance (Var), in Table 6.
Table 6 reveals that Sc1 and Sc3 produce more precise estimates for all entropy measures, particularly in the benign group, where lower Bias and variance are observed compared to Sc2. This pattern holds across benign and malignant datasets, with the differences in Bias and variance being more pronounced in the malignant group, where Sc2 again produces less reliable estimates.
Shannon entropy presents a notable limitation in this analysis, as it produced negative MLE values for both the benign and malignant datasets. This complicates its interpretation, particularly for continuous distributions like the Burr XII, and raises questions about its suitability for this context. The negative values may indicate potential misalignment between the data and the assumed distribution model, making Shannon entropy less reliable for measuring uncertainty in this dataset.
Among the entropy measures examined, Rényi entropy emerged as the most reliable indicator of uncertainty, consistently producing positive values across all schemes. This contrasts with Shannon entropy, which yielded negative MLE values, and the Arimoto entropy, which exhibited higher Bias and variance under Sc2. Rényi entropy’s consistent performance across schemes and stability in both benign and malignant groups suggests its robustness in distinguishing between these two populations. Scheme 3 generally provided the most precise estimates, whereas Sc2 produced the least reliable outcomes across all measures.
The perimeter-worst biomarker demonstrated lower uncertainty in the benign group compared to the malignant group across all entropy measures, highlighting its predictive power in distinguishing between diseased (malignant) and non-diseased (benign) cases. This observation supports the effectiveness of the perimeter-worst biomarker as a diagnostic tool for breast cancer, particularly when paired with robust entropy measures like Rényi, which offer greater reliability in quantifying uncertainty in this context.
6 Concluding remarks
This study comprehensively evaluated five entropy measures, Shannon, Rényi, Havrda-Charvát, Arimoto, and Tsallis, under progressive Type-II censoring schemes for the Burr XII distribution. The simulation results indicated that Rényi and Havrda-Charvát consistently outperformed the other measures in terms of Bias, variance, and coverage probability, particularly as sample sizes increased. Tsallis and Arimoto improved with larger sample sizes but exhibited higher Bias and variance at smaller sample sizes.
Overall, the simulation results and the real-life example both suggest that Rényi entropy is the most robust and reliable measure of uncertainty, especially for larger sample sizes and across different censoring schemes. Shannon entropy’s stability makes it a strong choice, but its performance in terms of Bias and variance does not surpass that of Rényi or Havrda-Charvát. For practical applications, the choice of entropy estimator should consider both the sample size and the value of p, with Rényi and Havrda-Charvát offering the best overall balance between Bias, variance, and coverage probability.
Appendix A
The entries for the approximate confidence intervals are given by the following equations, where I(α, β) is the observed information matrix and the variance of each entropy estimator is V̂(Ĥ) = gᵀ I⁻¹(α̂, β̂) g, with g = (∂H/∂α, ∂H/∂β)ᵀ.

Shannon entropy

∂S/∂α = −1/α − 1/α² + (1 − 1/β) ψ′(α),

∂S/∂β = −1/β + (γ + ψ(α))/β²,

where ψ(·) is the digamma function, ψ′(·) is the first derivative of the digamma function (the trigamma function), and γ is the Euler-Mascheroni constant.

Rényi entropy

The partial derivatives ∂Ren(p)/∂α and ∂Ren(p)/∂β are obtained by differentiating Eq (8); they involve the digamma function through the derivatives of the log-gamma terms.

Havrda and Charvát entropy

The partial derivatives ∂HC(p)/∂α and ∂HC(p)/∂β involve the harmonic number HN(·) and EulerGamma (γ), as returned by Mathematica 13.

Arimoto entropy

The partial derivatives ∂A(p)/∂α and ∂A(p)/∂β are obtained analogously by differentiating the Arimoto closed form.

Tsallis entropy

The partial derivatives ∂T(p)/∂α and ∂T(p)/∂β are obtained analogously by differentiating the Tsallis closed form.
References
- 1. Cover TM. Elements of information theory. Wiley; 1999.
- 2. Helu A, Samawi H, Alslman M. Comparing some iterative methods of parameter estimation for progressively censored Lomax data. Thailand Statistician. 2024;22(3):533–46.
- 3. Robinson DW. Entropy and uncertainty. Entropy. 2008;10(4):493–506.
- 4. Singh VP. Entropy theory and its application in environmental and water engineering. Wiley; 2013.
- 5. Golan A. Information and entropy econometrics: a review and synthesis. Found Trends Economet. 2008;2(1–2):1–145.
- 6. Helu A, Aldabbas E, Yasin O. Adaptive type-II progressive hybrid censoring and its impact on Rayleigh data overlap estimation. Statist Optimiz Inf Comput. 2025;14(1):20–41.
- 7. Amigó JM, Balogh SG, Hernández S. A brief review of generalized entropies. Entropy (Basel). 2018;20(11):813. pmid:33266537
- 8. Namdari A, Li Z (Steven). A review of entropy measures for uncertainty quantification of stochastic processes. Adv Mech Eng. 2019;11(6):168781401985735.
- 9. Aydın S, Güdücü Ç, Kutluk F, Öniz A, Özgören M. The impact of musical experience on neural sound encoding performance. Neurosci Lett. 2019;694:124–8. pmid:30503922
- 10. Çetin FH, Barış Usta M, Aydın S, Güven AS. A case study on EEG analysis: embedding entropy estimations indicate the decreased neuro-cortical complexity levels mediated by methylphenidate treatment in children with ADHD. Clin EEG Neurosci. 2022;53(5):406–17. pmid:34923863
- 11. Samawi H, Helu A. On the inference of entropy measures under different sampling schemes. Statist Optimiz Inf Comput. 2025.
- 12. Soliman AA, Abou-Elheggag NA, Abd ellah AH, Modhesh AA. Bayesian and non-Bayesian inferences of the Burr-XII distribution for progressive first-failure censored data. Metron. 2012;70:1–25.
- 13. Qin X, Gui W. Statistical inference of Burr-XII distribution under progressive Type-II censored competing risks data with binomial removals. J Comput Appl Math. 2020;378:112922.
- 14. Lio YL, Tsai TR, Wu SJ. Acceptance sampling plans from truncated life tests based on the Birnbaum–Saunders distribution for percentiles. Commun Statist-Simulat Comput. 2009;39(1):119–36.
- 15. Lio YL, Tsai T-R, Wu S-J. Acceptance sampling plans from truncated life tests based on the Burr type XII percentiles. J Chin Inst Indust Eng. 2010;27(4):270–80.
- 16. Rényi A. On measures of entropy and information. In: Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability. 1961. p. 547–62.
- 17. Principe JC, Xu D, Erdogmus D. Renyi's entropy, divergence and their nonparametric estimators. In: Information theoretic learning: Renyi's entropy and kernel perspectives. 2010. p. 47–102.
- 18. Havrda J, Charvat F. Quantification method of classification processes. Concept of structural a-entropy. Kybernetika. 1967;3(1):30–5.
- 19. Arimoto S. Information-theoretical considerations on estimation problems. Inf Control. 1971;19(3):181–94.
- 20. Tsallis C. Possible generalization of Boltzmann-Gibbs statistics. J Statist Phys. 1988;52:479–87.
- 21. Helu A. Quantifying overlap in Burr XII distribution: adaptive Type-II progressive hybrid censoring approach. Lobachevskii J Math. 2024;45(9):4111–26.
- 22. Cho Y, Sun H, Lee K. An estimation of the entropy for a Rayleigh distribution based on doubly-generalized Type-II hybrid censored samples. Entropy. 2014;16(7):3655–69.
- 23. Bantan RAR, Elgarhy M, Chesneau C, Jamal F. Estimation of entropy for inverse lomax distribution under multiple censored data. Entropy (Basel). 2020;22(6):601. pmid:33286373
- 24. Dong G, Shakhatreh MK, He D. Bayesian analysis for the Shannon entropy of the Lomax distribution using noninformative priors. J Statist Comput Simulat. 2023;94(6):1317–38.
- 25. Ren H, Gong Q, Hu X. Estimation of entropy for generalized Rayleigh distribution under progressively type-II censored samples. Axioms. 2023;12(8):776.
- 26. Wang X, Gui W. Bayesian estimation of entropy for burr Type XII distribution under progressive Type-II censored data. Mathematics. 2021;9(4):313.
- 27. Shrahli M, El-Saeed AR, Hassan AS, Elbatal I, Elgarhy M. Estimation of entropy for log-logistic distribution under progressive Type II censoring. J Nanomaterials. 2022;2022(1):2739606.
- 28. Yu J, Gui W, Shan Y. Statistical inference on the shannon entropy of inverse weibull distribution under the progressive first-failure censoring. Entropy. 2019;21(12):1209.
- 29. Kumar K, Kumar I, Ng HKT. On estimation of Shannon’s entropy of Maxwell distribution based on progressively first-failure censored data. Stats. 2024;7(1):138–59.
- 30. Al-Hussaini EK, Jaheen ZF. Bayesian estimation of the parameters, reliability and failure rate functions of the Burr type XII failure model. J Statist Comput Simulat. 1992;41(1–2):31–40.
- 31. Cramer E, Bagh C. Minimum and maximum information censoring plans in progressive censoring. Commun Statist - Theory Methods. 2011;40(14):2511–27.
- 32. Balakrishnan N, Cramer E. The art of progressive censoring. Statist Indust Technol. 2014;138.
- 33. Casella G, Berger R. Statistical inference. CRC Press; 2024.
- 34. Sigmon D, Fatima S. Fine needle aspiration. Treasure Island (FL): StatPearls Publishing; 2022.