
Bayesian and non-Bayesian inference for the logistic-exponential distribution using improved adaptive type-II progressively censored data

  • Subhankar Dutta,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Division of Mathematics, School of Advanced Sciences, Vellore Institute of Technology, Chennai, India

  • Hana N. Alqifari ,

    Roles Conceptualization, Methodology, Project administration, Resources, Writing – original draft, Writing – review & editing

    hn.alqifari@qu.edu.sa

    Affiliation Department of Statistics and Operation Research, College of Science, Qassim University, Buraydah, Saudi Arabia

  • Amani Almohaimeed

    Roles Conceptualization, Investigation, Methodology, Resources, Writing – original draft, Writing – review & editing

    Affiliation Department of Statistics and Operation Research, College of Science, Qassim University, Buraydah, Saudi Arabia

Abstract

Improved adaptive type-II progressive censoring schemes (IAT-II PCS) are increasingly used to estimate parameters and reliability characteristics of lifetime distributions, leading to more accurate and reliable estimates. The logistic-exponential distribution (LED), a flexible distribution whose hazard rate can take five different shapes, is employed in several fields, including lifetime, financial, and environmental data analysis. This research aims to enhance parameter and reliability estimation for the LED under IAT-II PCS. By developing suitable statistical inference methods, we can better understand the behavior of failure times, enable more accurate decision-making, and improve the overall reliability of the model. Both classical and Bayesian techniques are considered. The classical approach constructs maximum likelihood estimators of the model parameters and their asymptotic covariance matrix, and then estimates the distribution's reliability through the survival and hazard functions; the delta method is used to build approximate confidence intervals. In the Bayesian approach, prior information about the LED parameters is combined with the likelihood via Bayes' theorem to obtain the posterior distribution, from which estimates of the parameters and of the survival and hazard functions follow. Extensive simulation studies and a real-data application assess the effectiveness of the proposed methods and evaluate their performance against existing methods.

1 Introduction

Reliability analysis has important implications across several fields, including engineering, health, and finance, for assuring system safety and performance. One of the challenges in reliability analysis is censored data, which arise when a component's failure time is not fully observed, either because the component has not yet failed or because the experiment was terminated before it failed. Traditional censoring schemes, such as type-I, type-II, and hybrid schemes, have been thoroughly investigated; however, they lack the flexibility to remove units at intermediate time points. To address this limitation, Cohen [1] introduced the progressive censoring scheme (PCS). In a PCS, units are withdrawn from the experiment at different time points, with the number of units withdrawn at each point specified in advance. This allows greater flexibility in experimental design, improved efficiency for certain research questions, and a reduction in test duration and cost compared with conventional censoring schemes. PCSs have been used in a variety of research settings, including the study of product lifetimes, the reliability of engineering systems, and the survival of patients with diseases. For more details on PCS, one may refer to [2–5]. However, PCSs can be more complex to design and analyze, and they may not be suitable for all research questions.

The increased reliability of products resulting from technological advances in manufacturing can lengthen experiments run under progressive type-II censoring. To address this limitation, Kundu and Joarder [6] proposed the type-II progressive hybrid censoring scheme, which has since received considerable attention ([7–9]). Nevertheless, under this scheme the experimental duration is fixed, so the effective sample size is random; it can even reach zero, diminishing the efficacy of statistical inference. To address this inefficiency, Ng et al. [10] introduced the adaptive type-II progressive censoring scheme (AT-II PCS), a combination of type-II progressive censoring and type-I censoring that ends once a predetermined number of failure times has been observed. For more details on this scheme, one may refer to [11–16]. Nonetheless, the experiment can still take an exceptionally long time if the test units are highly reliable. To overcome this, Yan et al. [17] introduced the improved adaptive type-II progressive censoring scheme (IAT-II PCS). By modifying the AT-II PCS, this strategy circumvents the problem of lengthy experimental periods and guarantees that tests conclude within a predetermined timeframe; hence, it can be employed when studies must finish by a designated time. In the statistical literature, the IAT-II PCS has been explored by only a few researchers (see [18, 19]), and there remains ample opportunity for further work on parameter estimation for various distributions under this scheme.

Lan and Leemis [20] introduced the logistic-exponential distribution (LED). The probability density function (PDF) and cumulative distribution function (CDF) of the LED are defined, respectively, as

f(x; \nu, \eta) = \frac{\nu \eta \, e^{\nu x} (e^{\nu x} - 1)^{\eta - 1}}{[1 + (e^{\nu x} - 1)^{\eta}]^{2}}, \quad x > 0, \ \nu, \eta > 0, \qquad (1)

and

F(x; \nu, \eta) = \frac{(e^{\nu x} - 1)^{\eta}}{1 + (e^{\nu x} - 1)^{\eta}}, \quad x > 0. \qquad (2)

Accordingly, the survival function (SF) is given by

S(x; \nu, \eta) = \frac{1}{1 + (e^{\nu x} - 1)^{\eta}}. \qquad (3)

The LED displays a wide variety of hazard rate shapes, including constant, increasing, decreasing, bathtub, and upside-down bathtub. The hazard rate function (HRF) is defined as

h(x; \nu, \eta) = \frac{\nu \eta \, e^{\nu x} (e^{\nu x} - 1)^{\eta - 1}}{1 + (e^{\nu x} - 1)^{\eta}}. \qquad (4)

The PDF and HRF of the LED for different parameter values are depicted in Fig 1. For η > 1, the hazard rate function is increasing, whereas for η < 1 it is decreasing; when η = 1, it remains constant (equal to ν). Furthermore, when η < 1 and ν is small, the hazard rate exhibits a bathtub shape, and when η > 1 and ν is large, it takes an inverted bathtub form. Differently shaped hazard rate functions carry different meanings in reliability engineering, biology, and statistical modeling. Consider a high failure rate in infancy that falls with age, stays roughly constant for a while, and then rises again: a bathtub-shaped hazard rate model depicts this kind of scenario. A decreasing hazard rate is often seen in the early stages of an operating system and may suit earthquake data sets. Increasing hazard rate models are employed in operations and supply chain management research because of their consequences for objective functions used to describe stochastic events. In biology, a disease whose mortality peaks after a certain time and then steadily declines can be represented by an upside-down bathtub-shaped hazard rate model. The LED admits all of these hazard rate shapes, and this adaptability makes it useful in several research domains, including clinical, reliability, and survival studies. This work examines censored samples obtained under an improved adaptive type-II progressive censoring scheme, with the objective of estimating the unknown parameters of the LED.
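The functions (1)-(4) are straightforward to evaluate numerically. The sketch below, written in Python for illustration (the paper's own computations were done in R), encodes the Lan-Leemis form S(x) = 1/(1 + (e^{νx} − 1)^η); as a consistency check, it reproduces the values S(0.25) ≈ 0.7201 and h(0.25) ≈ 1.6111 at ν = 1.5, η = 1.2 used later in the simulation study.

```python
import math

def led_sf(x, nu, eta):
    """Survival function S(x) = 1 / (1 + (e^{nu x} - 1)^eta)."""
    return 1.0 / (1.0 + (math.exp(nu * x) - 1.0) ** eta)

def led_pdf(x, nu, eta):
    """Density f(x) = nu*eta*e^{nu x}(e^{nu x}-1)^{eta-1} / (1 + (e^{nu x}-1)^eta)^2."""
    w = math.exp(nu * x) - 1.0
    return nu * eta * math.exp(nu * x) * w ** (eta - 1.0) / (1.0 + w ** eta) ** 2

def led_hrf(x, nu, eta):
    """Hazard rate h(x) = f(x) / S(x); constant and equal to nu when eta = 1."""
    return led_pdf(x, nu, eta) / led_sf(x, nu, eta)
```

For example, `led_hrf(t, 1.5, 1.0)` returns 1.5 for any t > 0, illustrating the constant-hazard (exponential) special case η = 1.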

Fig 1. PDF and HRF plots of the LED for different values of ν and η.

https://doi.org/10.1371/journal.pone.0298638.g001

To the best of our knowledge, no research has yet addressed the estimation of the model parameters and reliability characteristics of the LED under IAT-II PCS, and this work aims to fill that gap. First, the parameters, SF, and HRF of the LED are estimated using the frequentist methodology, namely maximum likelihood estimation (MLE), and the asymptotic confidence intervals (ACIs) are computed for the LED parameters. The second aim is to obtain the Bayesian estimates (BEs) of the unknown LED parameters using gamma priors, together with the corresponding highest posterior density (HPD) credible intervals. Because closed-form expressions for the BEs are unavailable, Markov chain Monte Carlo (MCMC) techniques are employed to sample from the intractable posterior, which allows computation of the BEs and the associated HPD credible intervals. The performance of the suggested approaches is evaluated through a comprehensive Monte Carlo simulation, focusing on indicators such as mean square error (MSE), average length, coverage probability, and average bias. Data from a real-world engineering application are also analyzed.

The structure of the paper is as follows: Section 2 introduces the improved adaptive type-II progressive censoring scheme (IAT-II PCS). Section 3 focuses on frequentist approaches for estimating the parameters of the LED model. The Bayes estimators are discussed in Section 4. Section 5 presents the Monte Carlo simulation comparing the proposed estimates. A real-life data set is used in Section 6 to demonstrate the practical use of the LED with IAT-II PCS in real-world phenomena. Finally, Section 7 offers some concluding remarks.

2 Model formulation

The IAT-II PCS arises in a reliability investigation as follows. Assume the lifetimes follow a distribution with probability density function f(x) and cumulative distribution function F(x), and that the experiment starts with n identical units. A PCS (R_1, R_2, ⋯, R_m), with R_i ≥ 0, and a predetermined number of failures m (m < n) are fixed in advance, although the values R_i may be modified in light of the experiment's progress. In addition, two time thresholds T_1 and T_2 are prefixed, with T_1, T_2 ∈ (0, ∞) and T_1 < T_2. Under the IAT-II PCS, there are three observed cases:

  1. Case I: When Xm:m:n < T1, the associated sample is denoted as X1:m:n, X2:m:n, ⋯, Xm:m:n, which refers to type-II PCS.
  2. Case II: When T1 < Xm:m:n < T2, the related sample is denoted as X1:m:n, X2:m:n, ⋯, Xm:m:n, which refers to AT-II PCS.
  3. Case III: When T2 < Xm:m:n, the termination time becomes T2; the removals are set to Ri = 0 for i = E1 + 1, ⋯, E2, and the remaining R* = n − E2 − Σ_{i=1}^{E1} Ri surviving units are withdrawn at T2. The IAT-II PCS sample in this instance is X1:m:n, X2:m:n, ⋯, XE2:m:n. In every case, the experiment conducted under the IAT-II PCS concludes at time T* = min{Xm:m:n, T2}. Fig 2 illustrates the three scenarios visually.
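The case logic above can be condensed into a few lines. The helper below is a hypothetical illustration: given the m-th ordered failure time X_{m:m:n} and the prefixed thresholds T1 < T2, it returns the observed case and the termination time T* = min{X_{m:m:n}, T2}.

```python
def classify_iat2(x_m, t1, t2):
    """Classify an IAT-II PCS experiment from the m-th ordered failure
    time x_m and the prefixed thresholds t1 < t2 (illustrative helper)."""
    if x_m < t1:
        return "Case I", x_m    # ordinary type-II PCS: plan (R_1,...,R_m) untouched
    elif x_m < t2:
        return "Case II", x_m   # AT-II PCS: removals after t1 set to zero
    else:
        return "Case III", t2   # test truncated at t2, survivors removed there
```

For instance, with (T1, T2) = (0.4, 0.8), an m-th failure at 0.6 falls under Case II and the test ends at 0.6, whereas an m-th failure at 1.2 triggers Case III and the test ends at 0.8.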
Fig 2. Schematic representation of the improved adaptive type-II progressive censored scheme.

https://doi.org/10.1371/journal.pone.0298638.g002

Suppose an experiment involves n units whose lifetimes follow a random variable X with CDF F(x; θ) and PDF f(x; θ), under a PCS (R_1, ⋯, R_m). Then, according to [17], the likelihood function under IAT-II PCS is

L(\theta \mid x) = C \prod_{i=1}^{E_2} f(x_{i:m:n}; \theta) \prod_{i=1}^{E_1} [1 - F(x_{i:m:n}; \theta)]^{R_i} \, [1 - F(T^{*}; \theta)]^{B}, \qquad (5)

where C is a constant that is independent of the parameters, and the quantities E_2, E_1, R_i, T^{*}, and B for the various cases are shown in Table 1. Here E_2 and E_1 denote the numbers of failures observed before T_2 and T_1, respectively.

Table 1. Diverse selections of the quantities E2, E1, C, T*, and B.

https://doi.org/10.1371/journal.pone.0298638.t001

3 Classical estimation

This section develops the classical estimates of the parameters and the reliability characteristics of the LED based on IAT-II PCS.

3.1 Maximum likelihood estimation

When X follows the LED, combining (1)–(3) with (5), the likelihood function based on IAT-II PCS is stated as follows:

L(\nu, \eta \mid x) = C (\nu\eta)^{E_2} \prod_{i=1}^{E_2} \frac{e^{\nu x_i} (e^{\nu x_i} - 1)^{\eta - 1}}{[1 + (e^{\nu x_i} - 1)^{\eta}]^{2}} \prod_{i=1}^{E_1} [1 + (e^{\nu x_i} - 1)^{\eta}]^{-R_i} \, [1 + (e^{\nu T^{*}} - 1)^{\eta}]^{-B}, \qquad (6)

where x_i = x_{i:m:n}. The corresponding log-likelihood function, up to an additive constant, is

\ell(\nu, \eta \mid x) = E_2 \log(\nu\eta) + \sum_{i=1}^{E_2} \left[ \nu x_i + (\eta - 1) \log(e^{\nu x_i} - 1) - 2 \log z_i \right] - \sum_{i=1}^{E_1} R_i \log z_i - B \log z_{*}, \qquad (7)

where z_i = 1 + (e^{\nu x_i} - 1)^{\eta} and z_{*} = 1 + (e^{\nu T^{*}} - 1)^{\eta}.

The log-likelihood \ell(\nu, \eta \mid x) has the following first-order derivatives with respect to \nu and \eta:

\frac{\partial \ell}{\partial \nu} = \frac{E_2}{\nu} + \sum_{i=1}^{E_2} \left[ x_i + \frac{(\eta - 1) x_i e^{\nu x_i}}{e^{\nu x_i} - 1} - \frac{2 \eta x_i e^{\nu x_i} (e^{\nu x_i} - 1)^{\eta - 1}}{z_i} \right] - \sum_{i=1}^{E_1} \frac{R_i \eta x_i e^{\nu x_i} (e^{\nu x_i} - 1)^{\eta - 1}}{z_i} - \frac{B \eta T^{*} e^{\nu T^{*}} (e^{\nu T^{*}} - 1)^{\eta - 1}}{z_{*}} \qquad (8)

and

\frac{\partial \ell}{\partial \eta} = \frac{E_2}{\eta} + \sum_{i=1}^{E_2} \left[ \log(e^{\nu x_i} - 1) - \frac{2 (e^{\nu x_i} - 1)^{\eta} \log(e^{\nu x_i} - 1)}{z_i} \right] - \sum_{i=1}^{E_1} \frac{R_i (e^{\nu x_i} - 1)^{\eta} \log(e^{\nu x_i} - 1)}{z_i} - \frac{B (e^{\nu T^{*}} - 1)^{\eta} \log(e^{\nu T^{*}} - 1)}{z_{*}}. \qquad (9)

The likelihood equations follow by setting (8) and (9) equal to zero:

\frac{\partial \ell}{\partial \nu} = 0 \qquad (10)

and

\frac{\partial \ell}{\partial \eta} = 0. \qquad (11)

The system of nonlinear equations represented by Eqs (10) and (11) lacks a closed-form solution due to the intractable nature of the terms involved. Consequently, numerical methods are indispensable for obtaining the ML estimates and . The Newton-Raphson method is a common numerical approach employed to find these estimates.
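As a sketch of the numerical maximization, the snippet below fits the LED by direct optimization of the log-likelihood. Two simplifying assumptions are made for self-containedness: a complete (uncensored) simulated sample is used, which corresponds to the special case E1 = E2 = m = n and B = 0 of the likelihood, and a derivative-free Nelder-Mead search replaces Newton-Raphson. Neither choice is the paper's exact setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def led_rvs(n, nu, eta, rng):
    # Inverse-CDF sampling: x = log(1 + (u/(1-u))^{1/eta}) / nu
    u = rng.uniform(size=n)
    return np.log1p((u / (1.0 - u)) ** (1.0 / eta)) / nu

def neg_loglik(theta, x):
    """Negative complete-sample LED log-likelihood (illustrative special case)."""
    nu, eta = theta
    if nu <= 0 or eta <= 0:
        return np.inf
    w = np.expm1(nu * x)  # e^{nu x} - 1
    return -np.sum(np.log(nu * eta) + nu * x
                   + (eta - 1.0) * np.log(w) - 2.0 * np.log1p(w ** eta))

x = led_rvs(500, 1.5, 1.2, rng)  # true values taken from the simulation study
res = minimize(neg_loglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
nu_hat, eta_hat = res.x
```

With 500 observations, the estimates land close to the generating values (1.5, 1.2); under genuine IAT-II PCS data, the objective would instead be the negative of the full censored log-likelihood.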

Substituting the MLEs \hat{\nu} and \hat{\eta} into (3) and (4), the MLEs of the reliability characteristics are obtained by the invariance property as

\hat{S}(t) = \frac{1}{1 + (e^{\hat{\nu} t} - 1)^{\hat{\eta}}} \quad \text{and} \quad \hat{h}(t) = \frac{\hat{\nu}\hat{\eta} \, e^{\hat{\nu} t} (e^{\hat{\nu} t} - 1)^{\hat{\eta} - 1}}{1 + (e^{\hat{\nu} t} - 1)^{\hat{\eta}}}.

3.2 Asymptotic confidence interval

In this section, the ACIs for the model parameters and the reliability characteristics have been constructed based on the normality properties of the MLEs of the parameters.

3.2.1 Confidence intervals of parameters.

Based on the asymptotic normality of the relevant MLEs, the ACIs of ν, η, S(t), and h(t) are determined. To obtain the ACIs, the asymptotic variance-covariance matrix (VCM) has to be constructed from the MLEs. This VCM is the inverse of the Fisher information matrix (FIM), which is expressed as

I(\nu, \eta) = -E \begin{pmatrix} \dfrac{\partial^2 \ell}{\partial \nu^2} & \dfrac{\partial^2 \ell}{\partial \nu \, \partial \eta} \\ \dfrac{\partial^2 \ell}{\partial \eta \, \partial \nu} & \dfrac{\partial^2 \ell}{\partial \eta^2} \end{pmatrix}. \qquad (12)

Here, the second-order partial derivatives are obtained by differentiating (8) and (9) once more with respect to ν and η; their explicit expressions are lengthy and therefore omitted.

Finding precise expressions for the expectations in (12) is challenging. Therefore, the observed FIM is used: evaluating the second-order derivatives at the MLEs without taking expectations gives the approximate information matrix

I(\hat{\nu}, \hat{\eta}) = -\begin{pmatrix} \dfrac{\partial^2 \ell}{\partial \nu^2} & \dfrac{\partial^2 \ell}{\partial \nu \, \partial \eta} \\ \dfrac{\partial^2 \ell}{\partial \eta \, \partial \nu} & \dfrac{\partial^2 \ell}{\partial \eta^2} \end{pmatrix}_{(\nu, \eta) = (\hat{\nu}, \hat{\eta})}.

Furthermore, based on the asymptotic normality properties of the MLEs,

(\hat{\nu}, \hat{\eta}) \sim N\big((\nu, \eta), \, I^{-1}(\hat{\nu}, \hat{\eta})\big),

where the diagonal elements of I^{-1}(\hat{\nu}, \hat{\eta}) give the estimated variances \widehat{var}(\hat{\nu}) and \widehat{var}(\hat{\eta}).

For the parameters ν and η, the corresponding 100(1 − ξ)% ACIs are given as

\hat{\nu} \mp z_{\xi/2} \sqrt{\widehat{var}(\hat{\nu})} \quad \text{and} \quad \hat{\eta} \mp z_{\xi/2} \sqrt{\widehat{var}(\hat{\eta})},

where z_{\xi/2} is the upper (ξ/2)th percentile of N(0, 1).

3.2.2 ACIs of S(t) and h(t).

Because the variances of \hat{S}(t) and \hat{h}(t), being functions of the ML estimators, are analytically intractable, the delta method (see [21, 22]) is utilized to evaluate approximate confidence intervals for S(t) and h(t). Each function is approximated linearly about (\hat{\nu}, \hat{\eta}), and the variance of the linear approximation is determined as follows.

Let

G_S = \left( \dfrac{\partial S(t)}{\partial \nu}, \dfrac{\partial S(t)}{\partial \eta} \right)^{T} \Big|_{(\nu, \eta) = (\hat{\nu}, \hat{\eta})} \quad \text{and} \quad G_h = \left( \dfrac{\partial h(t)}{\partial \nu}, \dfrac{\partial h(t)}{\partial \eta} \right)^{T} \Big|_{(\nu, \eta) = (\hat{\nu}, \hat{\eta})},

where G^{T} denotes the transpose of G. The delta method yields the following approximations for the asymptotic variances of \hat{S}(t) and \hat{h}(t):

\widehat{var}(\hat{S}(t)) \approx G_S^{T} \, I^{-1}(\hat{\nu}, \hat{\eta}) \, G_S \quad \text{and} \quad \widehat{var}(\hat{h}(t)) \approx G_h^{T} \, I^{-1}(\hat{\nu}, \hat{\eta}) \, G_h.

As a result, the 100(1 − ξ)% ACIs of S(t) and h(t) are

\hat{S}(t) \mp z_{\xi/2} \sqrt{\widehat{var}(\hat{S}(t))} \quad \text{and} \quad \hat{h}(t) \mp z_{\xi/2} \sqrt{\widehat{var}(\hat{h}(t))}.
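The whole ACI construction can be sketched end to end: build an observed information matrix by finite differences at the MLE, invert it for the VCM, and propagate through a numerical gradient of S(t). The complete-sample likelihood, the finite-difference step sizes, and the Nelder-Mead fit are all illustrative assumptions, not the paper's exact computations.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def led_rvs(n, nu, eta, rng):
    u = rng.uniform(size=n)
    return np.log1p((u / (1.0 - u)) ** (1.0 / eta)) / nu

def led_sf(t, nu, eta):
    return 1.0 / (1.0 + np.expm1(nu * t) ** eta)

def neg_loglik(theta, x):
    nu, eta = theta
    if nu <= 0 or eta <= 0:
        return np.inf
    w = np.expm1(nu * x)
    return -np.sum(np.log(nu * eta) + nu * x
                   + (eta - 1.0) * np.log(w) - 2.0 * np.log1p(w ** eta))

def hessian(f, th, h=1e-4):
    """Central finite-difference Hessian (observed information when f = -loglik)."""
    th = np.asarray(th, float)
    H = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            e = np.zeros(2); e[i] += h
            d = np.zeros(2); d[j] += h
            H[i, j] = (f(th + e + d) - f(th + e - d)
                       - f(th - e + d) + f(th - e - d)) / (4.0 * h * h)
    return H

x = led_rvs(500, 1.5, 1.2, rng)
mle = minimize(neg_loglik, [1.0, 1.0], args=(x,), method="Nelder-Mead").x
V = np.linalg.inv(hessian(lambda th: neg_loglik(th, x), mle))  # asymptotic VCM

# Delta method for S(t) at t0: numerical gradient, then G^T V G
t0, h = 0.25, 1e-6
s_hat = led_sf(t0, *mle)
g = np.array([(led_sf(t0, mle[0] + h, mle[1]) - led_sf(t0, mle[0] - h, mle[1])) / (2 * h),
              (led_sf(t0, mle[0], mle[1] + h) - led_sf(t0, mle[0], mle[1] - h)) / (2 * h)])
var_s = float(g @ V @ g)
lo, hi = s_hat - 1.96 * np.sqrt(var_s), s_hat + 1.96 * np.sqrt(var_s)
```

In practice the analytic gradient G_S could replace the finite differences; the numerical version is shown only to keep the sketch short.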

4 Bayesian estimation

The earlier sections concentrated on frequentist estimation of the LED parameters. This section focuses on Bayesian estimation of ν and η, as well as S(t) and h(t). Gamma priors are adopted because their support matches that of the LED parameters, making them more flexible than many other choices, and because independent gamma priors are simple enough to avoid computational difficulties or unwieldy posterior expressions. Specifically, ν and η are assumed to follow independent gamma priors G(a_1, b_1) and G(a_2, b_2), with densities

\pi_1(\nu) \propto \nu^{a_1 - 1} e^{-b_1 \nu}, \quad \nu > 0, \qquad \text{and} \qquad \pi_2(\eta) \propto \eta^{a_2 - 1} e^{-b_2 \eta}, \quad \eta > 0,

where the hyperparameters a_1, b_1, a_2, b_2 > 0.

Hence, the joint prior distribution is

\pi(\nu, \eta) \propto \nu^{a_1 - 1} \eta^{a_2 - 1} e^{-(b_1 \nu + b_2 \eta)}, \quad \nu, \eta > 0.

Based on the likelihood function in (6), the joint posterior density function of ν and η is

\pi(\nu, \eta \mid x) = \frac{L(\nu, \eta \mid x) \, \pi(\nu, \eta)}{\int_0^{\infty} \int_0^{\infty} L(\nu, \eta \mid x) \, \pi(\nu, \eta) \, d\nu \, d\eta}. \qquad (13)

The Bayesian estimate of any function of ν and η, say ψ(ν, η), is considered under the squared error loss function (SELF) and the LINEX loss function (LLF), defined as

L_{S}(\delta, \hat{\delta}) = (\hat{\delta} - \delta)^{2} \qquad (14)

and

L_{L}(\delta, \hat{\delta}) = e^{p(\hat{\delta} - \delta)} - p(\hat{\delta} - \delta) - 1, \quad p \neq 0, \qquad (15)

respectively, where \hat{\delta} is an estimator of δ. The Bayes estimates under SELF and LLF are then obtained through

\hat{\psi}_{S} = E[\psi(\nu, \eta) \mid x] = \int_0^{\infty} \int_0^{\infty} \psi(\nu, \eta) \, \pi(\nu, \eta \mid x) \, d\nu \, d\eta \qquad (16)

and

\hat{\psi}_{L} = -\frac{1}{p} \log E\left[ e^{-p \psi(\nu, \eta)} \mid x \right]. \qquad (17)

The expressions in (16) and (17) cannot be obtained explicitly because they involve ratios of two intractable integrals. Estimating the marginal likelihood in the denominator of (13) is a difficult problem, often requiring high-dimensional integration of the likelihood over the prior distribution. Nevertheless, Markov chain Monte Carlo (MCMC) methods solve this problem by generating samples from the joint posterior distribution without requiring the marginal likelihood to be evaluated explicitly.
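Once posterior draws are available (e.g., via MCMC), the estimators in (16) and (17) reduce to simple sample averages. A minimal sketch, using hypothetical gamma-distributed draws as a stand-in posterior sample:

```python
import numpy as np

def bayes_self(draws):
    """Bayes estimate under squared-error loss: the posterior mean."""
    return float(np.mean(draws))

def bayes_linex(draws, p):
    """Bayes estimate under LINEX loss with shape p != 0:
    -(1/p) log E[exp(-p*psi) | data], the expectation approximated by draws."""
    d = np.asarray(draws, float)
    return float(-np.log(np.mean(np.exp(-p * d))) / p)

# Stand-in posterior draws (assumption): a gamma sample with mean 2.0
draws = np.random.default_rng(0).gamma(shape=4.0, scale=0.5, size=5000)
est_self = bayes_self(draws)
est_linex = bayes_linex(draws, 0.5)
```

For p > 0 the LINEX estimate sits below the posterior mean (Jensen's inequality), reflecting the loss function's asymmetric penalty on overestimation.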

4.1 MCMC method

The MCMC method works with the joint posterior distribution (13) up to proportionality. The full conditional distributions of ν and η are

\pi(\nu \mid \eta, x) \propto \nu^{E_2 + a_1 - 1} e^{-b_1 \nu} \prod_{i=1}^{E_2} \frac{e^{\nu x_i} (e^{\nu x_i} - 1)^{\eta - 1}}{[1 + (e^{\nu x_i} - 1)^{\eta}]^{2}} \prod_{i=1}^{E_1} [1 + (e^{\nu x_i} - 1)^{\eta}]^{-R_i} \, [1 + (e^{\nu T^{*}} - 1)^{\eta}]^{-B}

and

\pi(\eta \mid \nu, x) \propto \eta^{E_2 + a_2 - 1} e^{-b_2 \eta} \prod_{i=1}^{E_2} \frac{(e^{\nu x_i} - 1)^{\eta - 1}}{[1 + (e^{\nu x_i} - 1)^{\eta}]^{2}} \prod_{i=1}^{E_1} [1 + (e^{\nu x_i} - 1)^{\eta}]^{-R_i} \, [1 + (e^{\nu T^{*}} - 1)^{\eta}]^{-B}.

These conditional distributions cannot be expressed in any known distributional form, so the Metropolis-Hastings (M-H) algorithm is used to generate MCMC samples for the parameters and the corresponding reliability characteristics. The samples are generated by the following algorithm:

Algorithm 1

Step 1: Set k = 1 and take (ν^{(0)}, η^{(0)}) as initial values of ν and η (e.g., the MLEs).

Step 2: Generate candidates ν′ and η′ from the normal proposal distributions N(ν^{(k−1)}, σ_1^2) and N(η^{(k−1)}, σ_2^2), respectively.

Step 3: Compute the acceptance probabilities ρ_1 = min{1, π(ν′ | η^{(k−1)}, x) / π(ν^{(k−1)} | η^{(k−1)}, x)} and ρ_2 = min{1, π(η′ | ν^{(k−1)}, x) / π(η^{(k−1)} | ν^{(k−1)}, x)}.

Step 4: Generate ϕ_1 and ϕ_2 from the Uniform(0, 1) distribution.

Step 5: Set ν^{(k)} = ν′ if ϕ_1 ≤ ρ_1, and ν^{(k)} = ν^{(k−1)} otherwise; similarly, set η^{(k)} = η′ if ϕ_2 ≤ ρ_2, and η^{(k)} = η^{(k−1)} otherwise.

Step 6: Set k = k + 1.

Step 7: Repeat Steps 2 to 6 P times to get {ν^{(1)}, ⋯, ν^{(P)}} and {η^{(1)}, ⋯, η^{(P)}}.

Step 8: Using these samples, obtain samples of the reliability characteristics {S^{(1)}(t), ⋯, S^{(P)}(t)} and {h^{(1)}(t), ⋯, h^{(P)}(t)} by substituting (ν^{(k)}, η^{(k)}) into (3) and (4).

Thus, using these MCMC samples, the Bayes estimates of any ψ ∈ {ν, η, S(t), h(t)} under SELF and LLF can be obtained as

\hat{\psi}_{S} = \frac{1}{P - Q} \sum_{k=Q+1}^{P} \psi^{(k)} \quad \text{and} \quad \hat{\psi}_{L} = -\frac{1}{p} \log \left[ \frac{1}{P - Q} \sum_{k=Q+1}^{P} e^{-p \psi^{(k)}} \right],

respectively, where Q is the burn-in period. Further, to construct the HPD credible intervals of (ν, η, S(t), h(t)), arrange the samples (ν^{(k)}, η^{(k)}, S^{(k)}(t), h^{(k)}(t)), k = 1, ⋯, P, in increasing order to get (ν_{[k]}, η_{[k]}, S_{[k]}(t), h_{[k]}(t)), k = 1, ⋯, P. Then the 100(1 − ξ)% HPD credible interval of ψ is the shortest of the intervals (ψ_{[j]}, ψ_{[j + ⌊(1 − ξ)P⌋]}), j = 1, ⋯, P − ⌊(1 − ξ)P⌋.
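Algorithm 1 can be sketched as a random-walk M-H sampler. The version below targets the posterior under a complete simulated sample and near-noninformative gamma priors (both simplifying assumptions for self-containedness) and then computes the shortest-interval HPD estimate from the retained draws.

```python
import numpy as np

rng = np.random.default_rng(7)

def led_rvs(n, nu, eta, rng):
    u = rng.uniform(size=n)
    return np.log1p((u / (1.0 - u)) ** (1.0 / eta)) / nu

x = led_rvs(300, 1.5, 1.2, rng)  # stand-in for an IAT-II PCS sample (assumption)

def log_post(nu, eta, a1=0.001, b1=0.001, a2=0.001, b2=0.001):
    """Log-posterior: complete-data LED log-likelihood plus gamma log-priors."""
    if nu <= 0 or eta <= 0:
        return -np.inf
    w = np.expm1(nu * x)
    ll = np.sum(np.log(nu * eta) + nu * x
                + (eta - 1.0) * np.log(w) - 2.0 * np.log1p(w ** eta))
    lp = ((a1 - 1.0) * np.log(nu) - b1 * nu
          + (a2 - 1.0) * np.log(eta) - b2 * eta)
    return ll + lp

def mh_sampler(n_iter=4000, burn=1000, step=0.08):
    nu, eta = 1.0, 1.0                     # Step 1: initial values
    draws = []
    for _ in range(n_iter):
        for comp in (0, 1):                # update nu, then eta
            cand = (nu + step * rng.standard_normal(), eta) if comp == 0 \
                else (nu, eta + step * rng.standard_normal())
            # Steps 3-5: accept with probability min{1, posterior ratio}
            if np.log(rng.uniform()) < log_post(*cand) - log_post(nu, eta):
                nu, eta = cand
        draws.append((nu, eta))
    return np.array(draws[burn:])          # Step 7, burn-in discarded

def hpd(samples, cred=0.95):
    """Shortest interval containing a fraction `cred` of the sorted draws."""
    s = np.sort(samples)
    k = int(np.floor(cred * len(s)))
    i = int(np.argmin(s[k:] - s[:len(s) - k]))
    return float(s[i]), float(s[i + k])

draws = mh_sampler()
nu_bayes, eta_bayes = draws.mean(axis=0)   # SELF estimates (posterior means)
nu_lo, nu_hi = hpd(draws[:, 0])
```

The proposal standard deviation `step` is a tuning assumption; in applied work it would be adjusted to reach a reasonable acceptance rate, and convergence would be checked via trace plots as done in Section 6.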

5 Simulation study

A meticulous Monte Carlo simulation analysis is conducted to assess the behavior of the theoretical results of the preceding sections, covering the classical and Bayesian estimates and the corresponding confidence and credible intervals. The point estimates are compared in terms of average bias (AB) and mean squared error (MSE), and the interval estimates in terms of average width (AW) and coverage probability (CP). To this end, 10,000 IAT-II PCS samples have been generated with ν = 1.5 and η = 1.2 using the algorithm developed by Yan et al. [17]. To estimate the reliability characteristics, we set t = 0.25, for which S(t) = 0.7201 and h(t) = 1.6111. All numerical calculations were performed in R 4.0.4.

To conduct the simulation, several combinations of n, m, T1, and T2 have been considered under various censoring schemes. Two choices of the time thresholds, (T1, T2) = (0.4, 0.8) and (0.55, 1.1), are combined with n = 40 and 80, and for each n the effective sample size m is taken as 50% and 75% of n. The following three PCSs have been considered for removing experimental units during the experiment:

  • Scheme I: R1 = n − m, Ri = 0 for i = 2, ⋯, m.
  • Scheme II: Rm = n − m, Ri = 0 for i = 1, ⋯, m − 1.
  • Scheme III: Rm/2 = n − m, Ri = 0 for i = 1, ⋯, m/2 − 1, m/2 + 1, ⋯, m.
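Written as removal vectors (R_1, ⋯, R_m), the three plans can be constructed with a small helper (the even-m convention for Scheme III is an assumption):

```python
def scheme_vec(n, m, kind):
    """Removal plan (R_1,...,R_m): n - m removals placed at the first,
    last, or middle failure for Schemes I, II, and III respectively."""
    r = [0] * m
    if kind == "I":
        r[0] = n - m
    elif kind == "II":
        r[m - 1] = n - m
    elif kind == "III":          # assumes m even; removal at stage m/2
        r[m // 2 - 1] = n - m
    return r
```

For example, `scheme_vec(40, 20, "I")` gives [20, 0, ..., 0], i.e., all 20 surplus units withdrawn at the first failure.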

In the classical paradigm, the MLEs and the 95% ACIs are obtained. In the Bayesian framework, the values of the hyperparameters (a_i, b_i), i = 1, 2, have been chosen following the derivation given by Dutta and Kayal [23]. When prior knowledge about the unknown parameters is lacking, classical estimates may be preferable to BEs, since BEs carry a higher computational cost. To obtain the Bayes estimates under LLF, we take p = 0.5. The M-H algorithm has been used to generate 10,000 MCMC samples with a burn-in period of 1,000, from which the Bayes estimates and the 95% HPD credible intervals are computed. Table 2 summarizes the ABs and MSEs of the point estimates, while Table 3 reports the AWs and CPs of the interval estimates. The key findings from Tables 2 and 3 are as follows:

  • Effect of increasing m: For fixed values of n, T1 and T2, increasing m results in a decrease in the ABs and MSEs of the point estimates for ν, η, S(t), and h(t). Additionally, the AWs of the ACI and HPD credible intervals decrease, while the CPs increase.
  • Effect of increasing n: For fixed values of m, T1 and T2, increasing n leads to a decrease in the ABs and MSEs of the point estimates for ν, η, S(t), and h(t). Similar to the effect of increasing m, the AWs of the ACI and HPD credible intervals decrease, while the CPs increase.
  • Effect of increasing (T1, T2): For fixed values of n and m, increasing (T1, T2) results in a decrease in the ABs and MSEs of the point estimates for ν, η, S(t), and h(t). Moreover, the AWs of the ACI and HPD credible intervals decrease, while the CPs increase.
Table 2. ABs and MSEs (in parenthesis) of the estimates of ν, η, S(0.25), and h(0.25) with different values of n, m, and (T1, T2).

https://doi.org/10.1371/journal.pone.0298638.t002

Table 3. AWs and CPs (in parenthesis) of the 95% interval estimates of ν, η, S(0.5), and h(0.5) with different values of n, m, and (T1, T2).

https://doi.org/10.1371/journal.pone.0298638.t003

These findings demonstrate that the proposed estimation methods provide robust and precise results as n, m, and (T1, T2) increase. The decrease in ABs and MSEs indicates improved precision of the point estimates, while the consistent CPs across sample sizes and censoring schemes imply that the proposed interval estimates adequately capture the true parameter values. In summary, for point estimation the BEs under LLF perform better than the other proposed estimates, and for interval estimation the HPD credible intervals outperform the ACIs.

6 Mechanical equipment data analysis

An engineering application employing a real-life data set reported by Murthy et al. [24] is analyzed to demonstrate how the suggested approaches can be applied to real phenomena. Recently, Elshahhat et al. [25] reanalyzed this data set, which records the intervals between failures for thirty pieces of repairable mechanical equipment (RME), under AT-II PCS with a weighted-exponential distribution. Before proceeding with the analysis, it is important to determine whether the proposed model adequately fits the data. The MLEs of ν and η based on the complete data set were found to be 1.5864 and 0.5415, respectively, with a corresponding Kolmogorov-Smirnov (K-S) distance of 0.0732 and p-value of 0.9933. To further assess the goodness of fit, Fig 3 displays the empirical CDF (ECDF), probability-probability (P-P), and quantile-quantile (Q-Q) plots for the complete data set. In addition, the Weibull distribution was examined as a potential alternative model, with CDF F(x) = 1 − exp(−(x/σ)^ν), x > 0, where ν, σ > 0 are the shape and scale parameters, respectively. Based on the RME data, the K-S distance and corresponding p-value for the Weibull distribution are 0.0748 and 0.9914, respectively. Since the LED attains a smaller K-S distance and a larger p-value, these results demonstrate that the LED fits the data set quite well.
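The K-S check can be reproduced with `scipy.stats.kstest` by passing the LED CDF with the fitted parameters as a callable. Since the RME observations themselves are not reproduced above, the snippet uses a simulated stand-in sample of size 30 (an assumption); with the real data, one would pass the actual observations together with the reported MLEs ν̂ = 1.5864, η̂ = 0.5415.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(3)

def led_cdf(x, nu, eta):
    w = np.expm1(nu * np.asarray(x, float)) ** eta
    return w / (1.0 + w)

def led_rvs(n, nu, eta, rng):
    u = rng.uniform(size=n)
    return np.log1p((u / (1.0 - u)) ** (1.0 / eta)) / nu

nu_hat, eta_hat = 1.5864, 0.5415            # MLEs reported for the RME data
sample = led_rvs(30, nu_hat, eta_hat, rng)  # hypothetical stand-in for the data
stat, pval = kstest(sample, lambda t: led_cdf(t, nu_hat, eta_hat))
```

A small `stat` and large `pval` indicate that the fitted LED is consistent with the sample, mirroring the comparison against the Weibull alternative above.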

Fig 3. ECDF, P-P, and Q-Q plots for the LED distribution with real data.

https://doi.org/10.1371/journal.pone.0298638.g003

Now, three IAT-II PCS samples are generated from the complete data set, with m = 14 and different choices of (R1, ⋯, Rm), T1, and T2. Table 4 reports the generated samples together with the corresponding censoring schemes.

The point and interval estimates of the parameters ν and η and of the reliability characteristics S(t) and h(t) at t = 0.5 have been obtained from the generated IAT-II PCS samples. To demonstrate graphically the existence and uniqueness of the MLEs of the parameters, a contour plot of the log-likelihood is depicted in Fig 4. To obtain the Bayes estimates, 5,000 MCMC samples have been generated, with gamma priors whose hyperparameters are set to a1 = b1 = a2 = b2 = 0.001. All point and interval estimates based on the RME data are reported in Table 5. To check the convergence of the MCMC samples, trace plots are given in Fig 5. From Table 5, it is concluded that the Bayes estimates perform better than the MLEs in terms of standard error (SE), and that the HPD credible intervals outperform the ACIs at the 5% significance level.

Fig 4. Contour plot of the log-likelihood function based on the IAT-II PCS sample ‘C’ using the RME data.

https://doi.org/10.1371/journal.pone.0298638.g004

Fig 5. Trace plots of the MCMC samples of ν, η, S(t), and h(t) for IAT-II PCS sample A generated from the real data.

https://doi.org/10.1371/journal.pone.0298638.g005

Table 4. Three different IAT-II PCS samples generated from RME data.

https://doi.org/10.1371/journal.pone.0298638.t004

Table 5. Point estimates (Standard errors in parenthesis) and interval estimates (interval lengths in parenthesis) for these three IAT-II PCS samples generated from RME data.

https://doi.org/10.1371/journal.pone.0298638.t005

7 Conclusions

This article examines the estimation of the unknown parameters, reliability, and hazard rate functions of the logistic-exponential distribution under an IAT-II PCS. The maximum likelihood estimates and the associated asymptotic confidence intervals have been obtained. The Bayes estimates are derived under gamma priors, and the corresponding highest posterior density credible intervals are constructed; the MCMC technique has been employed to compute them. A Monte Carlo simulation study carried out under various scenarios compares the behavior of the different estimates. In terms of average length, coverage probability, average bias, and mean square error, the Bayesian estimates outperform the classical estimates. Finally, a real-life data set has been analyzed to demonstrate the suitability of the suggested techniques. A limitation observed in the simulation study is that when the number of experimental units is small (fewer than 10), the estimates become unstable. As a future direction, it would be interesting to consider the approaches covered here in the context of accelerated life-testing models for different lifetime distributions. This work is ongoing and will be reported in due course.

Acknowledgments

Researchers would like to thank the Deanship of Scientific Research, Qassim University for funding publication of this project.

References

  1. Cohen AC. Progressively censored samples in life testing. Technometrics. 1963; 5(3): 327–339.
  2. Balakrishnan N, Kannan N, Lin CT, Ng HT. Point and interval estimation for Gaussian distribution, based on progressively Type-II censored samples. IEEE Transactions on Reliability. 2003; 52(1): 90–95.
  3. Maiti K, Kayal S. Estimation of parameters and reliability characteristics for a generalized Rayleigh distribution under progressive type-II censored sample. Communications in Statistics – Simulation and Computation. 2021; 50(11): 3669–3698.
  4. Elshahhat A, Dutta S, Abo-Kasem OE, Mohammed HS. Statistical analysis of the Gompertz-Makeham model using adaptive progressively hybrid Type-II censoring and its applications in various sciences. Journal of Radiation Research and Applied Sciences. 2023; 16(4): 100644.
  5. Maiti K, Kayal S. Estimating reliability characteristics of the log-logistic distribution under progressive censoring with two applications. Annals of Data Science. 2023; 10(1): 89–128.
  6. Kundu D, Joarder A. Analysis of Type-II progressively hybrid censored data. Computational Statistics & Data Analysis. 2006; 50(10): 2509–2528.
  7. Lin CT, Ng HKT, Chan PS. Statistical inference of Type-II progressively hybrid censored data with Weibull lifetimes. Communications in Statistics – Theory and Methods. 2009; 38(10): 1710–1729.
  8. Lin CT, Chou CC, Huang YL. Inference for the Weibull distribution with progressive hybrid censoring. Computational Statistics & Data Analysis. 2012; 56(3): 451–467.
  9. Dutta S, Kayal S. Estimation of parameters of the logistic exponential distribution under progressive type-I hybrid censored sample. Quality Technology & Quantitative Management. 2022; 19(2): 234–258.
  10. Ng HKT, Kundu D, Chan PS. Statistical analysis of exponential lifetimes under an adaptive Type-II progressive censoring scheme. Naval Research Logistics (NRL). 2009; 56(8): 687–698.
  11. Dutta S, Kayal S. Estimation of parameters of the Gumbel type-II distribution under AT-II PHCS with an application of Covid-19 data. arXiv preprint arXiv:2103.08641. 2021.
  12. Haj Ahmad H, Salah MM, Eliwa M, Ali Alhussain Z, Almetwally EM, Ahmed EA. Bayesian and non-Bayesian inference under adaptive type-II progressive censored sample with exponentiated power Lindley distribution. Journal of Applied Statistics. 2022; 49(12): 2981–3001. pmid:36035610
  13. Alotaibi R, Almetwally EM, Hai Q, Rezk H. Optimal test plan of step stress partially accelerated life testing for alpha power inverse Weibull distribution under adaptive progressive hybrid censored data and different loss functions. Mathematics. 2022; 10(24): 4652.
  14. Almetwally E, Almongy H, Rastogi M, Ibrahim M. Maximum product spacing estimation of Weibull distribution under adaptive type-II progressive censoring schemes. Annals of Data Science. 2020; 7: 257–279.
  15. Alrumayh A, Weera W, Khogeer HA, Almetwally EM. Optimal analysis of adaptive type-II progressive censored for new unit-Lindley model. Journal of King Saud University – Science. 2023; 35(2): 102462.
  16. Dutta S, Dey S, Kayal S. Bayesian survival analysis of logistic exponential distribution for adaptive progressive Type-II censored data. Computational Statistics. 2023; 1–47.
  17. Yan W, Li P, Yu Y. Statistical inference for the reliability of Burr-XII distribution under improved adaptive Type-II progressive censoring. Applied Mathematical Modelling. 2021; 95: 38–52.
  18. Nassar M, Elshahhat A. Estimation procedures and optimal censoring schemes for an improved adaptive progressively type-II censored Weibull distribution. Journal of Applied Statistics. 2023; 1–25.
  19. Dutta S, Kayal S. Inference of a competing risks model with partially observed failure causes under improved adaptive type-II progressive censoring. Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability. 2023; 237(4): 765–780.
  20. Lan Y, Leemis LM. The logistic-exponential survival distribution. Naval Research Logistics. 2008; 55(3): 252–264.
  21. Greene WH. Econometric Analysis. Pearson Education India; 2003.
  22. Agresti A. Categorical Data Analysis. vol. 792. John Wiley & Sons; 2012.
  23. Dutta S, Kayal S. Estimation and prediction for Burr type III distribution based on unified progressive hybrid censoring scheme. Journal of Applied Statistics. 2024; 51(1): 1–33. pmid:38179163
  24. Murthy DP, Xie M, Jiang R. Weibull Models. John Wiley & Sons; 2004.
  25. Elshahhat A, Almetwally EM, Dey S, Mohammed HS. Analysis of WE parameters of life using adaptive-progressively Type-II hybrid censored mechanical equipment data. Axioms. 2023; 12(7): 690.