
A 3-Component Mixture of Rayleigh Distributions: Properties and Estimation in Bayesian Framework

  • Muhammad Aslam,

    Affiliation: Department of Basic Sciences, Riphah International University, Islamabad 44000, Pakistan

  • Muhammad Tahir,

    tahirqaustat@yahoo.com

    Current address: Department of Statistics, Government College University, Faisalabad 38000, Pakistan

    Affiliation: Department of Statistics, Quaid-i-Azam University, Islamabad 45320, Pakistan

  • Zawar Hussain,

    Affiliation: Department of Statistics, Quaid-i-Azam University, Islamabad 45320, Pakistan

  • Bander Al-Zahrani

    Affiliation: Department of Statistics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia

Abstract

To study the lifetimes of certain engineering processes, a lifetime model that can accommodate the nature of such processes is desired. Mixture models of underlying lifetime distributions are intuitively more appropriate and appealing for modeling the heterogeneous nature of a process than simple models. This paper studies a 3-component mixture of Rayleigh distributions from a Bayesian perspective. A censored sampling environment is considered owing to its popularity in reliability theory and survival analysis. Expressions for the Bayes estimators and their posterior risks are derived under different scenarios. For the case that no or little prior information is available, elicitation of hyperparameters is given. To examine, numerically, the performance of the Bayes estimators using non-informative and informative priors under different loss functions, we have simulated their statistical properties for different sample sizes and test termination times. In addition, to highlight the practical significance, an illustrative example based on real-life engineering data is also given.

Introduction

The Rayleigh distribution has many real-life applications in testing the lifetime of an object whose lifetime depends upon its age. The Rayleigh distribution is often used in different fields of physics to model processes such as wave heights (Rattanapitikon [1] and Van Vledder et al. [2]), sound and light radiation (Siddiqui [3]), radio signals and wind power (Ahmed and Mahammed [4]), ultrasound image modeling (Chivers [5] and Burckhardt [6]), etc. It is also used to model the lifetime, in hours, of tubes, resistors, networks, crystals, knobs, transformers, relays and capacitors in aircraft radar sets. The Rayleigh distribution is also used to study wind speeds over a year at wind turbine sites and the daily average wind speed. In all of the above-mentioned applications, it is not uncommon to assume that the life of a particular piece of equipment depends upon its age. Moreover, this distribution has received considerable attention in reliability theory and survival analysis, probability theory and operations research. Thus, to model the age-dependent lifetimes of devices/equipment, the Rayleigh distribution may be a suitable candidate distribution.

When the data are given only from the overall mixture distribution, modeling these data as a mixture of some component distributions is known as a direct application of mixture models. Li [7] and Li and Sedransk [8, 9] discussed different features of two types of mixture models. If the component distributions of a mixture belong to the same family, their mixture is known as a type-I mixture model; otherwise, it is called a type-II mixture model. Mixture models have been successfully applied in many areas, such as engineering, the physical sciences, the chemical sciences, the biological sciences, etc. To understand the need for mixture models, imagine the practical situation of modeling the lifetimes of certain electrical elements, where the population of lifetimes may be divided into a number of components depending upon the possible reasons for failure. Several authors have used mixture modeling in different practical problems. For example, Harris [10] fitted mixture distributions to model crime and justice data, Kanji [11] described wind shear data using mixture distributions, and Jones and McLachlan [12] applied a mixture of normal and Laplace distributions to wind shear data.

Most researchers have worked on the classical and Bayesian analysis of 2-component mixture models. McCullagh [13] derived some conditions under which quadratic and polynomial Exponential models can be generated as mixtures of Exponential models. Sinha [14] used the Bayesian counterpart of the maximum likelihood estimates of the 2-component mixture model considered by Mendenhall and Hader [15]. Hebert and Scariano [16] compared location estimators for Exponential mixtures under Pitman's measure of closeness. Sultan et al. [17] investigated the properties of a 2-component mixture of inverse Weibull distributions. Saleem and Aslam [18] discussed the use of informative and non-informative priors for the Bayesian analysis of a 2-component mixture of Rayleigh distributions. Also, Saleem et al. [19] presented the Bayesian analysis of a 2-component mixture of Power distributions using complete and censored samples. Kazmi et al. [20] described the Bayesian analysis of a 2-component mixture of Maxwell distributions.

In daily life, many types of data are encountered, including simple, grouped, truncated, censored and progressively censored data. Censoring is an important and valuable aspect of lifetime data and a primary source of incomplete (missing) lifetime observations. A valuable account of censoring is given in Romeu [21], Gijbels [22] and Kalbfleisch and Prentice [23].

Motivated by the above-mentioned applications of mixtures of Rayleigh distributions, we plan to carry out a Bayesian analysis of a 3-component mixture of Rayleigh distributions with unknown mixing proportions. The parameters of the component distributions are also assumed to be unknown. Four different priors and three different loss functions are used for the Bayesian analysis. In addition, we assume an ordinary type-I right censored sampling scheme.

The rest of the paper is organized as follows: The 3-component mixture of Rayleigh distributions is defined in Section 2. The expressions for the posterior distributions using the non-informative and the informative priors are derived in Section 3. The elicitation of hyperparameters, if unknown, is given in Section 4. In Section 5, the Bayes estimators and posterior risks using the uniform, the Jeffreys', the inverted chi-square and the square root inverted gamma priors under the squared error loss function (SELF), the precautionary loss function (PLF) and the DeGroot loss function (DLF) are presented. The limiting expressions of the Bayes estimators and their posterior risks are derived in Section 6. The simulation study and the real data application are presented in Sections 7 and 8, respectively. Finally, the conclusion of this study is given in Section 9.

3-Component mixture of the Rayleigh distributions

The probability density function (p.d.f.) and the cumulative distribution function (c.d.f.) of the Rayleigh distribution for a random variable Y are given by: (1) (2) where λm is the parameter of the Rayleigh distribution.
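The equation images in (1) and (2) are not reproduced here; under the common scale parameterization (an assumption, since the paper's exact form is not recoverable from this text), the Rayleigh p.d.f. and c.d.f. are:

```latex
f(y;\lambda_m) = \frac{y}{\lambda_m^{2}}\,
\exp\!\left(-\frac{y^{2}}{2\lambda_m^{2}}\right), \qquad y>0,\ \lambda_m>0,
\qquad\qquad
F(y;\lambda_m) = 1-\exp\!\left(-\frac{y^{2}}{2\lambda_m^{2}}\right).
```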

A finite 3-component mixture model with the unknown mixing proportions p1 and p2 is defined as: (3) (4)
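The missing Eqs (3) and (4) presumably take the standard finite-mixture form below (a hedged reconstruction, with f1, f2 and f3 denoting the three component Rayleigh densities):

```latex
p(y;\boldsymbol{\phi}) = p_1 f_1(y;\lambda_1) + p_2 f_2(y;\lambda_2)
+ (1-p_1-p_2)\, f_3(y;\lambda_3),
\qquad p_1, p_2 \ge 0,\ p_1+p_2 \le 1.
```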

For different values of the component and mixing proportion parameters, the behavior of the 3-component mixture of Rayleigh distributions is depicted in Fig 1.

Fig 1. Graphs of 3-component mixture of the Rayleigh distributions for different values of parameters.

https://doi.org/10.1371/journal.pone.0126183.g001

The cumulative distribution function of 3-component mixture of the Rayleigh distributions is given by: (5) (6)

The posterior distribution using the non-informative and the informative priors

In this section, likelihood and posterior distributions of parameters given data, say y, are derived using the non-informative (uniform and Jeffreys’) and the informative (inverted chi-square and square root inverted gamma) priors.

3.1 The likelihood function

Suppose n units from the 3-component mixture of Rayleigh distributions are used in a life testing experiment with fixed test termination time t. Let the experiment be performed, and suppose it is observed that r out of the n units failed by the fixed test termination time t while the remaining n − r units are still working. Note that, of the r failures, r1, r2 and r3 failures can be categorized as belonging to subpopulation-I, subpopulation-II and subpopulation-III, respectively, depending upon the cause of failure. So, the number of uncensored observations is r = r1+r2+r3, and the remaining n − r observations are censored. Now let ylk, 0 < ylk ≤ t, be the failure time of the kth unit belonging to the lth subpopulation, where l = 1, 2, 3 and k = 1, 2,⋯, rl. For a 3-component mixture model, the likelihood function can be written as: (7)

After simplification (see S1 File), the likelihood function of 3-component mixture of Rayleigh distributions is given by: (8) where are the observed failure times for the uncensored observations and ϕ = (λ1, λ2, λ3, p1, p2).
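Up to proportionality, the type-I right censored mixture likelihood in (7) has the standard structure below (a hedged sketch; the paper's exact simplified form is derived in S1 File):

```latex
L(\boldsymbol{\phi}\mid \mathbf{y}) \propto
\prod_{k=1}^{r_1} p_1 f_1(y_{1k};\lambda_1)
\prod_{k=1}^{r_2} p_2 f_2(y_{2k};\lambda_2)
\prod_{k=1}^{r_3} (1-p_1-p_2) f_3(y_{3k};\lambda_3)
\left\{1-F(t;\boldsymbol{\phi})\right\}^{n-r}.
```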

3.2 The posterior distribution using the uniform prior

The most common non-informative priors are the uniform prior (UP) and the Jeffreys' prior (JP). Bayes [24], de Laplace [25] and Geisser [26] proposed that one may take the UP for the unknown parameters of interest. We assume the improper UP (which is proportional to a constant) for the component parameters λ1, λ2 and λ3, i.e., λ1 ∼ Uniform(0,∞), λ2 ∼ Uniform(0,∞) and λ3 ∼ Uniform(0,∞). The UP over the interval (0,1) is assumed for the proportion parameters p1 and p2, i.e., p1 ∼ Uniform(0,1) and p2 ∼ Uniform(0,1). Assuming the independence of the parameters, the joint prior distribution of λ1, λ2, λ3, p1 and p2 may be written as: (9)

The joint posterior distribution of the parameters λ1, λ2, λ3, p1 and p2 given data y, using the UP, is given by (see S1 File): (10) (11) where , , , , , , A01 = n − r − i + r1 + 1, B01 = i − j + r2 + 1, C01 = j + r3 + 1, .

3.3 The posterior distribution using the Jeffreys' prior

According to Jeffreys [27, 28], Bernardo [29] and Berger [30], the Jeffreys' prior (JP) for λm (m = 1, 2, 3) is defined as , where is the Fisher's information matrix. It is interesting to note that the JP for the proportion parameters p1 and p2 cannot be assumed under the current settings. Therefore, again, the uniform distribution over the interval (0,1) is assumed for both p1 and p2, i.e., p1 ∼ Uniform(0,1) and p2 ∼ Uniform(0,1). Under the assumption of independence of all the parameters, the joint prior distribution of λ1, λ2, λ3, p1 and p2 is given by: (12)
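Under the scale parameterization assumed above (the paper's λm may be parameterized differently), the single-observation Fisher information of the Rayleigh distribution yields the familiar form of the JP:

```latex
I(\lambda_m) = \frac{4}{\lambda_m^{2}}
\quad\Longrightarrow\quad
\pi(\lambda_m) \propto \sqrt{I(\lambda_m)} \propto \frac{1}{\lambda_m}.
```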

Now, the joint posterior distribution of the parameters λ1, λ2, λ3, p1 and p2 given data y is given by (see S1 File): (13) (14) where A12 = r1, A22 = r2, A32 = r3, , , , A02 = n − r − i + r1 + 1, B02 = i − j + r2 + 1, C02 = j + r3 + 1, .

3.4 The posterior distribution using the inverted chi-square prior

As an informative prior, we take the inverted chi-square prior (ICP) for the component parameters λ1, λ2, λ3 and a bivariate beta prior for the proportion parameters p1, p2. Symbolically: λ1 ∼ IC(a1,b1), λ2 ∼ IC(a2,b2), λ3 ∼ IC(a3,b3), and (p1, p2) ∼ Bivariate Beta(a,b,c). Again, assuming the independence of the parameters, the joint prior distribution of λ1, λ2, λ3, p1 and p2 is given by: (15)

The joint posterior distribution of the parameters λ1, λ2, λ3, p1 and p2 given data y is given by (see S1 File): (16) (17) where , , , , , , A03 = n − r − i + r1 + a, B03 = i − j + r2 + b, C03 = j + r3 + c, .

3.5 The posterior distribution using the square root inverted gamma prior

Now, we assume the square root inverted gamma prior (SRIGP) as an informative prior for the component parameters λ1, λ2, λ3, i.e., λ1 ∼ SRIG(a1,b1), λ2 ∼ SRIG(a2,b2) and λ3 ∼ SRIG(a3,b3), and a bivariate beta prior as an informative prior for the proportion parameters p1, p2, i.e., (p1, p2) ∼ Bivariate Beta(a,b,c). So, assuming the independence of the parameters, the joint prior distribution of λ1, λ2, λ3, p1 and p2 is given by: (18)

In this case, the joint posterior distribution of the parameters λ1, λ2, λ3, p1 and p2 given data y is given by (see S1 File): (19) (20) where A14 = r1+a1, A24 = r2+a2, A34 = r3+a3, , , , A04 = n − r − i + r1 + a, B04 = i − j + r2 + b, C04 = j + r3 + c, .

Elicitation of hyperparameters

Elicitation is a tool used to quantify a person's prior belief and knowledge. In the Bayesian perspective, elicitation most often arises as a method of specifying the prior distribution of the random parameter(s). Elicitation is simply the quantification of prior knowledge about the random parameter(s) so that it can be combined with the likelihood to obtain the posterior distribution for further statistical analysis. Elicitation has remained a challenging problem for statisticians. Authors who have discussed this problem include Kadane et al. [31], Gavasakar [32], Al-Awadhi and Garthwaite [33], Aslam [34], Hahn [35] and Saleem and Aslam [18]. In this study, we adopt the prior predictive method based on predictive probabilities given by Aslam [34].

4.1 Elicitation of hyperparameters using the ICP

For eliciting the hyperparameters, prior predictive distribution (PPD) is used. The PPD using the ICP for a random variable Y is defined as: (21)
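In generic form, the PPD in (21) is the mixture density averaged over the joint prior (a standard sketch; Ω denotes the full parameter space of ϕ = (λ1, λ2, λ3, p1, p2)):

```latex
m(y) = \int_{\Omega} p(y;\boldsymbol{\phi})\,
\pi(\boldsymbol{\phi})\, d\boldsymbol{\phi}.
```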

On substituting (4) and (15) in (21) and then simplifying, we get: (22)

Using the prior predictive distribution given in (22), we consider nine intervals (0, 0.5), (0.5, 1), (1, 1.5), (1.5, 2), (2, 2.5), (2.5, 3), (3, 3.5), (3.5, 4) and (4, 4.5) with respective probabilities 0.12, 0.26, 0.24, 0.15, 0.10, 0.05, 0.03, 0.02 and 0.01 as an expert’s belief about these intervals.

Using (22), the following nine equations in (23) are solved simultaneously in the Mathematica package to elicit the hyperparameters a1, b1, a2, b2, a3, b3, a, b and c. (23)

The elicited values of the hyperparameters a1, b1, a2, b2, a3, b3, a, b and c are obtained as 5.88796, 5.67093, 5.4940, 5.28366, 4.90644, 4.68736, 3.46665, 4.68959 and 4.30064, respectively.

4.2 Elicitation of hyperparameters using the SRIGP

The PPD using SRIGP for a random variable Y is given by: (24)

Using (4), (18) and (24), we get: (25)

Applying the same criterion as in Subsection 4.1, the values of the hyperparameters a1, b1, a2, b2, a3, b3, a, b and c are now obtained as 5.74419, 4.97886, 5.65643, 5.43122, 4.93333, 4.93038, 11.8838, 6.41829 and 7.0491, respectively.

Bayes estimators and posterior risks using the UP, the JP, the ICP and the SRIGP under SELF, PLF and DLF

If is a Bayes estimator, then is called its posterior risk and is defined as: . Our purpose in this study is to look for efficient Bayes estimators of the different parameters. For this purpose, three different loss functions, namely SELF, PLF and DLF, are used to obtain the Bayes estimators and their posterior risks. The SELF, defined as L(λ,d) = (λ − d)2, was introduced by Legendre [36] to develop least squares theory. Norstrom [37] discussed an asymmetric PLF and also introduced a special case of a general class of PLFs, which is defined as . The PLF approaches infinity near the origin to prevent underestimation, thus yielding conservative estimators when underestimation may lead to grave consequences. The DLF was presented by DeGroot [38] and is defined as .

For a given prior, the Bayes estimator and posterior risk under SELF are calculated as: and , respectively. Similarly, the Bayes estimators and posterior risks with PLF and DLF are given by: , , and , , respectively. The Bayes estimators and posterior risks using the UP, the JP, the ICP and the SRIGP for the parameters λ1, λ2, λ3, p1 and p2 under SELF, PLF and DLF are obtained as: (26) (27) (28) (29) (30) (31) (32) (33) (34) (35) where v = 1 for the UP, v = 2 for the JP, v = 3 for the ICP and v = 4 for the SRIGP. The Bayes estimators and posterior risks using the UP, the JP, the ICP and the SRIGP under PLF and DLF can also be derived in a similar way and are presented as supporting information in S1 File.
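As a numerical illustration, the standard SELF, PLF and DLF estimators and their posterior risks can be approximated from posterior draws of any single parameter; a minimal Monte Carlo sketch (the function name and sampling-based approach are ours, not the paper's closed-form expressions):

```python
import numpy as np

def bayes_estimates(draws):
    """Approximate Bayes estimators and posterior risks under SELF, PLF
    and DLF from posterior draws of a scalar parameter (standard forms:
    SELF: d = E(L|y), risk = Var(L|y); PLF: d = sqrt(E(L^2|y)),
    risk = 2(d - E(L|y)); DLF: d = E(L^2|y)/E(L|y),
    risk = 1 - E(L|y)^2 / E(L^2|y))."""
    m1 = np.mean(draws)          # posterior mean E(L|y)
    m2 = np.mean(draws ** 2)     # posterior second moment E(L^2|y)
    self_d, self_rho = m1, m2 - m1 ** 2
    plf_d = np.sqrt(m2)
    plf_rho = 2.0 * (plf_d - m1)
    dlf_d = m2 / m1
    dlf_rho = 1.0 - m1 ** 2 / m2
    return {"SELF": (self_d, self_rho),
            "PLF": (plf_d, plf_rho),
            "DLF": (dlf_d, dlf_rho)}
```

In practice, `draws` would come from the relevant posterior in (10)–(20); the same three formulas are what the paper's closed-form expressions evaluate analytically.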

Limiting expressions

When the test termination time t→∞, the number of uncensored observations r tends to the sample size n and rl tends to nl, l = 1,2,3. Consequently, all censored observations become uncensored and the information contained in the sample increases. As a result, the posterior risks of the Bayes estimators diminish and the efficiency of the Bayes estimators increases, because all the observations are incorporated in the sample. The limiting expressions for the Bayes estimators and posterior risks using the UP, the JP, the ICP and the SRIGP under SELF are given in Tables A-D in S2 File. The limiting expressions for the Bayes estimators and posterior risks using the UP, the JP, the ICP and the SRIGP under PLF and DLF can also be derived in a similar way. These limiting expressions can be used in the case of uncensored sampling schemes.

Simulation study

To assess the performance of the Bayes estimators under different priors, loss functions, sample sizes and test termination times, samples of sizes n = 50, 100, 200, 500 are generated from the 3-component mixture of Rayleigh distributions with different sets of parameter values λ1, λ2, λ3, p1 and p2 fixed as (λ1, λ2, λ3, p1, p2) = {(14, 12, 10, 0.5, 0.3), (16, 14, 12, 0.5, 0.3), (11, 13, 15, 0.3, 0.5)}.

For a fixed sample size, test termination time and set of parameters, p1n (p2n, (1 − p1 − p2)n) observations are randomly taken from the first (second, third) component density. The observations which are greater than a fixed t are declared censored. For each t, only failures are identified, either as members of subpopulation-I, subpopulation-II or subpopulation-III. Based on such a sample, the Bayes estimates (BEs) and posterior risks (PRs) are computed using the UP, the JP, the ICP and the SRIGP under SELF, PLF and DLF. In order to evaluate the impact of the test termination time on the Bayes estimators, the type-I right censoring scheme is used with fixed test termination times t = 25 and 30. The whole procedure is repeated 1000 times using the Mathematica software. The results are then averaged over the 1000 samples and are arranged in S1–S12 Tables.
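One replication of this sampling scheme can be sketched as follows, assuming the scale parameterization f(y; λ) = (y/λ²)exp(−y²/2λ²) for the Rayleigh components (names and parameterization are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_censored_mixture(n, lambdas, p1, p2, t):
    """Draw n lifetimes from a 3-component Rayleigh mixture and apply
    type-I right censoring at a fixed test termination time t."""
    # Component membership: roughly p1*n, p2*n and (1-p1-p2)*n observations.
    comp = rng.choice(3, size=n, p=[p1, p2, 1.0 - p1 - p2])
    scales = np.asarray(lambdas, dtype=float)[comp]
    # Rayleigh draws via inverse-c.d.f.: Y = lambda * sqrt(-2 ln U).
    y = scales * np.sqrt(-2.0 * np.log(rng.uniform(size=n)))
    failed = y <= t                                  # uncensored failures
    r_l = [int(np.sum(failed & (comp == l))) for l in range(3)]
    return y[failed], comp[failed], r_l, int(n - failed.sum())

y_obs, comp_obs, (r1, r2, r3), n_cens = simulate_censored_mixture(
    500, (14, 12, 10), 0.5, 0.3, t=25)
```

Repeating this 1000 times and feeding each sample into the estimator formulas reproduces the structure of the study's simulation, though the paper itself works with the analytical posterior expressions.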

From S1–S12 Tables, it can be seen that the extent of over-estimation (under-estimation) of the component and proportion parameters (through the Bayes estimators), using all considered priors and loss functions, is greater for small sample sizes (test termination times) than for large sample sizes (test termination times) at different test termination times (sample sizes). Similarly, the extent of over-estimation (under-estimation) of the component and proportion parameters is smaller for smaller values of the component parameters than for larger values at varying test termination times and sample sizes. It is observed that the difference between the BEs and the assumed parameters reduces to zero with an increase in sample size for different test termination times. The same observation can be made for larger test termination times as compared to smaller ones for varying sample sizes.

It is observed that the PRs of the Bayes estimators using the different priors and loss functions decrease with an increase in sample size at different test termination times. For smaller test termination times, the PRs of the Bayes estimators are larger than the PRs for larger test termination times, irrespective of the prior, loss function and sample size. Also, the PRs of the Bayes estimators are smaller (larger) for smaller (larger) component parameter values for each sample size and test termination time considered in the simulation study.

As far as the problem of selecting a suitable prior is concerned, the SRIGP emerges as the best prior amongst the different non-informative and informative priors considered in this study. On the other hand, the DLF is observed to perform better than PLF and SELF for estimating the component parameters, whereas, for estimating the proportion parameters, SELF is superior to PLF and DLF. It is to be noted that the selection of the best prior (loss function) for a given loss function (prior) is made on the basis of the associated PRs. Also, the selection of the best prior and loss function does not depend on the sample size or the test termination time.

Real data application

The real mixture data are taken from Davis [39]. These data represent hours to failure of a V805 Transmitter Tube, a Transmitter Tube and a V600 Indicator Tube used in aircraft radar sets. Davis [39] showed that the data z can be modeled by a mixture of exponential distributions. A transformation of the exponential random data (z) yields the Rayleigh random data (y); this transformation allows us to use the Davis mixture data for the proposed Bayesian analysis. To obtain type-I right censored data, we fix t = 600 hours, giving type-I right censored data at t = 600 hours on n = 1340 radar sets. The data summary required to evaluate the BEs and PRs is given by: , , , n = 1340, r1 = 866, r2 = 337, r3 = 83, r = r1+r2+r3 = 1286, n − r = 54.
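The text does not spell out the transformation; assuming it is y = √z (under which an exponential variate becomes a Rayleigh variate, a hypothetical reading of the paper's statement), the transformation and censoring summary might look like:

```python
import math
import random

random.seed(1)

# If Z ~ Exponential, then Y = sqrt(Z) is Rayleigh-distributed
# (an assumed form of the transformation; illustrative synthetic data,
# not the Davis failure-time data).
z = [random.expovariate(1.0 / 50.0) for _ in range(10000)]  # mean 50 hours
y = [math.sqrt(v) for v in z]

# Type-I right censoring at a fixed termination point on the z-scale,
# analogous to the paper's t = 600 hours:
t = 600.0
uncensored = [v for v in z if v <= t]
r, censored = len(uncensored), len(z) - len(uncensored)
```

With the real data, the analogous counts are r = 1286 failures and n − r = 54 censored units out of n = 1340.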

The BEs and the PRs using the UP, the JP, the ICP and the SRIGP under SELF, PLF and DLF are presented in S13 Table.

From S13 Table, it is observed that the results based on the real data are compatible with the simulation results. The conclusions about the best prior and the best loss function are also the same as those discussed in Section 7.

Concluding remarks

In this study, we have considered the Bayesian analysis of a 3-component mixture of Rayleigh distributions, using the non-informative (uniform and Jeffreys') and the informative (IC and SRIG) priors under SELF, PLF and DLF, to model the lifetimes of objects. We conducted a comprehensive simulation and a real-life study to judge the relative performance of the Bayes estimators and to address the problems of selecting priors and loss functions at different sample sizes and test termination times. From the simulation results, we observed that an increase in sample size or test termination time provides improved Bayes estimators. The extent of over-estimation (under-estimation) of the Bayes estimators is larger (smaller) for relatively smaller (larger) sample sizes (test termination times) at different test termination times (sample sizes). Furthermore, as the sample size (test termination time) increases (decreases), the PRs of the Bayes estimators decrease (increase) for a fixed test termination time (sample size). However, the PRs of the Bayes estimators are large when the component parameters are relatively large, and vice versa. Also, the DLF (SELF) is observed to be a suitable choice for estimating the component (proportion) parameters. Finally, we conclude that the SRIGP is the more suitable prior under DLF for estimating the component parameters, and, when SELF is used, the SRIGP is the preferable prior for the proportion parameters. Moreover, the same pattern is observed for the JP when only the non-informative priors (UP and JP) are considered.

Supporting Information

S1 File. Derivation of likelihood function, posterior distributions, Bayes estimators and their risks under different priors and loss functions.

https://doi.org/10.1371/journal.pone.0126183.s001

(DOC)

S2 File. Limiting expressions for the Bayes estimators.

https://doi.org/10.1371/journal.pone.0126183.s002

(DOC)

S1 Table. The BEs and the PRs using the UP with λ1 = 14, λ2 = 12, λ3 = 10, p1 = 0.5, p2 = 0.3 and t = 25, 30.

https://doi.org/10.1371/journal.pone.0126183.s003

(DOCX)

S2 Table. The BEs and the PRs using the JP with λ1 = 14, λ2 = 12, λ3 = 10, p1 = 0.5, p2 = 0.3 and t = 25, 30.

https://doi.org/10.1371/journal.pone.0126183.s004

(DOCX)

S3 Table. The BEs and the PRs using the UP with λ1 = 16, λ2 = 14, λ3 = 12, p1 = 0.5, p2 = 0.3 and t = 25, 30.

https://doi.org/10.1371/journal.pone.0126183.s005

(DOCX)

S4 Table. The BEs and the PRs using the JP with λ1 = 16, λ2 = 14, λ3 = 12, p1 = 0.5, p2 = 0.3 and t = 25, 30.

https://doi.org/10.1371/journal.pone.0126183.s006

(DOCX)

S5 Table. The BEs and the PRs using the ICP with λ1 = 14, λ2 = 12, λ3 = 10, p1 = 0.5, p2 = 0.3 and t = 25, 30.

https://doi.org/10.1371/journal.pone.0126183.s007

(DOCX)

S6 Table. The BEs and the PRs using the SRIGP with λ1 = 14, λ2 = 12, λ3 = 10, p1 = 0.5, p2 = 0.3 and t = 25, 30.

https://doi.org/10.1371/journal.pone.0126183.s008

(DOCX)

S7 Table. The BEs and the PRs using the ICP with λ1 = 16, λ2 = 14, λ3 = 12, p1 = 0.5, p2 = 0.3 and t = 25, 30.

https://doi.org/10.1371/journal.pone.0126183.s009

(DOCX)

S8 Table. The BEs and the PRs using the SRIGP with λ1 = 16, λ2 = 14, λ3 = 12, p1 = 0.5, p2 = 0.3 and t = 25, 30.

https://doi.org/10.1371/journal.pone.0126183.s010

(DOCX)

S9 Table. The BEs and the PRs using the UP with λ1 = 11, λ2 = 13, λ3 = 15, p1 = 0.3, p2 = 0.5 and t = 25, 30.

https://doi.org/10.1371/journal.pone.0126183.s011

(DOCX)

S10 Table. The BEs and the PRs using the JP with λ1 = 11, λ2 = 13, λ3 = 15, p1 = 0.3, p2 = 0.5 and t = 25, 30.

https://doi.org/10.1371/journal.pone.0126183.s012

(DOCX)

S11 Table. The BEs and the PRs using the ICP with λ1 = 11, λ2 = 13, λ3 = 15, p1 = 0.3, p2 = 0.5 and t = 25, 30.

https://doi.org/10.1371/journal.pone.0126183.s013

(DOCX)

S12 Table. The BEs and the PRs using the SRIGP with λ1 = 11, λ2 = 13, λ3 = 15, p1 = 0.3, p2 = 0.5 and t = 25, 30.

https://doi.org/10.1371/journal.pone.0126183.s014

(DOCX)

S13 Table. The BEs and the PRs using the UP, the JP, the ICP and the SRIGP under SELF, PLF and DLF.

https://doi.org/10.1371/journal.pone.0126183.s015

(DOCX)

Author Contributions

Conceived and designed the experiments: MT MA. Performed the experiments: MT ZH MA. Analyzed the data: MA BAZ. Contributed reagents/materials/analysis tools: MT MA BAZ. Wrote the paper: ZH MT MA BAZ.

References

  1. Rattanapitikon W. Verification of conversion formulas for computing representative wave heights. Ocean Engineering. 2010; 37: 1554–1563.
  2. Van Vledder GP, Ruessink G, Rijnsdorp DP. Individual wave height distributions in the coastal zone: measurements and simulations and the effect of directional spreading. Coastal Dynamics. 2013: 1799–1810.
  3. Siddiqui MM. Some problems connected with Rayleigh distributions. The Journal of Research of the National Bureau of Standards. 1962; 60D: 167–174.
  4. Ahmed SA, Mahammed HO. A statistical analysis of wind power density based on Weibull and Rayleigh models of "Penjwen Region" Sulaimani/Iraq. Jordan Journal of Mechanical and Industrial Engineering. 2012; 6(2): 135–140.
  5. Chivers RC. The scattering of ultrasound by human tissues, some theoretical models. Ultrasound in Medicine & Biology. 1977; 3: 1–13.
  6. Burckhardt C. Speckle in ultrasound B-mode scans. IEEE Transactions on Sonics and Ultrasonics. 1978; 25: 1–6.
  7. Li LA. Decomposition Theorems, Conditional Probability, and Finite Mixture Distributions. Thesis, State University of New York, Albany. 1983.
  8. Li LA, Sedransk N. Inference about the Presence of a Mixture. Technical Report, State University of New York, Albany. 1982.
  9. Li LA, Sedransk N. Mixtures of distributions: A topological approach. The Annals of Statistics. 1988; 16: 1623–1634.
  10. Harris CM. On finite mixtures of geometric and negative binomial distributions. Communications in Statistics – Theory and Methods. 1983; 12: 987–1007.
  11. Kanji GK. A mixture model for wind shear data. Journal of Applied Statistics. 1985; 12: 49–58.
  12. Jones PN, McLachlan GJ. Laplace-normal mixtures fitted to wind shear data. Journal of Applied Statistics. 1990; 17: 271–276.
  13. McCullagh P. Exponential mixtures and quadratic exponential families. Biometrika. 1994; 81: 721–729.
  14. Sinha SK. Bayesian Estimation. New Age International (P) Limited, Publishers, New Delhi. 1998.
  15. Mendenhall W, Hader RJ. Estimation of parameters of mixed exponentially distributed failure time distributions from censored life test data. Biometrika. 1958; 45: 504–520.
  16. Hebert JL, Scariano SM. Comparing location estimators for exponential mixtures under Pitman's measure of closeness. Communications in Statistics – Theory and Methods. 2005; 33: 29–46.
  17. Sultan KS, Ismail MA, Al-Moisheer AS. Mixture of two inverse Weibull distributions: Properties and estimation. Computational Statistics & Data Analysis. 2007; 51: 5377–5387.
  18. Saleem M, Aslam M. Bayesian analysis of the two-component mixture of the Rayleigh distribution assuming the uniform and the Jeffreys' priors. Journal of Applied Statistical Science. 2008; 16: 493–502.
  19. Saleem M, Aslam M, Economou P. On the Bayesian analysis of the mixture of Power distribution using the complete and censored sample. Journal of Applied Statistics. 2010; 37: 25–40.
  20. Kazmi SMA, Aslam M, Ali S. On the Bayesian estimation for two-component mixture of Maxwell distribution, assuming type-I censored data. International Journal of Applied Science and Technology. 2012; 2: 197–218.
  21. Romeu JL. Censored data. Selected Topics in Assurance Related Technologies (START). 2004; 11: 1–8.
  22. Gijbels I. Censored data. Wiley Interdisciplinary Reviews: Computational Statistics. 2010; 2: 178–188.
  23. Kalbfleisch JD, Prentice RL. The Statistical Analysis of Failure Time Data. John Wiley & Sons, New York. 2011.
  24. Bayes T. An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London. 1763; 53: 370–418.
  25. de Laplace PS. Théorie Analytique des Probabilités. Courcier, Paris. 1820.
  26. Geisser S. On prior distributions for binary trials. The American Statistician. 1984; 38: 244–247.
  27. Jeffreys H. An invariant form for the prior probability in estimation problems. Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences. 1946; 186: 453–461. pmid:20998741
  28. Jeffreys H. Theory of Probability. Clarendon Press, Oxford, UK. 1961.
  29. Bernardo JM. Reference posterior distributions for Bayesian inference. Journal of the Royal Statistical Society, Series B (Methodological). 1979; 41: 113–147.
  30. Berger JO. Statistical Decision Theory and Bayesian Analysis. Springer-Verlag, New York. 1985.
  31. Kadane JB, Dickey JM, Winkler RL, Smith WS, Peters SC. Interactive elicitation of opinion for a normal linear model. Journal of the American Statistical Association. 1980; 75: 845–854.
  32. Gavasakar U. A comparison of two elicitation methods for a prior distribution for a binomial parameter. Management Science. 1988; 34: 784–790.
  33. Al-Awadhi SA, Garthwaite PH. Prior distribution assessment for a multivariate normal distribution: an experimental study. Communications in Statistics – Theory and Methods. 1998; 27: 1123–1142.
  34. Aslam M. An application of prior predictive distribution to elicit the prior density. Journal of Statistical Theory and Applications. 2003; 2: 70–83.
  35. Hahn ED. Re-examining informative prior elicitation through the lens of Markov chain Monte Carlo methods. Journal of the Royal Statistical Society, Series A. 2006; 169: 37–48.
  36. Legendre AM. Nouvelles Méthodes pour la Détermination des Orbites des Comètes: Appendice sur la Méthode des Moindres Carrés. Courcier, Paris. 1806.
  37. Norstrom JG. The use of precautionary loss functions in risk analysis. IEEE Transactions on Reliability. 1996; 45: 400–403.
  38. DeGroot MH. Optimal Statistical Decisions. John Wiley & Sons, New York. 2005.
  39. Davis DJ. An analysis of some failure data. Journal of the American Statistical Association. 1952; 47: 113–150.