
Comparing the variances of several treatments with that of a control treatment: Theory and applications

  • Jingsen Kong,

    Roles Conceptualization, Data curation, Investigation, Resources, Software, Validation, Visualization, Writing – original draft

    Affiliation School of Economics, Jinan University, Guangzhou, China

  • Hezhi Lu

    Roles Formal analysis, Funding acquisition, Methodology, Project administration, Supervision, Writing – review & editing

    2497190659@qq.com

    Affiliations School of Economics and Statistics, Guangzhou University, Guangzhou, China, Lingnan Research Academy of Statistical Science, Guangzhou University, Guangzhou, China

Abstract

A common and important problem in medicine, economics and environmental studies is the comparison of the variances of several treatments with that of a control treatment. Among the existing methods, Spurrier’s optimal test based on the multivariate F distribution has an exact type I error rate. However, it requires equal sample sizes among the treatment groups. To extend the scope of application, in this paper we propose a new efficient test for comparing several variances with a control using the marginal inferential model (MIM). Simulation studies show that the MIM test guarantees the exact type I error rate whether the sample sizes are equal or unequal. Moreover, the power of the MIM test is competitive with that of Spurrier’s optimal test. Finally, two real examples are used to demonstrate the application of the proposed method.

Introduction

In medicine, economics and environmental studies, there are many situations wherein k independent populations are compared to a control population with respect to scale parameters [1]. For example, in medical studies, the variability of testosterone levels among groups of men classified according to their smoking habits is compared with the variability of testosterone levels in healthy people. [2] indicated that smoking has a negative impact on testosterone levels, leading to less variability, since some smokers may have low testosterone levels for other reasons, while healthy persons will have high levels. In this situation, a common problem is whether different types of smokers, e.g., former smokers, light smokers or heavy smokers, have less testosterone variability than nonsmokers. Thus, in the case of k treatment populations π1, π2, …, πk and a control population π0, where the observations from the ith population follow the normal distribution N(μi, σi²), the interest is to test the hypothesis (1) H0: σ1² = σ2² = … = σk² = σ0² versus H1: σi² ≤ σ0² for all i = 1, …, k, with at least one strict inequality, where σi² is the variance of the ith population.

Many studies focus on multiple comparisons of treatments with a control or standard with respect to a parameter of interest under order restrictions. [3, 4] proposed a standard multiple comparison procedure for comparing several treatments with a control. [5] constructed confidence intervals for comparing several normal variances with a control variance in multifactor experiments. [6] provided an algorithm for constructing multiple hypothesis tests. To improve the mean half-square successive difference statistic, [7, 8] proposed a modified percentile bootstrap method for comparing the variances of two independent groups. Moreover, a combination of Levene-type tests with a finite-intersection method for testing the equality of variances against ordered alternatives can be found in [9]. [10] discussed the quality of F-ratio resampling tests for comparing variances. Note that all the test methods mentioned above for comparing variances have been found to be unsatisfactory in terms of type I error probabilities or powers.

To obtain an optimal test procedure with exact control of the type I error rate, [11, 12] recommended the use of sample quasi-ranges as a measure of variance. The distribution of quasi-ranges in samples from a normal population was discussed by [13]. [14] classified normal populations with respect to a control using sample quasi-ranges on censored data. [15] discussed optimal designs for comparing several experimental treatment variances with that of a standard treatment. A one-sided test based on the sample quasi-range was proposed by [16] to test homogeneity against a simple ordered alternative. [17] proposed a test based on isotonic estimators for testing the equality of variances of several normal populations against tree-ordered alternatives. Moreover, [1] proposed an upper one-sided test based on the sample quasi-range to compare the variances of several normal populations with that of a control population. By computing exact critical constants, the sample quasi-range method can control the type I error rate at a preassigned level, α.

The sample quasi-range approach aims at provably efficient inference, and the corresponding test can guarantee the exact type I error rate. Moreover, different from other test methods, Spurrier’s optimal test [15] is a single-step test procedure, while the other test methods based on sample quasi-ranges can be regarded as step-up test procedures for multiple hypothesis testing. In addition, some studies consider testing whether the variances of k populations are not equal to the variance of a control population. However, under certain circumstances, one-sided simultaneous confidence intervals provide more inferential sensitivity than two-sided simultaneous confidence intervals [18]. For example, some upper one-sided tests for comparing several normal variances with a control variance can be found in [1, 15, 17–19].

For lower one-sided tests, [15] provided the optimal test procedure for hypothesis test (1). More specifically, suppose that n0 = n and n1 = n2 = … = nk = m, and denote the sample variance of the ith group by Si², i = 0, 1, …, k. Define the random variables (2) and the test statistics (3)

The distribution of (F1, …, Fk) is a multivariate F distribution, and the marginal distribution of Fi is the F distribution with m − 1 and n − 1 degrees of freedom, i = 1, 2, …, k. Letting F(1) = min⁡(F1, …, Fk), the p-value of the Spurrier test is given by (4) where the observed value of F(1) is the ratio of the minimal sample variance among the k treatments to the sample variance of the control, H is the cdf of χ2 (m − 1) and g is the pdf of χ2 (n − 1). In general, (4) can be well approximated by Gauss–Laguerre numerical quadrature with subroutines to evaluate H.
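Since the exact evaluation of (4) involves the multivariate F distribution, a simple Monte Carlo approximation can illustrate the computation. The sketch below (Python; the function name is ours, and it is an illustration rather than the authors' implementation) approximates P(F(1) ≤ f) under H0 by simulating the common-denominator chi-square structure of F1, …, Fk:

```python
import numpy as np

def spurrier_pvalue_mc(f1_obs, k, m, n, reps=100_000, seed=0):
    """Monte Carlo approximation of the Spurrier p-value P(F_(1) <= f1_obs)
    under H0, where F_i = S_i^2 / S_0^2 shares a common control denominator."""
    rng = np.random.default_rng(seed)
    # Treatment sample variances under H0 (take sigma^2 = 1 w.l.o.g.):
    # S_i^2 ~ chi2(m-1)/(m-1), i = 1, ..., k
    num = rng.chisquare(m - 1, size=(reps, k)) / (m - 1)
    # Control sample variance: S_0^2 ~ chi2(n-1)/(n-1)
    den = rng.chisquare(n - 1, size=(reps, 1)) / (n - 1)
    f_min = (num / den).min(axis=1)   # F_(1) = min(F_1, ..., F_k)
    return float(np.mean(f_min <= f1_obs))
```

Because the k ratios share the control-group denominator, the simulated p-value accounts for their positive dependence, which a naive product of marginal F probabilities would miss.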

Spurrier’s test [15] ensures exact type I error control with competitive efficiency, but it requires equal sample sizes among the treatment groups. [19] indicated that one may design an experiment that meets the sample size requirement, but the final available data might be unequal as a result of unexpected losses. Hence, the goal of this paper is to construct a more efficient test for general cases involving equal and unequal sample sizes based on inferential model (IM) theory [20].

Methodology

Marginal inferential model framework

Different from frequentist and Bayesian inference methods, Fisher and Dempster intended to propose prior-free inference frameworks that produce probabilistic inferential results with desirable frequency properties. However, the small-sample properties of the fiducial argument [21, 22] and Dempster–Shafer theory [23, 24] may not be calibrated for meaningful probabilistic inference. As an alternative, [20] proposed the inferential model (IM) framework for prior-free probabilistic inference. In fact, the IM has some connections to fiducial inference and Dempster–Shafer theory. The key difference among these three methods is the way they handle the auxiliary variables. In particular, the IM’s handling of the auxiliary variables can guarantee desirable frequency properties for all sample sizes.

In marginal inference problems, where only part of the full parameter is of interest, [25] developed a marginal inferential model (MIM) framework for marginal inference. The key idea of the MIM is to reduce the dimension of the auxiliary variable. In general, the MIM starts with a system of equations, called an association, representing a statistical model with unknown parameter θ = (ψ,ξ) for observable data X ~ PX|θ via an auxiliary random variable U. The initial model is expressed as if the goal were to simulate, i.e., (5) where p and a are known functions and U has a known distribution function. To emphasize that θ = (ψ,ξ), we can rewrite the association as (6)

Note that the IM consists of a three-step inference procedure: an association step (A-Step), a prediction step (P-Step), and a combination step (C-Step). Since ψ is the parameter of interest and ξ is the nuisance parameter, the marginal IM proceeds in the following three steps:

A-Step: Suppose that there are functions q, b and c, and new variables V = (V1,V2) ~ PV, such that (6) can equivalently be written as (7a) (7b)

Since the exact value of V2 does not provide any information about the parameter of interest ψ, there is no benefit in retaining component (7b) and trying to predict the auxiliary variable V2. Clearly, the key idea of the MIM is that V1 is generally of lower dimension than U. Therefore, the MIM for ψ is based only on the following association: (8)

The dimension-reduction step of the MIM guarantees efficient inference properties. The result of the A-Step is a set-valued mapping given by (9)
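As a concrete illustration of the reduction from (6) to (8) (a standard example from the marginal IM literature [25], stated here under the normal model used later in this paper), consider a single sample X1, …, Xn ~ N(μ, σ²) with ψ = σ² of interest and ξ = μ a nuisance parameter. The association based on the sufficient statistics is

```latex
\begin{aligned}
\bar{X} &= \mu + \sigma\, n^{-1/2}\, U, & U &\sim \mathrm{N}(0,1),\\
(n-1)\,S^{2} &= \sigma^{2}\, V, & V &\sim \chi^{2}_{\,n-1}.
\end{aligned}
```

The first equation can be solved for μ whatever the value of U, so it carries no information about σ² and plays the role of (7b); the second equation, which involves only the lower-dimensional auxiliary variable V, plays the role of (8).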

P-Step: Following [20], the auxiliary variable V1 can be predicted by specifying an optimal predictive random set S(V1), that is, (10)

If v is the realized value of the unobserved V1, then, from the credibility condition [20], the predictive random set S(V1) should be such that there is a high probability that v ∈ S(V1).

C-Step: Combining the association and the predictive random set S(V1), we obtain the final predictive random subset of ψ, (11)

According to the inference framework of the IM, given an assertion of interest A, the MIM also provides two probabilistic measures of the uncertainty about A. The belief function belX (A) and the plausibility function plX (A) are (12) (13)

In fact, belX (A) and plX (A) can be regarded as the minimum and maximum probabilities supporting the truth of assertion A. Note that the plausibility function can easily be used to create a frequentist decision rule: we reject the null hypothesis if plX (A) ≤ α. Moreover, the 100(1 − α)% MIM confidence interval can be obtained by computing the confidence limits from {ψ: plX (ψ) > α}.

The IM-based method is exact in the sense that it does not need any asymptotic approximation. Moreover, IM’s output has a meaningful interpretation within and not just across experiments. According to [20], the IM test is valid if (14)

Moreover, if “≤ α” can be replaced by “= α”, then the MIM method is efficient.

The proposed MIM test

In this section, we propose a marginal IM-based method for testing hypothesis (1). Suppose that the observations from the ith population πi follow the normal distribution N(μi, σi²); then the null hypothesis in (1) can be transformed into an assertion B, where i = 0, 1, …, k, ni ∈ {1, 2, 3, …, l}, and k and l are positive integers. If evidence from the observable data suggests that assertion B is false, the null hypothesis is rejected. Let X̄i and Si² denote the sample mean and sample variance of the ith group. According to [26], the conditional association model based on the minimal sufficient statistics is given by (15) where Ui ~ N(0, 1) and Vi ~ χ2 (ni − 1) are independent. Moreover, this association model can be equivalently simplified as (16)

Here the variances are the parameters of interest. For any observed data and any value of Ui, there exists a μi such that the first equation in (16) holds. Since knowing Ui provides no direct information about the variances, there is no benefit in retaining the first equation in (16). To reduce the dimension of the auxiliary variable, we can ignore Ui (i = 0, 1, …, k) and work directly with the auxiliary variables Vi (i = 0, 1, …, k). Therefore, the initial association model of the marginal IM can be expressed as (17)

Note that the associations in (17) play two distinct roles. Before the experiment, they characterize how likely the observable data are. Once the data are observed, the true variances could be obtained by solving the above equations if the values of the auxiliary variables were known. Clearly, these true values will never be known, but the distribution of the auxiliary variables is known exactly. Therefore, we can predict the auxiliary variables so that we can make inferences about the variances.

However, the difficulties we encounter are as follows. On the one hand, since the unknown parameter is (k + 1)-dimensional, the auxiliary random variable should also be (k + 1)-dimensional, which might lead to poor efficiency, especially when k is large. On the other hand, the hypothesis actually imposes constraints, i.e., (18)

Since the parameters are constrained, it is challenging to make inferences about the assertion B, and some additional techniques or strategies are needed.

First, we take a different perspective on (18): it is regarded as the parameter space rather than as a constraint. This is reasonable when we look at the null and alternative hypotheses in (1). If we take (18) as the parameter space, a straightforward transformation of (1) is (19)

Define (20)

Then the hypothesis can be transformed into (21)

We can see that (21) and (1) are equivalent under (18). Therefore, we only need to make inferences about the assertion B = {θ = 1}. Note that assertion B involves only a one-dimensional parameter. Next, we rewrite association (17) as follows: (22) (23)

Using the partial information from the null hypothesis in (1), namely that all the variances are equal, the common variance is a constant. Hence, taking the minimum over the treatment groups on both sides of (23) yields (24)

Combining equations (22) and (24), (20) can be written as (25)

Clearly, the auxiliary variables can be separated from the data. According to [27], the distribution of the resulting quantity does not rely on the observable data. Let F denote the distribution function of the right-hand side of equation (25); then we can construct a new MIM-based procedure to make inferences about the assertion B = {θ = 1} as follows:

A-Step: The final association model of the MIM is given by (26) where u~Unif(0,1) and F−1 is the inverse function of F.

P-Step: Different kinds of assertions have different forms of the corresponding valid predictive random sets. For the left-sided assertion in (21), a possible optimal predictive random set S(u) for predicting the auxiliary variable u is given by (27)

Theorem 1. According to [20], for a left-sided assertion, the predictive random set S(u) = {u′: 0 ≤ u′ ≤ U}, U ~ Unif(0, 1), is optimal in the sense that PS(u) {u ∈ S(u)} ~ Unif(0,1).

Proof. The predictive random set S(u) satisfies PU{QS(u) (U) ≥ 1 − α} = α for each α ∈ (0,1), which establishes optimality. Hence, the proof is complete.
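The uniformity property in Theorem 1 can be checked empirically. The short sketch below (Python; illustrative only) draws the endpoint U of the predictive random set S(U) = [0, U] and verifies that the probability of covering a fixed value u equals 1 − u, so that, for u ~ Unif(0, 1), the coverage probability is itself uniformly distributed:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.uniform(size=200_000)        # endpoints of the random sets S(U) = [0, U]
for u in (0.1, 0.5, 0.9):
    coverage = np.mean(u <= U)       # P{u in S(U)} = P{U >= u} = 1 - u
    print(f"u = {u}: coverage approx. {coverage:.3f}")
```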

C-Step: Combining (26) and (27), we have (28)

Then, the plausibility function for assertion B = {θ = 1} is (29)
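For either equal or unequal sample sizes, the plausibility in (29) can be approximated by Monte Carlo. The sketch below (Python; the function name is ours) assumes, consistent with (25) and (29), that pl(B) reduces to the null distribution function F of the minimum treatment-to-control variance ratio, evaluated at its observed value; it is an illustration rather than the authors' implementation:

```python
import numpy as np

def mim_plausibility(s2, n, reps=200_000, seed=1):
    """Sketch of pl(B) for B = {theta = 1}: the null CDF F of
    T = min_i S_i^2 / S_0^2, evaluated at the observed ratio.
    s2 = (S_0^2, ..., S_k^2), n = (n_0, ..., n_k); sizes may differ."""
    rng = np.random.default_rng(seed)
    s2 = np.asarray(s2, dtype=float)
    t_obs = np.min(s2[1:] / s2[0])
    # Under H0 all variances are equal (take sigma^2 = 1):
    # S_i^2 ~ chi2(n_i - 1)/(n_i - 1), with sizes allowed to differ
    sim = [rng.chisquare(ni - 1, size=reps) / (ni - 1) for ni in n]
    t_null = np.min(np.column_stack(sim[1:]) / sim[0][:, None], axis=1)
    return float(np.mean(t_null <= t_obs))   # Monte Carlo estimate of F(t_obs)
```

Applied to the sample variances and group sizes of Example 1 below, this sketch returns a value close to the reported 0.0117.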

Theorem 2. The proposed MIM inference method can control the type I error rate at a preset level α ∈ (0,1), i.e.,

Proof. Under the null hypothesis, the plausibility pl(B) follows the Unif(0,1) distribution by Theorem 1, so P{pl(B) ≤ α} = α for each α ∈ (0,1). Hence, the proof is complete.

Simulation study

The proposed marginal IM method is an exact testing method, and the efficiency of MIM inference does not require simulation verification. However, the p-value of Spurrier’s optimal test must be approximated by Gauss–Laguerre numerical quadrature, so it is an approximate method. Moreover, owing to large-sample asymptotics, the performances of the two tests tend to coincide as the sample sizes grow. For a better comparison, we conduct Monte Carlo simulations to assess the performances of the MIM-based test and Spurrier’s test in various small-sample situations. The comparison criteria are the type I error rate and the power. The parameter settings follow [20] and [10]. In the experiment, we consider various cases with different sample sizes (n0, n1, …, nk) and different numbers of treatment groups, k = 3, 5, and 7. In each case, we repeat the experiment 10,000 times at the 5% significance level, which yields a 95% Monte Carlo interval of (0.0457, 0.0543) for a true type I error rate of 5%. Tables 1 to 3 summarize the comparisons of the type I error rates and empirical powers of the two methods for small sample sizes. Note that the first component of the combination (n0, n1, …, nk) is for the control group; for example, (n0, n1, n2, n3) = (5, 6, 7, 8) indicates that the control group has five observations.

Table 1. The estimated type I error rates and powers for k = 3.

https://doi.org/10.1371/journal.pone.0296376.t001

Table 2. The estimated type I error rates and powers for k = 5.

https://doi.org/10.1371/journal.pone.0296376.t002

Table 3. The estimated type I error rates and powers for k = 7.

https://doi.org/10.1371/journal.pone.0296376.t003

When the sample sizes (n0, n1, …, nk) are equal, both the MIM test and the Spurrier test exactly control the type I error rate, and their powers and type I error rates are almost the same. One possible reason is that both tests use the same information from the given data: the minimum sample variance among the treatment groups and the sample variance of the control group. Hence, the p-value of Spurrier’s test in (4) and the plausibility of the MIM test in (29) are similar in expression. Moreover, the pivotal statistics of Spurrier’s test are order statistics of the multivariate F distribution. Although IM theory differs from classical statistics, we may conclude that the distribution function F in (29) is connected to the order statistics of the multivariate F distribution. However, Spurrier’s test does not work when the sample sizes of the treatment groups are unequal. In these cases, the proposed MIM test still controls the type I error rate at the preset level, with all estimates falling within (0.0457, 0.0543). Therefore, the MIM test outperforms Spurrier’s test: it has more flexible applicability while maintaining competitive efficiency.

To be more informative, the efficiency property in Theorem 1 is automatic if the Monte Carlo approximation of pl(B) follows the uniform distribution on (0,1). To obtain a better understanding of the good performance of the MIM test, for each (n0, n1, n2, n3) ∈ {(5, 5, 5, 5), (5, 10, 10, 10), (5, 5, 10, 10), (5, 6, 7, 8)}, letting all group means be 0 and all variances be 1 (the null configuration), we generate 10,000 normal random samples and obtain a Monte Carlo estimate of the distribution function of pl(B). Figs 1–4 show that the distribution function of the approximated pl(B) is sufficiently close to that of Unif(0,1). Therefore, the MIM test controls the type I error rate exactly.

Fig 1. Empirical distribution functions of pl(B) (solid) compared with that of Unif(0,1) (dotted) based on the random samples, where (n0, n1, n2, n3) = (5, 5, 5, 5).

https://doi.org/10.1371/journal.pone.0296376.g001

Fig 2. Empirical distribution functions of pl(B) (solid) compared with that of Unif(0,1) (dotted) based on the random samples, where (n0, n1, n2, n3) = (5, 10, 10, 10).

https://doi.org/10.1371/journal.pone.0296376.g002

Fig 3. Empirical distribution functions of pl(B) (solid) compared with that of Unif(0,1) (dotted) based on the random samples, where (n0, n1, n2, n3) = (5, 5, 10, 10).

https://doi.org/10.1371/journal.pone.0296376.g003

Fig 4. Empirical distribution functions of pl(B) (solid) compared with that of Unif(0,1) (dotted) based on the random samples, where (n0, n1, n2, n3) = (5, 6, 7, 8).

https://doi.org/10.1371/journal.pone.0296376.g004
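The uniformity check behind Figs 1–4 can be sketched as follows (Python; function names are ours, and the sketch again assumes that pl(B) reduces to the null distribution function of the minimum variance ratio). The null reference distribution is simulated once, then pl(B) is computed for many datasets generated under the null and compared with Unif(0,1) via the Kolmogorov–Smirnov distance:

```python
import numpy as np

def null_min_ratio_sample(n, reps, rng):
    # Null distribution of T = min_i S_i^2 / S_0^2 (all variances equal to 1)
    sim = [rng.chisquare(ni - 1, size=reps) / (ni - 1) for ni in n]
    return np.sort(np.min(np.column_stack(sim[1:]) / sim[0][:, None], axis=1))

def check_uniformity(n=(5, 6, 7, 8), n_data=2000, reps=100_000, seed=2):
    rng = np.random.default_rng(seed)
    grid = null_min_ratio_sample(n, reps, rng)   # reference null sample of T
    pls = np.empty(n_data)
    for j in range(n_data):
        # Generate one dataset under the null: N(0, 1) in every group
        s2 = [np.var(rng.standard_normal(ni), ddof=1) for ni in n]
        t_obs = min(s2[i] / s2[0] for i in range(1, len(n)))
        pls[j] = np.searchsorted(grid, t_obs) / reps   # pl(B) = F(t_obs)
    # Kolmogorov-Smirnov distance between the pl(B) sample and Unif(0, 1)
    pls.sort()
    return float(np.max(np.abs(pls - np.arange(1, n_data + 1) / n_data)))
```

Simulating the reference distribution once, rather than inside the data loop, keeps the check cheap: each generated dataset only requires a binary search on the sorted null sample. A small returned distance is the numerical counterpart of the visual agreement in Figs 1–4.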

Applications

In this section, we use two real examples to illustrate the proposed MIM test.

Example 1 [2] considered four groups of men in the 35–45 years age bracket: (i) nonsmokers (control), (ii) former smokers (treatment), (iii) light smokers (treatment), and (iv) heavy smokers (treatment). Each group consisted of ten men, and Table 4 shows the testosterone levels measured in μg/dl. It is known that smoking has a negative impact on testosterone levels, leading to less variability, since some smokers may have low testosterone levels for other reasons, while healthy people will have high levels. One question of interest is whether testosterone levels in any of the three smoking groups (former, light and heavy smokers) are less variable than in nonsmokers.

By calculation, the p-values of the Shapiro–Wilk normality tests for the four groups are 0.5540, 0.6516, 0.4525 and 0.2398, respectively, so we do not reject the normality assumption for the data. The four groups give sample variances of 0.0520, 0.0389, 0.0250 and 0.0075. Moreover, the MIM and Spurrier tests give the same p-value of 0.0117. Hence, we can reject the null hypothesis at significance level α = 0.05; i.e., at least one smoking group has less variability in testosterone levels than the nonsmoking group.

Different from other methods, the MIM method has a significant advantage in that it can provide probabilistic summaries of the information in the data concerning the quantity of interest. To be more informative, we plot the plausibility function pl(B), where B = {θ}, as a function of θ in Fig 5. By locating α on the vertical axis, the corresponding 100(1 − α)% MIM confidence interval can easily be read off. More importantly, each point in the MIM interval is individually sufficiently plausible.

Fig 5. Plausibility function of the MIM based on the sample variances (S02, S12, S22, S32) = (0.0520, 0.0389, 0.0250, 0.0075), as a function of θ.

https://doi.org/10.1371/journal.pone.0296376.g005

Example 2 To demonstrate the flexibility of the MIM method, the second dataset, shown in Table 5 [3], contains blood count measurements on three groups of animals, one of which served as a control while the other two were treated with two drugs. Since the Shapiro–Wilk normality tests for the three groups give p-values of 0.4834, 0.6942 and 0.5483, we do not reject the normality assumption for this dataset. Moreover, the sample variances of the control, Drug A and Drug B groups are 0.8841, 0.8165 and 2.4240, respectively.

The problem of interest is to test whether the variability in blood counts following treatment with Drug A or Drug B is smaller than that in the control. Because of accidental losses, the numbers of animals in the three groups are unequal, so the existing methods cannot be applied to these data. As an alternative, from Fig 6, the MIM gives a plausibility of 0.6804 in this situation. Hence, there is no significant evidence to reject the null hypothesis that the variability in blood counts following treatment with Drug A and Drug B is the same as that of the control.

Fig 6. Plausibility function of the MIM based on the sample variance (S02, S12, S22) = (0.8841, 0.8165, 2.4240), as a function of θ.

https://doi.org/10.1371/journal.pone.0296376.g006

Discussion

In applied statistics, a common and important problem is comparing the variances of experimental treatments with that of a standard or control treatment under the assumption that the measurements are independent and normally distributed. The existing test procedures for testing the homogeneity of variances against a control group are not well developed and require equal sample sizes. In real data analysis, the requirement of equal sample sizes among experimental treatments may not be satisfied because of accidental losses or other unexpected circumstances. Hence, a more general test method is needed.

It is crucial to construct an appropriate prior-free, frequency-calibrated testing method. In this paper, we propose a new test method based on the marginal inferential model framework. The proposed MIM method makes at least three contributions. First, different from the general IM framework, the new MIM method utilizes partial information from the null hypothesis to construct an exact testing method; this idea complements the theory of precise statistical inference. Second, the association model constructed in (26) is the key to the exact inference of the MIM: since the distribution of the right-hand side of equation (25) does not rely on the observable data, the MIM method has an exact type I error rate and does not require simulation verification. Finally, the MIM test does not require equal sample sizes among the treatment groups, whereas the other tests do. Note that the MIM also has the advantage of providing valid probabilistic uncertainty quantification. Unlike the p-value of Spurrier’s test, the output of the MIM test, i.e., the plausibility, is posterior-probabilistic in nature and therefore has a meaningful interpretation within, and not just across, experiments. Therefore, the plausibility function provides more information than the p-value of the frequentist approach, because even a large p-value cannot “confirm” the truth of the null hypothesis.

Since we focus on comparing the variances of normal distributions, the underlying distribution of the data needs to be normal. A potential challenge for the proposed procedure (as well as for other methods based on parametric models) is that we do not know the true distribution of the data. In our real-data applications, we apply the Shapiro–Wilk test to check whether the data are normally distributed. Even though the p-values in the two real-data examples are greater than the specified significance level, the data may still be nonnormal, especially with a relatively small sample size. One possible way to alleviate this challenge is to increase the sample size.

Different from traditional IM-based test methods, the proposed MIM solution uses part of the information given in the null hypothesis to reduce the dimension of the auxiliary variable and thereby gains efficiency. This idea could be applied to other multiple comparison procedures for comparing several treatments with a control. For instance, the methodology can be extended to comparisons of normal means, where the mean is of interest and the variance is a nuisance parameter. In this case, the marginal associations (e.g., Eqs (17) and (24)–(26)) would be equations concerning the normal means. Comparison of means is more complicated, since different marginalization techniques might yield different inferential results; indeed, we are still looking for the best marginalization technique for this problem, and some studies are ongoing. Finally, for two-sided tests, the auxiliary variable is two-dimensional, so there could be interest in the simultaneous prediction of several auxiliary variables; the corresponding optimal predictive random set needs further study.

Supporting information

Acknowledgments

We are grateful to the academic editor and anonymous referees, whose comments helped to significantly improve our manuscript. We thank Prof. JH and Prof. LY for their helpful suggestions about the manuscript.

References

  1. Singh RC, Singh P. Comparing several normal variances with a control using sample quasi range. Communications in Statistics—Simulation and Computation. 2018; 49:396–407.
  2. Le CT. Some tests for linear trend of variances. Communications in Statistics—Theory and Methods. 1994; 23:2269–2282.
  3. Dunnett CW. A multiple comparison procedure for comparing several treatments with a control. Journal of the American Statistical Association. 1955; 50:1096–1121.
  4. Dunnett CW. New tables for multiple comparisons with a control. Biometrics. 1964; 20:482–491.
  5. Bechhofer RE. Multiple comparisons with a control for multiply-classified variances of normal populations. Technometrics. 1968; 10:715–718.
  6. Kwong KS. An algorithm for construction of multiple hypothesis testing. Computational Statistics. 2001; 16:165–171.
  7. Wilcox RR. An improved method for comparing variances when distributions have non-identical shapes. Computational Statistics & Data Analysis. 1992; 13:163–172.
  8. Wilcox RR. Comparing the variances of two independent groups. British Journal of Mathematical and Statistical Psychology. 2002; 55:169–175. pmid:12034018
  9. Noguchi K, Gel YR. Combination of Levene-type tests and a finite-intersection method for testing equality of variances against ordered alternatives. Journal of Nonparametric Statistics. 2010; 22:897–913.
  10. Pauly M. Discussion about the quality of F-ratio resampling tests for comparing variances. TEST. 2011; 20:163–179.
  11. Chu JT. Some uses of quasi-ranges. The Annals of Mathematical Statistics. 1957; 28:173–180.
  12. Leone FC, Rutenberg YH, Topp CW. The use of sample quasi-ranges in setting confidence intervals for the population standard deviation. Journal of the American Statistical Association. 1961; 56:260–272.
  13. Cadwell JH. The distribution of quasi-ranges in samples from a normal population. The Annals of Mathematical Statistics. 1953; 24:603–613.
  14. Patel JK, Wyckoff J. Classifying normal populations with respect to control using quasi ranges on censored data. American Journal of Mathematical and Management Sciences. 1990; 10:367–385.
  15. Spurrier JD. Optimal designs for comparing the variances of several treatments with that of a standard treatment. Technometrics. 1992; 34:332–339.
  16. Singh P, Gill AN. A one-sided test based on sample quasi ranges. Communications in Statistics—Theory and Methods. 2005; 33:835–849.
  17. Singh P, Gill AN, Kumar N. A test of homogeneity of several variances against tree ordered alternative. Statistics & Probability Letters. 2009; 79:2315–2320.
  18. Singh P, Goyal A, Gill AN. A note on comparing several variances with a control variance. Statistics & Probability Letters. 2010; 80:1995–2002.
  19. Kong JS, Jin H, Lu HZ, Lin JY, Jin K. An inferential model-based method for testing homogeneity of several variances against tree-ordered alternatives. International Journal of Approximate Reasoning. 2023; 152:344–354.
  20. Martin R, Liu CH. Inferential models: a framework for prior-free posterior probabilistic inference. Journal of the American Statistical Association. 2013; 108:301–313.
  21. Fisher RA. Statistical Methods and Scientific Inference. New York: Hafner Press; 1973.
  22. Hannig J. On generalized fiducial inference. Statistica Sinica. 2009; 19:491–544.
  23. Dempster AP. The Dempster–Shafer calculus for statisticians. International Journal of Approximate Reasoning. 2008; 48:365–377.
  24. Shafer G. A Mathematical Theory of Evidence. Princeton: Princeton University Press; 1976.
  25. Martin R, Liu CH. Marginal inferential models: prior-free probabilistic inference on interest parameters. Journal of the American Statistical Association. 2015; 110:1621–1631.
  26. Martin R, Liu CH. Conditional inferential models: combining information for prior-free probabilistic inference. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2015; 77:195–217.
  27. Lu HZ, Cai FJ, Li Y, Ou XH. Accurate interval estimation for the risk difference in an incomplete correlated 2×2 table: Calf immunity analysis. PLoS ONE. 2022; 17(7):e0272007. pmid:35867721