
On closed-form tight bounds and approximations for the median of a gamma distribution


  • Richard F. Lyon

Abstract

The median of a gamma distribution, as a function of its shape parameter k, has no known representation in terms of elementary functions. In this work we use numerical simulations and asymptotic analyses to bound the median, finding bounds of the form 2−1/k(A + Bk), including an upper bound that is tight for low k and a lower bound that is tight for high k. These bounds have closed-form expressions for the constant parameters A and B, and are valid over the entire range of k > 0, staying between 48 and 55 percentile. Furthermore, an interpolation between these bounds yields closed-form expressions that more tightly bound the median, with absolute and relative margins to both upper and lower bounds approaching zero at both low and high values of k. These bound results are not supported with analytical proofs, and hence should be regarded as conjectures. Simple approximation expressions between the bounds are also found, including one in closed form that is exact at k = 1 and stays between 49.97 and 50.03 percentile.

Introduction

The gamma distribution PDF is f(x; k, θ) = xk−1e−x/θ/(Γ(k)θk), but we’ll use θ = 1 because both the mean and median simply scale with this parameter. Thus we use this PDF with just the shape parameter k, with k > 0 and x ≥ 0:

p(k, x) = xk−1e−x/Γ(k)

The mean of this distribution, μ, is well known to be μ(k) = k. The median ν(k) is the value of x at which the CDF equals one-half:

∫0ν(k) xk−1e−x dx / Γ(k) = P(k, ν(k)) = 1/2

This equation has no easy solution, but the median is well known to be a bit below the mean, bounded by [1]

k − 1/3 < ν(k) < k
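
As a quick numerical illustration (a sketch in Python with SciPy, ours rather than the paper's, which reports its computations in Matlab), the median can be evaluated by inverting the regularized lower incomplete gamma function, and the Chen and Rubin bounds checked directly:

```python
# Sketch: numerically evaluate the gamma median (theta = 1) and check
# the Chen & Rubin bounds k - 1/3 < nu(k) < k.
from scipy.special import gammaincinv

def gamma_median(k):
    """Median nu(k): the x at which the regularized lower incomplete gamma P(k, x) = 1/2."""
    return gammaincinv(k, 0.5)

for k in [0.1, 0.5, 1.0, 2.0, 10.0]:
    nu = gamma_median(k)
    assert k - 1/3 < nu < k          # Chen & Rubin [1]
    print(f"k={k:5.2f}  median={nu:.6f}  mean={k:.6f}")
```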

Bounds that are tighter in some part of the shape parameter range can be obtained from the known Laurent series partial sums [2, 3], or from the low-k asymptote and bounds of Berg and Pedersen [3].

The Chen and Rubin bounds are tight [1]; the upper bound in the low-k limit and the lower bound in the high-k limit. Recently, Gaunt and Merkle [4] proved that the line of slope 1 that intersects ν(k) at the known value ν(1) = log 2 is an upper bound for k ≥ 1; that is, ν(k)<k − 1 + log 2, much tighter than the ν(k)<k bound, leveraging the prior result at integers by Choi [2] and the result by Berg and Pedersen that the slope of the median ν′(k) is everywhere less than 1 [3]. As shown in Fig 1, this new upper bound can be combined with a chord for 0 ≤ k ≤ 1, based on convexity shown by Berg and Pedersen [5]. Convexity also implies that any tangent line is a lower bound, and we show later how to find the slope ν′(1) at the point where the value ν(1) = log 2 is known; that new linear lower bound is included in Fig 1 with the prior linear and piecewise-linear bounds.

Fig 1. Linear and piecewise-linear bounds.

The bounds k − 1/3 < ν(k) < k and ν(k) > 0 (solid lines) are shown along with the true value (solid curve), the piecewise-linear bound that combines the recent linear bound for k > 1 [4] with a chord segment (dashed lines), and the linear lower bound that is tangent at k = 1 (dash-dot line). The region k < 1 is not very usefully bounded.

https://doi.org/10.1371/journal.pone.0251626.g001

A Laurent series for ν(k) with rational coefficients has been discovered, with deep connections to some math by Ramanujan. Choi [2] applied Ramanujan’s work to this particular question, providing 4 coefficients (through the k−3 term). Berg and Pedersen [3], based on work by Marsaglia [6], extended this to 10 coefficients. Neither commented on the radius of convergence, which appears to be in the neighborhood of k = 1. For large enough k and number of terms N, the partial sums of the series yield excellent approximations, but for k < 1 they are useless; convergence near k = 1 is very slow. Thus (where Choi [2] had 144 instead of the correct 2³ ⋅ 23 = 184):

ν(k) ≈ k − 1/3 + 8/(405k) + 184/(25515k2) + 2248/(3444525k3) − ⋯

Partial sums of a Laurent series are not generally bounds, but the first two (k and k − 1/3) are upper and lower bounds [1], respectively, and the sums ending with −3 and −5 powers of k also appear (numerically) to be upper and lower bounds, respectively.
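
The partial sums are easy to compare against the numerical median; the following sketch (our own illustration, using the coefficients quoted above) shows the series is excellent at high k and poor below k = 1:

```python
# Sketch: compare Laurent-series partial sums (coefficients quoted above)
# to the numerically computed median.
from scipy.special import gammaincinv

def series_median(k, terms=4):
    coeffs = [1.0, -1/3, 8/405, 184/25515]     # multiply k, 1, 1/k, 1/k^2
    powers = [k, 1.0, 1/k, 1/k**2]
    return sum(c * p for c, p in zip(coeffs[:terms], powers[:terms]))

for k in [0.5, 1.0, 2.0, 5.0, 20.0]:
    nu = gammaincinv(k, 0.5)
    approx = series_median(k)
    print(f"k={k:5.1f}  median={nu:.8f}  series={approx:.8f}  error={approx - nu:+.2e}")
```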

Berg and Pedersen [3] also derived an asymptote for small k, which we call ν0(k): ν0(k) = e−γ 2−1/k, where γ ≈ 0.577216 is the Euler–Mascheroni constant. This asymptote is a lower bound, as we will show. Berg and Pedersen [3] also provide an upper bound ν(k) < e−1/3k k, which at high k is just above k − 1/3, and a lower bound ν(k) > 2−1/k k.

These previous known bounds are illustrated in Fig 2, where upper and lower bounds are distinguished by different line styles. The factor 2−1/k is the key to good approximations and bounds, and can be divided out to reduce the dynamic range of values on later plots of ν versus k. For the range 0.001 < k < 100, this reduces the dynamic range we need to work with by about 300 orders of magnitude.

Fig 2. Previously known bounds.

Previously published bounds (lower bounds solid, upper bounds dashed) for the median of a gamma distribution (heavy dotted), are good at high k or low k, but not both. At the left, at k = 0.01, the median is near 10−30. Only ν0(k) is close at low k, and it was published as an asymptote, but not a bound; it is strictly less than our new lower bound νΓ(k) (dotted).

https://doi.org/10.1371/journal.pone.0251626.g002

Others have shown good bounds and approximations where k is an integer, that is, for the Erlang distribution [2, 7–9]; these do not help us for 0 < k < 1. The Wilson and Hilferty cube-root transformation [10] of chi-square leads to an approximation at half-integers that is apparently an upper bound: ν(k) < k(1 − 1/(9k))3 = k − 1/3 + 1/(27k) − 1/(729k2) for 2k = 1, 2, 3, …; if it is, then dropping the final negative term would make it an upper bound for all k: ν(k) < k − 1/3 + 1/(27k), which is a bit tighter than the upper bound that Berg and Pedersen [3] proved from their upper bound ν(k)<ke−1/3k.

We seek upper and lower bounds that are tighter, especially in the middle part of the k range (near k = 1), than are previously known. Further, we seek simple approximation formulae for the median, leveraging these bounds via interpolation between them. Main results are summarized in two tables in sections to follow.

The approach of approximating functions by interpolating between upper and lower bounds has been discussed by Barry [11], who used minimax optimization to find numeric parameters in an interpolation-between-bounds approximation to the exponential integral. We are not aware of an interpolation approach being used to find improved bounds in closed form, as we propose here.

A family of asymptotes and bounds

The Berg and Pedersen lower bounds 2−1/k e−γ and 2−1/k k are tight within their families 2−1/k A and 2−1/k Bk, but not as tight as possible within the wider two-parameter family 2−1/k(A + Bk). We find (via numeric and asymptotic evidence, but without proof) that the sum of these two is the uniquely tight upper bound in that family, and that there is a range of tight lower bounds in the family.

To improve on the lower bounds, and to motivate the family that we consider further, first solve for ν(k) in a simple approximation to the distribution’s integral, using e−x < 1 for x > 0:

1/2 = ∫0ν(k) xk−1e−x dx / Γ(k) < ν(k)k/Γ(k + 1), so ν(k) > νΓ(k) = (Γ(k + 1)/2)1/k = 2−1/k Γ(k + 1)1/k,

which is a tight lower bound, and is a good approximation for k < 0.1, but not so great at high k—and not what we consider a closed form, due to the gamma function. The factor 2−1/k, common to this bound and to Berg and Pedersen’s lower bound and asymptote, comes from a focus on low k instead of the usual high k, and is key to our approach.
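
A small sketch (ours, using log-gamma for numerical stability) evaluating νΓ(k) and confirming numerically that it sits below the median:

```python
# Sketch: nu_Gamma(k) = (Gamma(k+1)/2)^(1/k) = 2^(-1/k) Gamma(k+1)^(1/k),
# computed via log-gamma for stability and checked as a lower bound.
import numpy as np
from scipy.special import gammaln, gammaincinv

def nu_gamma_lower(k):
    return np.exp((gammaln(k + 1.0) - np.log(2.0)) / k)

for k in [0.05, 0.1, 1.0, 10.0]:
    nu = gammaincinv(k, 0.5)
    lo = nu_gamma_lower(k)
    assert lo < nu
    print(f"k={k:6.2f}  nu_Gamma={lo:.6e}  median={nu:.6e}")
```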

As an aside, since νΓ(k) is a lower bound, we can prove that Berg and Pedersen’s asymptote ν0(k) = e−γ 2−1/k is a lower bound by showing that e−γ < Γ(k + 1)1/k. That limk → 0 Γ(k + 1)1/k = e−γ follows from the Taylor series about 0 of Γ(k + 1), which is 1 − γk + O(k2). That Γ(k + 1)1/k increases monotonically from there, even though Γ(k + 1) is decreasing, is proved by showing that its derivative is everywhere positive. Differentiating, in terms of the digamma function ψ(0), the logarithmic derivative of the gamma function, we find:

(d/dk) Γ(k + 1)1/k = Γ(k + 1)1/k (kψ(0)(k + 1) − log Γ(k + 1)) / k2

The only factor here that is not obviously positive for k > 0 is kψ(0)(k + 1) − log Γ(k + 1), which at k = 0 is equal to 0, and has a surprisingly simple derivative:

(d/dk)(kψ(0)(k + 1) − log Γ(k + 1)) = kψ(1)(k + 1)

Here ψ(1) is the trigamma function, the derivative of the digamma function. This derivative is positive since the trigamma function, a special case of the Hurwitz zeta function, is positive for real arguments, because it has a series expansion with all positive terms:

ψ(1)(z) = Σn≥0 1/(z + n)2

Therefore Γ(k + 1)1/k is increasing, as was also obvious numerically, so the Berg and Pedersen asymptote is a lower bound.
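
The monotonicity argument can be spot-checked numerically; this sketch (not part of the proof) evaluates the factor kψ(0)(k + 1) − log Γ(k + 1), its derivative kψ(1)(k + 1), and the power of the gamma function itself on a grid:

```python
# Sketch: numerical spot-check that f(k) = k*psi(k+1) - log Gamma(k+1) is >= 0,
# that its derivative k*trigamma(k+1) is > 0, and that Gamma(k+1)^(1/k) increases.
import numpy as np
from scipy.special import gammaln, psi, polygamma

k = np.linspace(1e-6, 50.0, 2001)
f = k * psi(k + 1.0) - gammaln(k + 1.0)
f_prime = k * polygamma(1, k + 1.0)
pow_gamma = np.exp(gammaln(k + 1.0) / k)      # Gamma(k+1)^(1/k)

assert np.all(f >= 0) and np.all(f_prime > 0)
assert np.all(np.diff(pow_gamma) > 0)
print(f"Gamma(k+1)^(1/k) rises from {pow_gamma[0]:.6f} (~exp(-gamma)) to {pow_gamma[-1]:.3f}")
```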

The νΓ(k) expression resembles the “quantile mechanics” boundary condition for the gamma distribution from Steinbrecher and Shaw [12]. More generally, their work implies that (uΓ(k + 1))1/k is a lower bound for the u quantile of the gamma distribution, if the coefficients of their power-series recurrence are all positive, for all 0 < u < 1 and all k > 0, as they appear to be. Nevertheless, since they don’t claim it as a bound, we say that νΓ(k) is new. If their coefficients are all positive, then partial sums of their power series form a family of lower bounds.

The lower bound νΓ(k) converges with the Berg and Pedersen asymptote at low k. We can improve Berg and Pedersen’s asymptote in closed form by utilizing another term. A symbolic calculus system finds for us the next Taylor series terms about k = 0 for the power of the gamma function:

Γ(k + 1)1/k = e−γ exp(π2k/12 − ζ(3)k2/3 + ⋯) = e−γ(1 + π2k/12 + O(k2))

With the additional term, we have an improved approximation, which has much less relative error than Berg and Pedersen’s at low k, and has a high-k behavior nearly proportional to k (but is not a lower bound because we made it larger by ignoring a next negative term):

ν1(k) = 2−1/k e−γ(1 + π2k/12) = 2−1/k(e−γ + (e−γπ2/12)k)

Inspired by this asymptotic approximation, we consider members of this family of functions, with coefficients A and B, and analyze which ones are bounds:

2−1/k(A + Bk)

Bounds and asymptotic approximations in this family, described in the next section, and a few others are summarized in Table 1. Some of these are the basis for later sections on interpolation between bounds.

Tight upper and lower bounds

A graphical characterization of this family is most informative. Given some values of k and corresponding numerical ν(k), we can find the lines A + Bk = ν(k)21/k in AB space, and plot them—see Fig 3. Regions full of lines are not bounds, and regions without lines are where bounds are found (including some of Berg and Pedersen’s bounds); we’re interested in the boundaries between these regions, where tight bounds are to be found.

Fig 3. Parameter space.

The AB parameter space is shaded with dash-dot lines where ν(k)21/k = A + Bk, for a set of very small to very large k values in geometric progression (using numerically computed ν(k) values). Key values of A and B are indicated. Points outside (or on the edge) of the shaded region represent bounds, while points inside the shaded region represent functions that cross the median function. There is an obvious uniquely tight upper bound νU, and a curved locus of tight lower bounds from νL0, which is tightest near k = 0, to νL, which is tightest for high k. One point (pentagram) on the curved locus represents a lower bound νL1 that is tight at k = 1, for which A + B = 2 log 2 (which is the equation of the dashed line). The point ν1 represents a good asymptotic approximation close to νL0, but not a bound; see the next figure. The dotted line from νU to νL at B = 1 intersects the lines for all k in monotonic order, with A decreasing while k increases.

https://doi.org/10.1371/journal.pone.0251626.g003

The improved asymptote ν1 is great at low k, but is neither an upper nor a lower bound, as shown in Fig 4. We can modify it to approach k − 1/3 at high k by a few adjustments, via this asymptotic approximation that we get from a symbolic calculus system:

21/k(k − 1/3) = k − 1/3 + log 2 + ((log 2)2/2 − (log 2)/3)/k + O(1/k2)

Fig 4. Parameter space detail.

Zooming in to the parameter neighborhood of ν1 and νL0, note that the point with B = e−γπ2/12, which we got from the Taylor series of the power of the gamma function, is actually inside the shaded area, so does not represent a bound; but a point at slightly lower B = 0.45965 is on the edge, so represents a lower bound. These points give zero error at approximately k = 0.1003 and k = 0.0708, respectively (see their errors, or margins, in Fig 5). We do not have analytic formulations for these numeric and graphical observations.

https://doi.org/10.1371/journal.pone.0251626.g004

Thus we find this approximation for high k, which appears to be a lower bound as illustrated in Fig 3:

νL(k) = 2−1/k(k + log 2 − 1/3)

A compromise approximation for low k mixes these two, differing from the high-k approximation in only the A coefficient, leaving a result consistent to the same order as Berg and Pedersen’s asymptote at low k, and forming an upper bound as illustrated in Fig 3:

νU(k) = 2−1/k(k + e−γ)

This mixed approximation has absolute and relative errors approaching zero at low k, and relative error approaching zero at high k; but the absolute error remains high, near e−γ + 1/3 − log 2 ≈ 0.20, at high k. These approximations and their errors are illustrated in Fig 5.
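
A minimal sketch (ours) evaluating the two closed-form expressions just given and their margins against the numerically computed median:

```python
# Sketch: the closed-form expressions nu_L(k) = 2^(-1/k)(k + log2 - 1/3) and
# nu_U(k) = 2^(-1/k)(k + exp(-gamma)), with margins against the numeric median.
import numpy as np
from scipy.special import gammaincinv

def nu_L(k):
    return 2.0**(-1.0/k) * (k + np.log(2.0) - 1.0/3.0)

def nu_U(k):
    return 2.0**(-1.0/k) * (k + np.exp(-np.euler_gamma))

k = np.geomspace(0.05, 100.0, 9)
nu = gammaincinv(k, 0.5)
assert np.all(nu_L(k) < nu) and np.all(nu < nu_U(k))
for ki, ni in zip(k, nu):
    print(f"k={ki:8.3f}  lower margin={ni - nu_L(ki):.3e}  upper margin={nu_U(ki) - ni:.3e}")
```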

Fig 5. New bounds and their margins, premultiplied by 21/k.

The lower bounds νΓ, νL0, νL1, and νL (solid), and upper bound νU (dashed) are shown in red over the ideal median (black heavy dots), with their absolute errors in blue, all premultiplied by 21/k to reduce the required plot range. The approximation ν1, which is not a bound, is also shown; note that its error curve changes from solid to dashed at the cusp, while the margins for νL0 and νL1 have cusps (on the log scale) where the margins graze zero but do not change sign. The k parameters at these cusps correspond to the sloped black lines indicated in the previous two figures.

https://doi.org/10.1371/journal.pone.0251626.g005

These observations are consistent with what Fig 3 suggests: that A ≥ e−γ and B ≥ 1 is a necessary and sufficient condition for 2−1/k(A + Bk) to be an upper bound to the median, with equality in both giving the tightest upper bound. And for lower bounds, the condition A ≤ e−γ and B ≤ 1 is necessary, but not sufficient.

To support the graphical/numerical observation that νL(k) and νU(k) are lower and upper bounds, respectively, of the true median (νL(k)<ν(k)<νU(k)), we examine their asymptotic behaviors in more detail. At low k, it is easy to see, using ν(k) → 2−1/kΓ(k + 1)1/k and Γ(k + 1)1/k = e−γ(1 + π2k/12 + O(k2)), that these differences are positive, for k → 0:

νU(k) − ν(k) ≈ 2−1/k(1 − e−γπ2/12)k > 0

ν(k) − νL(k) ≈ 2−1/k(e−γ + 1/3 − log 2) > 0

At high k, a symbolic calculus system gives us for νL(k): νL(k) = k − 1/3 + ((log 2)/3 − (log 2)2/2)/k + O(1/k2), which we can use to construct comparisons to the Laurent series terms for ν(k) [2, 3]. Again we find positive differences, for k → + ∞:

νU(k) − ν(k) → e−γ + 1/3 − log 2 ≈ 0.2016 > 0

ν(k) − νL(k) ≈ (8/405 + (log 2)2/2 − (log 2)/3)/k ≈ 0.0289/k > 0

In addition to these high-k and low-k asymptotic results, we can show the inequalities also hold at k = 1 where ν(1) = log 2, but otherwise we’re relying on the graphical and numerical results. But though there is considerable margin in the asymptotes, and the median is well behaved (unique, monotonic, and smooth, with positive second derivative for all k > 0 [3, 5]), we do not have a proof that these are bounds. But if they are bounds, they are tight, in the sense that the positivity constraints would not all hold, since higher-order terms would not cancel, if AL or BL were any higher, or if AU or BU were any lower.

For the lower bound νL1(k) that is tight at k = 1, both the value and the slope need to match the true median. The value is the median of the exponential distribution, ν(1) = log 2. The slope ν′(k) is somewhat more troublesome to work out, but is tractable at the special point k = 1, where the CDF P(k, x) (the lower incomplete gamma function) and PDF p(k, x) are both exponential functions.

At the point where P(k, x) = 1/2, where x = ν(k), the slope follows by implicit differentiation:

dν/dk = −(∂P(k, x)/∂k) / (∂P(k, x)/∂x)

The derivative with respect to x is easy, since ∂P(k, x)/∂x = p(k, x) = xk−1e−x/Γ(k); we only have a closed-form relation between x and k at k = 1, where we know x = ν(1) = log 2, so the derivative is e−log 2 = 1/2 there. The derivative with respect to k is messier:

∂P(k, x)/∂k = (1/Γ(k)) ∫0x (log t) tk−1e−t dt − ψ(0)(k) P(k, x)

At k = 1, using Γ(1) = 1 and dΓ(k)/dk|k=1 = −γ, this derivative evaluates to ∂P(k, x)/∂k = Ei(−log 2) − γ/2 − (log log 2)/2 ≈ −0.48402, where Ei(−log 2) = −0.3786710 is the exponential integral (integration and evaluation assisted by Wolfram Alpha). Putting these results together we get the slope of the median: ν′(1) = γ + log log 2 − 2 Ei(−log 2) ≈ 0.9680448, which is a mathematical expression with a definite value, but is not a closed form due to the exponential integral, so still requires a numerical approach to evaluate it. Therefore, we describe this bound with approximate numerical parameters instead of closed-form analytic expressions.
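
As a sanity check (a sketch, ours rather than the paper's), the slope expression can be evaluated with SciPy's exponential integral expi and compared to a finite difference of the numerical median:

```python
# Sketch: the slope of the median at k = 1, two ways --
# the expression gamma + log(log 2) - 2*Ei(-log 2) and a central difference.
import numpy as np
from scipy.special import expi, gammaincinv   # expi(x) is the exponential integral Ei(x)

ln2 = np.log(2.0)
slope_expr = np.euler_gamma + np.log(ln2) - 2.0 * expi(-ln2)

h = 1e-5
slope_fd = (gammaincinv(1.0 + h, 0.5) - gammaincinv(1.0 - h, 0.5)) / (2.0 * h)

print(f"expression:        {slope_expr:.7f}")   # about 0.9680448
print(f"finite difference: {slope_fd:.7f}")
```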

Unlike the straight-line lower bound ν(k) < log 2 + (k − 1)ν′(1), which easily follows by convexity of ν(k), this new function νL1(k) that is tangent at the same point is not proved to be a bound.

The conjectured tight lower bound νL0(k) is in worse shape, as we have to search for the k value that gives the lowest B value with A = e−γ. So its B parameter has no concise mathematical expression, but can be computed to high precision: B ≈ 0.4596507.

The conjectured bounds and asymptotic approximations discussed here are summarized in Table 1. In subsequent sections we focus primarily on the new upper and lower bounds with closed-form coefficients, νU(k) and νL(k), as a basis for even tighter closed-form bounds. Fig 6 shows the percentile values achieved by these bounds and approximations, compared to the ideal 50% that defines the median: νL(k) always comes in between 48% and 50% and νU(k) always between 50% and 55% (percentiles are calculated in Matlab as 100*gammainc(x, k), using the normalized lower incomplete gamma function that is the CDF for our PDF).
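
The same percentile computation can be done in Python/SciPy (a sketch; note that SciPy's gammainc(k, x) takes the shape parameter first, the reverse of the Matlab argument order quoted above):

```python
# Sketch: percentiles achieved by the conjectured bounds, as in Fig 6.
# 100 * gammainc(k, x) is the percentile of x for the gamma(k) distribution.
import numpy as np
from scipy.special import gammainc

def nu_L(k): return 2.0**(-1.0/k) * (k + np.log(2.0) - 1.0/3.0)
def nu_U(k): return 2.0**(-1.0/k) * (k + np.exp(-np.euler_gamma))

k = np.geomspace(0.01, 1000.0, 601)
pct_L = 100.0 * gammainc(k, nu_L(k))
pct_U = 100.0 * gammainc(k, nu_U(k))
print(f"nu_L percentile range: {pct_L.min():.2f} .. {pct_L.max():.2f}")   # about 48 .. 50
print(f"nu_U percentile range: {pct_U.min():.2f} .. {pct_U.max():.2f}")   # about 50 .. 55
```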

Fig 6. Percentiles achieved.

The percentiles achieved by four conjectured median bounds of the form 2−1/k(A + Bk) (solid curves) are plotted, along with the straight-line bounds (dotted), upper and lower bounds from Berg and Pedersen [3] (dash-dot), and a pair of closer bounds formed by interpolation between νU(k) and νL(k) using a one-parameter rational function (dashed). The bounds 2−1/k k, νU(k), νL(k), and the interpolated bounds converge on 50th percentile at both low and high k, while the other eight do not. Both the upper and lower interpolated bounds are close to νU(k) at low k and close to νL(k) at high k; tighter such interpolated bounds, developed in a later section, would crowd the center of the graph.

https://doi.org/10.1371/journal.pone.0251626.g006

The coefficient of k−1 for νL(k) is negative, so at high k the lower bound νL(k) is less than the lower bound k − 1/3, in spite of having asymptotically zero absolute margin. That is, for k > 3.021, it’s a looser lower bound and a worse approximation than k − 1/3, even though it is the tightest lower bound of the form we’re considering. On the other hand, νU(k) is a much tighter upper bound than k is, for all k, with an asymptotic margin near e−γ + 1/3 − log 2 ≈ 0.2016; for k ≥ 1, the recent straight-line bound k − 1 + log 2 [4] is tighter still, with an asymptotic margin log 2 − 2/3 ≈ 0.0265.
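
The crossover value quoted here is easy to reproduce numerically (a sketch using a bracketing root finder):

```python
# Sketch: locate the crossover where nu_L(k) = k - 1/3 (about k = 3.021).
import numpy as np
from scipy.optimize import brentq

def nu_L(k):
    return 2.0**(-1.0/k) * (k + np.log(2.0) - 1.0/3.0)

k_cross = brentq(lambda k: nu_L(k) - (k - 1.0/3.0), 2.0, 4.0)
print(f"nu_L(k) drops below k - 1/3 for k > {k_cross:.4f}")
```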

Formulae for tighter bounds

Letting A and B be functions of k, rather than constants, allows tighter bounding expressions (and potentially exact expressions) for ν(k), but offers too little structure by itself. Allowing only one of them to vary, and tying the other to values used in the tight bounds above, gives a more constrained space of bounds.

With BL = BU = 1, we can express the median exactly as ν(k) = 2−1/k(A(k) + k), for some smooth positive real function A(k) that runs from a limit of AU = e−γ as k → 0 to AL = log 2 − 1/3 as k → + ∞; it is apparently monotonic. Alternatively, using AL0 = AU = e−γ, the formula ν(k) = 2−1/k(e−γ + B(k)k) has a smooth positive but non-monotonic function B(k) that runs between limits e−γπ2/12 and 1, but drops a little below its low-k limit before increasing.

This approach converts the problem of finding tighter bounds to the median to the problem of finding closed-form expressions to bound these more well-behaved functions. Calculating A(k) and B(k) numerically to high precision is easy when the median can be calculated; see Fig 7. For the rest of this paper, we focus on A(k), since it is monotonic and more nearly symmetric on a log k axis, and because it corresponds to interpolation between closed-form bounds.
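
Computing A(k) and B(k) to plot curves like Fig 7 is a one-liner each once the median is available; a sketch (ours):

```python
# Sketch: the ideal A(k) (with B = 1) and ideal B(k) (with A = exp(-gamma)),
# obtained by inverting nu(k) = 2^(-1/k) (A + B k).
import numpy as np
from scipy.special import gammaincinv

k = np.geomspace(0.05, 100.0, 13)
nu = gammaincinv(k, 0.5)
A_of_k = nu * 2.0**(1.0/k) - k                                  # B fixed at 1
B_of_k = (nu * 2.0**(1.0/k) - np.exp(-np.euler_gamma)) / k      # A fixed at exp(-gamma)
for ki, Ai, Bi in zip(k, A_of_k, B_of_k):
    print(f"k={ki:8.3f}  A(k)={Ai:.6f}  B(k)={Bi:.6f}")
```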

Fig 7. Ideal parameters as functions of k.

Functions A(k) and B(k), either of which can solve ν(k) = 2−1/k(A + Bk), with the other constant at the limiting values indicated by the circles, the parameters of νU. Modeling either of these curves can lead to better bounds or approximations.

https://doi.org/10.1371/journal.pone.0251626.g007

Toward a proof

The conjectured inequalities νL(k)<ν(k)<νU(k) are equivalent to log 2 − 1/3 < A(k) < e−γ; that is, that A(k) stays above its high-k limit and below its low-k limit. We know that the asymptotic slopes of A(k) are negative at both ends (and in the middle at k = 1), so it will be sufficient to show that the slope is negative everywhere; or that the function A(k) is convex. It is not quite enough that A(k) comes from monotonic and convex parts 21/k and ν(k). The proof of convexity of ν(k) [5] was complicated, and similar techniques might be needed here.

Numerically, we can see that A(k) is mostly sufficiently confined between other known bounds in different parts of the k range, but bounding it with other bounds could be a hard way to construct a proof. First, we’d need a better low-k upper bound, which we might get using 1 − x < e−x as we used e−x < 1 in finding νΓ(k). And on the lower side we could attempt to prove that the quantile mechanics [12] partial sums are lower bounds that are tighter, at low k, and that the Laurent series partial sum through the k−5 term is a tighter lower bound at high k.

Lacking proof of our main result, we cannot even start to prove the tighter bounds found in coming sections, based on interpolation, but again we have good confidence from asymptotic and numerical results. In some cases, asymptotic analysis yields closed-form expressions for the tightest bounds of the families, while numerical methods support the conjecture that they are bounds.

Interpolators

The function A(k) introduced above can be represented in terms of an interpolation function g(k) that runs monotonically from a low-k limit of 0 to a high-k limit of 1:

A(k) = (1 − g(k)) e−γ + g(k)(log 2 − 1/3)

And g(k) is therefore also the function that interpolates between the bounds, allowing us to write the median in these convenient ways:

ν(k) = 2−1/k(A(k) + k) = (1 − g(k)) νU(k) + g(k) νL(k)

The ideal interpolator can be computed numerically from A(k) or from ν(k):

g(k) = (e−γ − A(k))/(e−γ + 1/3 − log 2) = (νU(k) − ν(k))/(νU(k) − νL(k))
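
A numerical version of the ideal interpolator, following the definition above (a sketch):

```python
# Sketch: the ideal interpolator g(k) = (nu_U(k) - nu(k)) / (nu_U(k) - nu_L(k)),
# which rises from near 0 at low k to near 1 at high k.
import numpy as np
from scipy.special import gammaincinv

def nu_L(k): return 2.0**(-1.0/k) * (k + np.log(2.0) - 1.0/3.0)
def nu_U(k): return 2.0**(-1.0/k) * (k + np.exp(-np.euler_gamma))

def g_ideal(k):
    nu = gammaincinv(k, 0.5)
    return (nu_U(k) - nu) / (nu_U(k) - nu_L(k))

k = np.geomspace(0.05, 100.0, 9)
print(np.round(g_ideal(k), 4))
```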

It can be interesting to bound or otherwise approximate g(k). In approximating the ideal with an interpolator ĝ(k), we achieve absolute and relative error of the median estimate approaching zero at low k if ĝ(k) → 0, and at high k if ĝ(k) → 1. But we might want to do better, matching the asymptotic slopes of the ideal interpolator to match the median to a higher order; or we might want to match the exact known value ν(1) = log 2. So we analyze these properties of the ideal interpolator, and give them names. At low k:

g(k) ≈ [(1 − e−γπ2/12)/(e−γ + 1/3 − log 2)] k ≈ 2.67 k

At high k:

1 − g(k) ≈ P/k, with P = (8/405 + (log 2)2/2 − (log 2)/3)/(e−γ + 1/3 − log 2) ≈ 0.1435

And at k = 1:

g(1) = (e−γ + 1 − 2 log 2)/(e−γ + 1/3 − log 2) ≈ 0.8687

How such improved-approximation goals relate to bounds is not immediately clear. In the next two sections, we construct some examples, with results summarized in Table 2. Most of the interpolated approximations and bounds listed are closed-form analytic expressions, but a few others are numeric and approximate.

Fig 8 shows the ideal interpolator, computed numerically, and compares it to bounding interpolators of the forms ĝ(k) = k/(k + b0) and ĝ(k) = (2/π) arctan(k/b), with parameters b0 and b chosen to yield tight upper and lower bounds. The effects of these interpolator bounds on the median bounds are shown in Fig 9.

Fig 8. Bounding the ideal interpolator function.

The ideal interpolator g(k) (heavy dotted sigmoid) is compared with upper and lower bounds ĝ(k); their margins are also plotted, magnified and displaced, with the same curve styles. The curves with largest absolute margins (dashed), which correspond to the interpolated bounds shown in Fig 6, are for the first-order rational-function interpolator ĝ(k) = k/(k + b0), while the curves with smaller margins (solid) are for arctan interpolators ĝ(k) = (2/π) arctan(k/b). In each case, the one parameter (b0 or b) is chosen to give a tight bound (analytically in closed form in three of the four cases). Lower bounds of g(k) make upper bounds of ν(k), and vice versa.

https://doi.org/10.1371/journal.pone.0251626.g008

Fig 9. Margins of interpolated bounds, premultiplied by 21/k.

The absolute margins of the interpolated bounds are smaller than those of the bounds they started from. Compare Fig 5. The generally smallest margins are for the arctan interpolators, with intermediate margins for the N = 1 rational-function interpolators.

https://doi.org/10.1371/journal.pone.0251626.g009

Rational-function interpolators

Consider rational functions as interpolators, of the form

ĝ(k) = (kN + aN−1kN−1 + ⋯ + a1k)/(kN + bN−1kN−1 + ⋯ + b1k + b0)

which run from 0 at k = 0 to 1 at high k.

For N = 1, the only parameter is b0, so we have a one-parameter family:

ĝ(k) = k/(k + b0)

For N = 2 we have three parameters: ĝ(k) = (k2 + a1k)/(k2 + b1k + b0), and so forth.

We can easily constrain the coefficients to match the properties of the ideal interpolator. At low k, ĝ(k) ≈ (a1/b0)k (with a1 = 1 for N = 1), so matching the ideal low-k slope constrains a1/b0. At high k, 1 − ĝ(k) ≈ (bN−1 − aN−1)/k, so matching the ideal requires bN−1 − aN−1 = P. And at k = 1, ĝ(1) can be set equal to g(1).

For N = 1, the low-k asymptote is tightly approached with b0 = (e−γ + 1/3 − log 2)/(1 − e−γπ2/12) ≈ 0.3747, yielding an upper bound for the median (lower bound for the interpolator gideal). Or the high-k asymptote is tightly approached with b0 = P ≈ 0.1435, yielding a lower bound for the median (upper bound for g(k)). These bounds are illustrated in Fig 8. For b0 between these values, the resulting interpolated function is not a bound, but is exact at one value of k, for example at k = 1 with b0 = 1/g(1) − 1 ≈ 0.1512.
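
A sketch assembling the two N = 1 interpolated bounds; the b0 values are derived here from the asymptotic matching described above (not quoted from the paper), and the achieved percentiles are reported as in Fig 6:

```python
# Sketch: the two N = 1 interpolated bounds.  b0 is derived from the ideal
# interpolator's asymptotes (low-k slope match -> median upper bound;
# high-k match, b0 = P -> median lower bound).
import numpy as np
from scipy.special import gammainc

ln2 = np.log(2.0)
A_U = np.exp(-np.euler_gamma)
A_L = ln2 - 1.0/3.0
span = A_U - A_L                                      # high-k limit of nu_U - nu_L

def nu_U(k): return 2.0**(-1.0/k) * (k + A_U)
def nu_L(k): return 2.0**(-1.0/k) * (k + A_L)

b0_upper = span / (1.0 - A_U * np.pi**2 / 12.0)       # ~0.3747: median upper bound
b0_lower = (8.0/405.0 + ln2**2/2.0 - ln2/3.0) / span  # ~0.1435: median lower bound (P)

def nu_interp(k, b0):
    g_hat = k / (k + b0)
    return (1.0 - g_hat) * nu_U(k) + g_hat * nu_L(k)

k = np.geomspace(0.01, 1000.0, 601)
for b0, name in [(b0_upper, "upper"), (b0_lower, "lower")]:
    pct = 100.0 * gammainc(k, nu_interp(k, b0))
    print(f"{name} bound, b0={b0:.6f}: percentiles {pct.min():.2f} .. {pct.max():.2f}")
```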

For N = 2, there are enough parameters to use any or all of the three constraints, but it’s not immediately clear which sets of constraints can lead to bounds. Certainly using all three constraints does not lead to a bound, but to an interesting approximation. As we did with the A versus B space, we can investigate the locus of parameter solutions for each k, and examine the edges of these locus-filled areas for tight bounds. Reducing the space to 2D by constraining one asymptote or the other allows a graphical approach, but the results are not better than a one-parameter arctan interpolator.

For N ≥ 3, excellent approximations and bounds are possible, but with so many parameters they are not very interesting. For example, we can choose to constrain both asymptotes to a higher-order fit, and to constrain the value at k = 1, minimizing the maximum relative error with the two remaining parameters. It’s an excellent fit in all regions, with maximum relative error of 0.00018, but could be even better in the middle without the constraints:

Arctan interpolators

Getting a tighter bound or better approximation from the rational-function family requires at least a handful of parameters. An alternative approach is to find a one-parameter shape that fits better. We have found that the arctan shape with parameter b (like b0 in the one-parameter rational-function interpolator, corresponding to the k value at the midpoint, where ĝ = 1/2) does a good job:

ĝ(k) = (2/π) arctan(k/b)

As Fig 8 shows, the shapes of the arctan interpolators are imperfect, but are much better than the first-order rational function, fitting better in some regions than in others. Note that both the N = 1 rational function and the arctan interpolators are symmetric about their centers (with log k as the independent variable); they are logistic sigmoid and Gudermannian shapes, respectively. The ideal that they are to bound or approximate, however, is not quite symmetric in log k. So a family of not-quite-symmetric interpolators can perhaps do better.

Several special b parameter values for the arctan interpolator can be derived to match each of the ideal properties mentioned above. We can constrain the approximation to pass through the known value ν(1) = log 2 by choosing b so that ĝ(1) = g(1), but that does not give a bound. Or we can match the true median at high k to within O(k−2) with b = πP/2. That also does not give a bound. Or we can match at low k to within O(2−1/k k2), like ν1 does, by matching the ideal low-k slope, which yields a lower bound to g.

To find an upper bound to g (lower bound to ν), we decrease b until the margin is nonnegative for all k, which is at about b = 0.205282. Finding an analytic formulation for that tight bound is a challenge left to others.
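
The search described here can be reproduced numerically (a sketch; the bracketing values 0.15 and 0.25 and the k sampling grid are our choices, and the result should land near the value reported above):

```python
# Sketch: scan for the arctan parameter b at which the interpolated expression
# stops being a lower bound of the median on a sampled k grid (text: ~0.205282).
import numpy as np
from scipy.special import gammaincinv
from scipy.optimize import brentq

A_U, A_L = np.exp(-np.euler_gamma), np.log(2.0) - 1.0/3.0
def nu_U(k): return 2.0**(-1.0/k) * (k + A_U)
def nu_L(k): return 2.0**(-1.0/k) * (k + A_L)

k = np.geomspace(0.05, 1000.0, 3001)
nu = gammaincinv(k, 0.5)

def min_margin(b):
    g_hat = (2.0 / np.pi) * np.arctan(k / b)
    nu_hat = (1.0 - g_hat) * nu_U(k) + g_hat * nu_L(k)
    return np.min(nu - nu_hat)        # >= 0 means a lower bound on the sampled grid

b_tight = brentq(min_margin, 0.15, 0.25)
print(f"tightest lower-bound parameter on this grid: b = {b_tight:.6f}")
```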

See Fig 10 for the relative errors of various arctan interpolator versions.

Fig 10. Relative errors of arctan interpolations.

Relative errors of arctan-interpolated approximations (dash-dot curves) between the upper (dashed) and lower (solid) bounds. These are among the possibly interesting approximations suggested by the tight-bounds approach. The non-bounding approximations, optimized for different criteria, all have maximum relative errors below 1%.

https://doi.org/10.1371/journal.pone.0251626.g010

Conclusions

Tight upper and lower bounds to the median of the gamma distribution are conjectured, based on numerical and asymptotic analyses. The simplest conjectured lower bound is never below the 48th percentile, and the simplest conjectured upper bound, of the same form 2−1/k(A + Bk), is never above the 55th percentile, over the entire range of k > 0. Using arctan and rational-function interpolators between these closed-form bounds, two better one-parameter families of bounds and approximations to the median of a gamma distribution are proposed.

The one-parameter rational-function family has simple closed-form formulae for tightest upper and lower conjectured bounds, staying below 50.85 and above 49.69 percentile, respectively; higher-order rational functions can provide tighter bounds or better approximations.

The one-parameter arctan family of interpolators is a better fit to the ideal interpolator, and includes a version that is most accurate in the low-k tail and provides a closed-form tight conjectured upper bound, staying below 50.18 percentile. With different b parameter, several approximations in the family, including the closed-form version that is exact at k = 1, stay between 49.97 and 50.03 percentile. We have not found an analytic formula for the parameter that gives the tightest lower bound, which stays above 49.96 percentile, but have shown where to find it, graphically or numerically.

The approach of interpolating between tight bounds opens the way to finding tighter bounds and more accurate approximations, and to finding more such families of bounds and approximations via other interpolator forms.

While numerical and graphical techniques were used in finding these bounds, the ones with closed forms are grounded in asymptotic analysis. Proving that they are in fact bounds remains an unmet challenge.

Acknowledgments

The author gratefully acknowledges helpful comments on previous drafts from Google colleagues Pascal Getreuer, Srinivas Vasudevan, Dan Piponi, and Michael Keselman, and from outside experts D. Andrew Barry, José Antonio Adell, Milan Merkle, Christian Berg, and Henrik L. Pedersen. The anonymous reviewers were also very helpful.

References

  1. Chen J, Rubin H. Bounds for the difference between median and mean of gamma and Poisson distributions. Statistics & Probability Letters. 1986;4(6):281–283.
  2. Choi KP. On the medians of gamma distributions and an equation of Ramanujan. Proceedings of the American Mathematical Society. 1994;121(1):245–251.
  3. Berg C, Pedersen HL. The Chen–Rubin conjecture in a continuous setting. Methods and Applications of Analysis. 2006;13(1):63–88.
  4. Gaunt RE, Merkle M. On bounds for the mode and median of the generalized hyperbolic and related distributions. Journal of Mathematical Analysis and Applications. 2021;493(1):124508.
  5. Berg C, Pedersen HL. Convexity of the median in the gamma distribution. Arkiv för Matematik. 2008;46(1):1–6.
  6. Marsaglia JC. The incomplete gamma function and Ramanujan’s rational approximation to ex. Journal of Statistical Computation and Simulation. 1986;24(2):163–168.
  7. Adell J, Jodrá P. On a Ramanujan equation connected with the median of the gamma distribution. Transactions of the American Mathematical Society. 2008;360(7):3631–3644.
  8. You X. Approximation of the median of the gamma distribution. Journal of Number Theory. 2017;174:487–493.
  9. Chen CP. The median of gamma distribution and a related Ramanujan sequence. The Ramanujan Journal. 2017;44(1):75–88.
  10. Wilson EB, Hilferty MM. The distribution of chi-square. Proceedings of the National Academy of Sciences. 1931;17(12):684–688. pmid:16577411
  11. Barry D, Parlange JY, Li L. Approximation for the exponential integral (Theis well function). Journal of Hydrology. 2000;227(1-4):287–291.
  12. Steinbrecher G, Shaw WT. Quantile mechanics. European Journal of Applied Mathematics. 2008;19(2):87–112.