
Modeling lot-size with time-dependent demand based on stochastic programming and case study of drug supply in Chile

  • Fernando Rojas,

    Roles Investigation, Methodology, Writing – review & editing

    Affiliations School of Nutrition and Dietetics/Center of Micro-Bio Innovation, Universidad de Valparaíso, Valparaíso, Chile, Business School, Universidad Adolfo Ibáñez, Santiago, Chile

  • Víctor Leiva,

    Roles Conceptualization, Data curation, Investigation, Methodology, Project administration, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    victorleivasanchez@gmail.com

    Affiliations School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile, Faculty of Administration, Accounting and Economics, Universidade Federal de Goiás, Goiânia, Brazil

  • Peter Wanke,

    Roles Conceptualization, Writing – review & editing

    Affiliation COPPEAD, Universidade Federal de Rio de Janeiro, Rio de Janeiro, Brazil

  • Camilo Lillo,

    Roles Software

    Affiliation Department of Computer Sciences, Universidad de Playa Ancha, Valparaíso, Chile

  • Jimena Pascual

    Roles Validation, Writing – review & editing

    Affiliation School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile

Abstract

The objective of this paper is to propose a lot-sizing methodology for an inventory system that faces time-dependent random demands and that seeks to minimize total cost as a function of order, purchase, holding and shortage costs. A two-stage stochastic programming framework is derived to optimize lot-sizing decisions over a time horizon. To this end, we simulate a demand time-series by using a generalized autoregressive moving average structure. The modeling includes covariates that are used as predictors of the demand. We describe an algorithm that summarizes the methodology and we discuss its computational framework. A case study with unpublished real-world data is presented to illustrate the potential of this methodology. We report that the accuracy of the demand variance estimator improves when a temporal structure is considered, instead of assuming time-independent demand. The methodology is useful in decisions related to inventory logistics management when the demand shows patterns of temporal dependence.

1 Introduction and bibliographical review

The use of inventory models is important when managing a logistically efficient organization [1–3]. A typical objective for evaluating an inventory system is to minimize the total cost (TC), which is a function of the purchase cost, ordering cost per lot (or setup), inventory holding cost, and shortage cost [4]. The inventory system should establish an economic order quantity (EOQ) or lot size to satisfy demand [5, 6]. The EOQ model often considers a single period of decision and assumes a constant rate of demand per unit time (DPUT). Note that this model can also consider multiple periods with more than one level for the decision stages, but in that case the DPUT rate is frequently non-constant. However, the multi-period EOQ model may lead to an inventory cost greater than that obtained with the single-period EOQ model [4].

[7] proposed a framework with dynamic DPUT to decide the optimal order quantity in multiple periods, which is known as the economic lot-sizing (ELS) model and is frequently studied in operational research. The ELS model works similarly to the EOQ model, but in a dynamic (and deterministic) setting, that is, over a multi-period planning horizon with non-constant DPUT [6].

Both EOQ and ELS models were conceived assuming deterministic frameworks. However, to formulate more realistic models, the use of a stochastic framework for the DPUT, and its implications for inventory planning, is needed [8]. Under stochasticity, the DPUT is considered as a random variable with an associated probability (or statistical) distribution. We refer to this case as the probabilistic (stochastic) ELS inventory model. [9] introduced a probabilistic ELS model using the concept of costs with back-orders. Probabilistic ELS models with and without backlogging were proposed by [10] and [11], respectively. Inventory shortages induced by DPUT uncertainty in ELS models were treated by [12].

The optimization of probabilistic ELS models considers some stochastic elements related to the DPUT [4]. Then, stochastic programming (SP) can be used to solve the optimization problem associated with this probabilistic inventory model [13]. Particularly, two-stage optimization based on SP has been employed in probabilistic ELS and supply chain models; see details in [14] and [15]. More details on two-stage optimization based on SP are provided in Sections 3.1-3.2. [15] considered a two-stage SP and established service level and fill-rate constraints in its second stage, as well as allowed for budget constraints, an aspect often taken into account by organizations. [16, 17], and [18] utilized two-stage SP to solve non-capacitated (with no budget constraints) ELS models. An evolution of different formulations of the ELS problem is summarized in Table 1, which can be complemented with the recent review provided by [19]. All the articles mentioned in this table assume the DPUT to be a time-independent random variable. To the best of our knowledge, there are no papers that consider dependence structures over time for the DPUT in ELS models. We propose a way to find lot-sizes (order quantities) with time-dependent random DPUT based on the work by [15].

Table 1. Evolution of literature on ELS formulations and components considered in the mentioned reference.

https://doi.org/10.1371/journal.pone.0212768.t001

Since DPUTs are often time-dependent random variables [29], to describe the DPUT of the ELS model adequately, the modeling needs to consider the possible temporal dependence related to the multiple periods and levels of the decision stages for the inventory. This dependence can be modeled by autoregressive moving average (ARMA) time-series. ARMA models are highly flexible, easy to estimate and interpret, and their prediction is straightforward [30]. However, standard ARMA models have a linear structure and a normal (or Gaussian) distribution assumption for the model error. Therefore, standard ARMA models are highly restrictive. When non-normality is detected in the data under analysis, transformations for obtaining normality are frequently employed. Nevertheless, data transformation adds a problem for the interpretation of results. In regression models, this problem was solved in a more general framework of statistical modeling by [31] using generalized linear models (GLM). GLM are based on distributions of the exponential family, of which the normal model is a particular case. GLM do not assume a distribution for the model error, but for the response, and they allow for non-linear structures through a link function that relates the predictor to the model mean. [32] proposed a GLM version of ARMA models known as generalized ARMA (GARMA), which considers ARMA components by transforming the data mean through a link function as in GLM. Then, the past data can be used to improve the accuracy of different models in which statistical aspects are taken into account. In GARMA models, the systematic component allows us to formulate a function of the mean (through the link function) by an additive structure of parametric functions, which includes explanatory variables (covariates hereafter) and ARMA components. This gives more flexibility to the model formulation, providing the possibility of employing other types of non-linear associations, in addition to the corresponding ARMA framework. GARMA model parameters can be estimated via the maximum likelihood method, once the underlying distribution has been assumed. Often a normal distribution is considered in the modeling of the DPUT [33], but other distributions might also be assumed; see, for example, [34]. ARMA and GARMA models are often used to predict future values [29, 32]. GARMA models may also be employed to estimate mean values and to find the probability density function conditional on past data, as occurs with the DPUT when temporal dependence and covariates are present. This last aspect is of particular interest in probabilistic inventory models.

As the DPUT can present temporal dependence, there exists a need to propose inventory models involving this dependence [30, 35]. Such a temporal structure may be added into the inventory costs as part of the objective function or into its constraints. Due to the stochastic nature of the serial dependence of the DPUT values to be modeled (in this case the conditional DPUT over time), SP must be employed in the optimization. A method named sample average approximation often replaces the original SP problem with Monte Carlo sampling-based approximations. Then, the objective function is constructed using scenarios that can be estimated with Monte Carlo sampling. This method assumes that the approximating problem is solved exactly [36]. The optimization may be carried out considering different scenarios for the DPUT, allowing us to generate simulated values (possible realizations or observations) of the DPUT with these scenarios. Thus, the simulated values may be clustered with some distance measure [37]. When compared to the standard ELS model, using a two-stage SP approach with time-dependent DPUT can give more flexibility to adjust production or purchase quantities, as well as providing robustness against demand fluctuations [34, 38].

The main objective of this paper is to propose a lot-sizing methodology for an inventory system that faces time-dependent random demands and that seeks to minimize TC as a function of holding and shortage costs. The methodology uses several scenarios based on DPUT data with temporal dependence described by GARMA models. These scenarios consider simulated data which allow the inventory TC to be optimized with a two-stage SP approach. Note that, since the sample average approximation method requires Monte Carlo sampling for generating scenarios, it is necessary to have a random number generator for the GLM and GARMA structures. In the GLM case, there is a method available for this generation. However, for GARMA models, we need to develop a method to generate random numbers that follow this model, which is used only to simulate the data for the experiments. To the best of our knowledge, such a method does not exist in the literature. Thus, a secondary objective of this work is to derive this method.

The paper is organized as follows. Sections 2 and 3 introduce the methodologies which, when combined, allow us to achieve the main objective of the work. Section 2 finishes by presenting a novel generator of GARMA random numbers, thus covering our secondary objective. In Section 4, we provide an algorithm which summarizes the proposed methodology and we describe the computational framework. Then, the numerical results of this article are presented in Section 5. First, a Monte Carlo simulation study is performed to compare the methodology when temporal dependence is present or not in the modeling of the DPUT. Second, a case study with unpublished real-world data related to drug supply in a Chilean hospital is conducted to illustrate the proposed methodology and to show its potential. In Section 6, conclusions on the results obtained in this study, as well as their limitations and future research, are discussed.

2 Statistical methodology

In this section, we introduce the statistical methodology utilized here to represent time-dependent random demands in the probabilistic ELS problem. Specifically, we discuss GLM and GARMA structures and then a novel result on generating random numbers that follow a GARMA model is presented. These results have been implemented computationally in the R software; see details of this software in Subsection 4.2.

2.1 GLM framework

Let Y be a random variable related to the DPUT. In addition, consider that the distribution of Y belongs to the exponential family of statistical distributions, that is, its probability density function is expressed as

f(y; ϑ, φ) = exp{[yϑ − b(ϑ)]/φ + c(y, φ)}, y ∈ R_Y, (1)

where ϑ, φ are canonical and scale parameters, respectively, R_Y is the support of Y, and b, c are specific functions defining a particular member of the exponential family. Note that the mean and variance of Y can be expressed using the first and second derivatives of the function b, as well as the canonical and scale parameters, by E(Y) = b′(ϑ) and Var(Y) = φb″(ϑ), respectively. In GLM, the mean of Y is described by a systematic component associated with the link function g(μ) = η and with the values of r covariates x = (x_0, x_1, …, x_r), with x_0 = 1, where

η = g(μ) = x⊤β = β_0 + β_1 x_1 + ⋯ + β_r x_r, (2)

with β = (β_0, β_1, …, β_r) being the regression coefficients associated with x. Now, let Y_t be the random variable of interest Y indexed over time t, with t = 1, …, n. Furthermore, consider the conditional distribution of Y_t given the past data set

H_t = {x_t, …, x_1, y_{t−1}, …, y_1, μ_{t−1}, …, μ_1}, (3)

which is assumed to belong to the exponential family, so that the conditional probability density function according to Eq (1) is expressed as f(y_t | H_t), with the canonical parameter ϑ_t and values of covariates x_t depending on time t, where the parameter φ is independent of t. We denote the conditional mean and variance of Y_t given H_t by μ_t = E(Y_t|H_t) = b′(ϑ_t) and Var(Y_t|H_t) = φb″(ϑ_t), respectively, for t = 1, …, n.
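As a concrete instance of Eq (1), the normal case (a standard textbook fact, restated here for reference rather than taken from the source) can be written as follows:

% Normal distribution N(mu, sigma^2) as a member of the exponential family:
% take \vartheta = \mu, \varphi = \sigma^2 and b(\vartheta) = \vartheta^2/2, so that
% E(Y) = b'(\vartheta) = \mu and Var(Y) = \varphi\, b''(\vartheta) = \sigma^2.
f(y;\vartheta,\varphi)
  = \exp\!\left\{\frac{y\vartheta - \vartheta^{2}/2}{\varphi}
      - \frac{y^{2}}{2\varphi}
      - \frac{1}{2}\log(2\pi\varphi)\right\},
  \quad y \in \mathbb{R}.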

2.2 GARMA models

The GLM framework, connected to the past data set H_t defined in Eq (3), can be formulated in terms of a GARMA model of orders p and q, denoted by GARMA(p, q), as

g(μ_t) = η_t = x_t⊤β + ∑_{h=1}^{p} ϕ_h [g(y_{t−h}) − x_{t−h}⊤β] + ∑_{j=1}^{q} λ_j [g(y_{t−j}) − η_{t−j}], (4)

where ϕ_h and λ_j correspond to the h-th and j-th components of an ARMA(p, q) model, related to the autoregressive and moving average components, respectively, and β is given as in Eq (2) but now associated with the values of r covariates depending on time, denoted by x_t = (x_0, x_1t, …, x_rt), with x_0 = 1. The link function g(μ_t) = η_t of the GARMA model given in Eq (4) can be, for example, the identity function (to represent a linear association), the inverse function or the logarithmic –log– function (to represent a non-linear association), whereas the corresponding model variance is assumed to be constant over time. In the case of the identity link function, we have that

μ_t = x_t⊤β + ∑_{h=1}^{p} ϕ_h (y_{t−h} − x_{t−h}⊤β) + ∑_{j=1}^{q} λ_j (y_{t−j} − μ_{t−j}). (5)

Now, consider the martingale residuals [32], υ_t = y_t − μ_t, which are uncorrelated and have marginal mean equal to zero. By expressing y_t = μ_t + υ_t and based on Eq (5), we have

y_t − x_t⊤β = ∑_{h=1}^{p} ϕ_h (y_{t−h} − x_{t−h}⊤β) + ∑_{j=1}^{q} λ_j υ_{t−j} + υ_t. (6)

Considering lag operators (L) corresponding to the autoregressive and moving average components given by Φ(L) = 1 − ϕ_1 L − ⋯ − ϕ_p L^p and Λ(L) = 1 + λ_1 L + ⋯ + λ_q L^q, respectively, we may rewrite Eq (6) as y_t − x_t⊤β = Ψ(L)υ_t, where Ψ(L) = Λ(L)/Φ(L) = 1 + ψ_1 L + ψ_2 L² + ⋯, assuming that Φ(L) is invertible. In this context, [32] demonstrated that the corresponding marginal mean and variance, E(Y_t) and Var(Y_t) namely, are defined by

E(Y_t) = x_t⊤β, Var(Y_t) = Ψ^(2)(L) Var(Y_t|H_t), (7)

where Ψ^(2)(L) = 1 + ψ_1² + ψ_2² + ⋯ ≥ 1 for all stationary time-series.

Remark 1 From Eq (7), note that the marginal mean E(Y_t) does not depend on past data. However, since Ψ^(2)(L) ≥ 1, we have Var(Y_t) ≥ Var(Y_t|H_t), that is, the conditional variance given past data is less than or equal to the marginal variance, which does not consider the temporal disposition of the observations. This is fundamental to improve the accuracy of the conditional variance based on past data [32].

Given n observations y_1, …, y_n of Y_t, for t = 1, …, n, the corresponding likelihood function is constructed as the product of the conditional probability density functions of Y_t given the past data H_t. Thus, if θ = (β, ϑ_t, φ, ϕ, λ) is the vector of model parameters to be estimated, the associated log-likelihood function for θ is given by

ℓ(θ) = ∑_{t=1}^{n} log f(y_t | H_t; θ). (8)

To obtain the maximum likelihood estimate of θ, we must take derivatives of Eq (8) with respect to each parameter β, ϑ_t, φ, ϕ and λ. Inference (confidence intervals and hypothesis testing) about θ can be based on the asymptotic normality of the maximum likelihood estimator θ̂.

2.3 Model checking and diagnostic

The quantile residual is often used in GARMA models as a diagnostic tool to detect their adequacy to the data, which is defined as

r_t = Φ⁻¹(F(y_t | H_t; θ̂)), (9)

where F is the cumulative distribution function of Y_t conditional on past data, θ̂ is the maximum likelihood estimate of θ, and Φ⁻¹ is the inverse cumulative distribution function of the standard normal distribution. Note that the quantile residual follows the standard normal distribution. For more details about this residual, see [39].

GARMA models have three elements: (i) the distribution of the response; (ii) the link function for the mean; and (iii) a linear predictor containing a set of model parameters, corresponding to regression coefficients associated with the covariates, as well as autoregressive and moving average coefficients. For a specific data set, the building process of a GARMA model consists of comparing several competing models based on different combinations of these elements. The deviance is used as an indicator to compare these models, which is minus two times the log-likelihood ratio of the reduced model (in our case the GLM) and the full model (in our case the GARMA model). We can employ model selection tools, such as the Akaike (AIC) and Bayesian (BIC) information criteria, to select the best GARMA model. AIC and BIC allow us to compare models by the expressions

AIC = −2ℓ(θ̂) + 2m, BIC = −2ℓ(θ̂) + m log(n),

with ℓ(θ̂) being the log-likelihood function evaluated at θ̂, and n, m being the sample size and the number of model parameters, respectively. AIC and BIC correspond to a log-likelihood function penalized by the number of model parameters, which increases as the model becomes more complex. A smaller AIC or BIC indicates a better model; for more details about the deviance, AIC and BIC, see [40].
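As a minimal sketch of this comparison, and recalling that GLM ≡ GARMA(0, 0), two nested GLM fits can be contrasted in R as follows (the data frame dat with response y and covariate x1 is hypothetical):

# Deviance, AIC and BIC comparison of two candidate models; GLM == GARMA(0, 0).
dat <- data.frame(y = rnorm(100), x1 = runif(100))  # hypothetical data
fit1 <- glm(y ~ 1,  data = dat, family = gaussian(link = "identity"))
fit2 <- glm(y ~ x1, data = dat, family = gaussian(link = "identity"))
anova(fit1, fit2, test = "Chisq")  # deviance comparison: reduced vs. full model
AIC(fit1, fit2)                    # smaller AIC indicates the better model
BIC(fit1, fit2)                    # BIC penalizes the parameter count by log(n)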

2.4 Generator of GARMA random numbers

Algorithm 1 introduces a novel generator of random numbers from a GARMA model. Note that random numbers following a GLM can be obtained by fixing the orders p = 0 and q = 0 in the GARMA model, that is, GLM ≡ GARMA(0, 0) model.

Algorithm 1 Random number generator from a GARMA model

1: Construct the GLM term of the GARMA model by fixing:

 1.1 A number r of covariates and the sample size n.

 1.2 Values for the regression coefficients β associated with the r covariates.

 1.3 The link function g.

2: Build the autoregressive term of the GARMA model by fixing:

 2.1 The dimension p of the autoregressive parameters ϕ, assuming p = 0 if a moving average model is of interest.

 2.2 Values for autoregressive coefficients ϕ.

3: Establish the moving average term of the GARMA model by fixing:

 3.1 The dimension q of the moving average parameters λ, assuming q = 0 if an autoregressive model is of interest.

 3.2 Values for moving average coefficients λ.

4: Assume a distribution for Yt within the exponential family and fix its parameter φ (note that by fixing φ we choose a specific member of the exponential family and then formulate the GARMA model).

5: Obtain values for the covariate matrix X = (x_tk), whose first column is a vector of ones and whose remaining columns x_k = (x_1k, …, x_nk) are randomly generated from the uniform distribution in [0, 1] using the Monte Carlo method.

6: Also using the Monte Carlo method, generate a time-series from a GARMA model with t = 1, …, n following the steps:

 6.1 For t = 1, generate a value of y_t from F(μ_t, φ), where F is the assumed member of the exponential family with parameter φ (as defined in Step 4) and μ_t = g⁻¹(x_t⊤β), with g and β defined in Step 1 and x_t in Eq (3) but numerically calculated from Step 5. Note that this definition of μ_t is similar to the expression given in Eq (5) for p = 0 and q = 0 (with the identity link function).

 6.2 For each t from 2 to n:

  6.2.1 Obtain the autoregressive term using a_t = ∑_{h=1}^{p} ϕ_h [g(y_{t−h}) − x_{t−h}⊤β], omitting terms with t − h < 1.

  6.2.2 Calculate the moving average term employing b_t = ∑_{j=1}^{q} λ_j [g(y_{t−j}) − η_{t−j}], omitting terms with t − j < 1.

  6.2.3 Compute μ_t = g⁻¹(η_t) as the location parameter indexed over time t, with η_t = x_t⊤β + a_t + b_t, using a_t and b_t defined in Steps 6.2.1-6.2.2, respectively, and η_t being analogous to the expression given in Eq (4).

  6.2.4 Generate a value of yt similarly as in Step 6.1, but now μt is computed as in Step 6.2.3.
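A compact R sketch of Algorithm 1 is given below for the identity link and a normal response with standard deviation sigma; these two choices, the function name rgarma, and the uniform covariates are illustration assumptions, not fixed by the algorithm itself.

rgarma <- function(n, beta, phi = numeric(0), lambda = numeric(0), sigma = 1) {
  # Steps 1-4: beta, phi, lambda and the response family (here normal) are
  # fixed by the caller; the identity link is assumed, so g and g^(-1) vanish.
  r <- length(beta) - 1
  p <- length(phi); q <- length(lambda)
  X <- cbind(1, matrix(runif(n * r), nrow = n))    # Step 5: covariate matrix
  eta <- mu <- y <- numeric(n)
  for (t in 1:n) {                                 # Step 6: recursion over t
    at <- 0; bt <- 0
    if (p > 0) for (h in 1:p) if (t > h)           # Step 6.2.1: AR term
      at <- at + phi[h] * (y[t - h] - sum(X[t - h, ] * beta))
    if (q > 0) for (j in 1:q) if (t > j)           # Step 6.2.2: MA term
      bt <- bt + lambda[j] * (y[t - j] - mu[t - j])
    eta[t] <- sum(X[t, ] * beta) + at + bt         # Step 6.2.3: eta_t
    mu[t] <- eta[t]                                # identity link: mu = eta
    y[t] <- rnorm(1, mean = mu[t], sd = sigma)     # Steps 6.1 and 6.2.4
  }
  list(y = y, X = X, mu = mu)
}

For example, set.seed(1); sim <- rgarma(100, beta = c(10, 2), phi = 0.5, lambda = 0.25) generates one GARMA(1, 1) series of length n = 100.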

3 Stochastic programming methodology

In this section, we present the SP methodology to find the ELS in T periods of decision stages assuming a time-dependent random DPUT. This methodology has been implemented in the R software.

3.1 Stochastic programming formulation

Table 2 summarizes the elements of the two-stage probabilistic ELS model. In this table, both Z_t and Q_t are considered as first-stage variables, while the variables I_t and S_t are considered as second-stage variables. In the first stage, we decide whether or not to purchase and how much to purchase, whereas in the second stage, after observing the demand, we obtain inventory and shortage levels. The corresponding SP framework used to minimize the expected TC –E(TC)– of the inventory model can be formulated as [15]

min E(TC) = ∑_{t=1}^{T} (o_t Z_t + u_t Q_t) + ∑_{t=1}^{T} ∑_{ω∈Ω} p_{tω} (h_t I_{tω} + s_t S_{tω}), (10)

subject to

I_{tω} − S_{tω} = I_{t−1,ω} + Q_t − y_{tω}, u_t Q_t ≤ Z_t C_t, Z_t ∈ {0, 1}, Q_t, I_{tω}, S_{tω} ≥ 0, (11)

where Ω is the set of selected possible demand scenarios and ω is a specific scenario with probability p_{tω}, with a fixed number of scenarios in each period of the decision stages. The objective function defined in Eq (10) attains a solution that minimizes E(TC) over all scenarios. This minimization can be carried out through the addition of sharing cuts for feasibility and optimality at the resource function, whenever this function or its constraints contain stochastic coefficients that exhibit inter-stage dependency of ARMA type in a multi-stage problem [41]. The approach defined in Eq (10) can be reformulated using the L-shaped method, according to [42], as a master problem in each period t of the decision variables in two stages by means of

min o_t Z_t + u_t Q_t + γ_2(Z_t, Q_t), (12)

subject to u_t Q_t ≤ Z_t C_t, where the decision variables of the first stage, Z_t and Q_t, are fixed momentarily until the problem of the second stage is solved. Note that

γ_2(Z_t, Q_t) = ∑_{ω∈Ω} p_{tω} min{h_t I_{tω} + s_t S_{tω}}, (13)

defined in Eq (12) represents the objective function of the second stage (considered here as a subproblem), which is a function in each period t of the decision variables of the first stage, subject to

I_{tω} ≥ I_{t−1,ω} + Q_t − y_{tω}, S_{tω} ≥ y_{tω} − I_{t−1,ω} − Q_t, I_{tω}, S_{tω} ≥ 0. (14)

The subproblem defined in Eq (13), with its constraints given in Eq (14), has the dual form

max π_1 (I_{t−1,ω} + Q_t − y_{tω}) + π_2 (y_{tω} − I_{t−1,ω} − Q_t), (15)

subject to π_1 ≤ h_t and π_2 ≤ s_t, with π_1, π_2 ≥ 0, where π_1 and π_2 are the dual variables related to these constraints. Next, we indicate how the cuts are generated and added to solve the problem using the L-shaped method. It is known that the optimal solution of a linear program, if it exists, is attained at a vertex. Let {π_1^v} and {π_2^v} be finite sets of vertices related to π_1 and π_2, respectively. Therefore, the master and dual problems defined in Eqs (12) and (15), respectively, can be solved by γ_2(Z_t, Q_t) = min{γ_2}, subject to

γ_2 ≥ ∑_{ω∈Ω} p_{tω} [π_1^v (I_{t−1,ω} + Q_t − y_{tω}) + π_2^v (y_{tω} − I_{t−1,ω} − Q_t)],

where the constraints are cuts, that is, tangent planes to the objective function at each point (Z_t, Q_t). This is a convex external approximation of the resource function, with γ_2 varying freely. It is possible to solve the mixed integer linear program defined in Eq (12), often written in short as MILP, by using the lpSolveAPI package of the R software.
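Rather than coding the L-shaped cuts, a minimal sketch can solve the small single-period extensive (deterministic-equivalent) form of Eqs (10) and (11) directly with lpSolveAPI; the scenario data y.sc, p.sc and the cost/capacity values below are hypothetical placeholders, not values from the paper.

library(lpSolveAPI)
y.sc <- c(80, 100, 120); p.sc <- c(0.25, 0.50, 0.25)      # hypothetical scenarios
o <- 5; u <- 0.6; h <- 0.05; s <- 1.5; C <- 600; I0 <- 0  # hypothetical inputs
S <- length(y.sc)
lp <- make.lp(0, 2 + 2 * S)                 # columns: Z, Q, I_1..I_S, S_1..S_S
set.objfn(lp, c(o, u, h * p.sc, s * p.sc))  # expected TC, as in Eq (10)
set.type(lp, 1, "binary")                   # Z is the binary purchase decision
add.constraint(lp, c(u, -C), "<=", 0, indices = c(2, 1))  # budget: u*Q <= Z*C
for (w in 1:S)                              # inventory balance per scenario:
  add.constraint(lp, c(-1, 1, -1), "=", I0 - y.sc[w],     # I_w - S_w - Q = I0 - y_w
                 indices = c(2, 2 + w, 2 + S + w))
solve(lp)           # returns 0 on success; minimization is the default sense
get.objective(lp)   # minimized expected TC
get.variables(lp)   # Z*, Q*, then I*_w and S*_w for each scenario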

3.2 Scenarios in two stages

One way of introducing randomness in SP problems is by generating a finite number of scenarios as follows. First, a distribution with estimated or known parameters is assumed, which approximates the true distribution of the DPUT. Second, a large number N of values for the DPUT (for example, N = 10000) is simulated. Third, a number S of scenarios must be established to represent the inherent variability of the DPUT (for example, S = 100). A specification of S is reached by clustering the N values into S nodes, which correspond to the centroids of each cluster and are represented in a diagram with the associated probabilities over the nodes [37]. SP provides good solutions if the assumed distribution for the DPUT is statistically adequate.

Recall that, in the two-stage SP model, each level of a scenario tree is a decision stage. Consequently, a two-stage SP model corresponds to a scenario tree with only two levels in each time period. In the first stage of this model, as mentioned, the decision variables are whether to purchase or not, represented by the binary variable Zt, and which ELS must be purchased, represented by the variable Qt, both of which are decided before knowing the values of DPUT Yt. In the second stage, the decision variables are the inventory level It and the shortage level St, which are generated after knowing Yt. The representation of each scenario ω ∈ Ω is constructed as follows. In each period t, the primary node shows the first decision stage and the secondary nodes display the second decision stage with the values of Yt and their respective probabilities indicated over the nodes [43].

To obtain the values presented in each node at time t, as well as their respective probabilities, we utilize the values fitted from the GLM or GARMA structures as seeds to simulate 1000 values of Y_t. Here, the GLM and GARMA parameters required for the simulation can be estimated by the maximum likelihood method using real-world data. These estimates are considered constant over t. Then, the 1000 simulated values in each period t of the second decision stage are grouped employing non-hierarchical clustering. Non-hierarchical cluster analysis aims to find a grouping of objects, in our case values of simulated demands, such that the separation between groups (clusters) is maximized while the distances within each group, relative to its average or centroid, are minimized. We use the k-means method to assign simulated demands to a number of groups defined by the user (for example, k = 100), which form the scenarios for the SP [44]. We use the Ward method [45] to group such that the Euclidean distances between the simulated elements to be clustered are minimized. For more details of non-hierarchical grouping, see [44]. In our case, the possible values to be considered in each node are the centroids of this non-hierarchical clustering, while the probabilities of each branch correspond to the proportion of grouped elements in each centroid with respect to the total of simulated values. Note that the simulated values from GLM and GARMA structures can be obtained with Algorithm 1 presented in Section 2.
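A minimal sketch of this scenario construction with the stats k-means function, assuming y.sim holds the 1000 simulated demands for one period (here a placeholder sample), is:

set.seed(123)
y.sim <- rnorm(1000, mean = 100, sd = 15)  # placeholder simulated DPUT values
km <- kmeans(y.sim, centers = 100)         # k = 100 scenario clusters
y.sc <- as.vector(km$centers)              # node values: cluster centroids
p.sc <- km$size / length(y.sim)            # node probabilities: cluster shares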

To reach a range of solutions from the SP proposed in Eqs (10) and (11), we apply the approach described above to cases of extreme demand percentiles. Note that this choice is a purely statistical issue, because the 5-th and 95-th percentiles can be used as extreme values of the distribution; in statistics, any value below the 5-th percentile is considered small and any value above the 95-th percentile is considered large. Thus, we intend to demonstrate that a more accurate range of the variables of the first (Q_t) and second (I_t, S_t) decisions, as well as of E(TC), is obtained when comparing results of the SP proposed in Eqs (10) and (11) describing the DPUT with GARMA vs. GLM structures. Specifically, we make this comparison by obtaining the solutions of Eqs (10) and (11), considering the 5-th and 95-th percentiles of the distribution of Y_t (described by GARMA and GLM), for t = 1, 2, 3. We denote the l-th percentile of the distribution of Y_t by y_{tω}^{l} for the scenario ω in period t of the second decision stage. Then, y_{tω}^{5} and y_{tω}^{95} are the 5-th and 95-th percentiles, respectively, each in its respective scenario ω of the second decision stage in period t. Given that the distribution parameters are known, we can simulate values of these percentiles in each period t, clustering the data to obtain their value and probability in each node of the second stage; see details of this procedure in Steps 3 and 4 of Algorithm 2, and of the generation of SP scenarios in [37].

3.3 Evaluation of SP by using out-of-sample scenarios

To evaluate the purchase plan of the ELS obtained from SP models based on the generation of scenarios, it is possible to apply the optimization to a realistic rolling-horizon framework using out-of-sample scenarios. To generate these scenarios, we use the same distribution and parameters considered in the scenario representation mentioned in Section 3.2. Note that the objective function of the SP does not provide the true cost to be incurred when implementing the solution in a real environment, with out-of-sample data within a rolling-horizon framework. Then, we can contrast the value of a solution in stochastic scenarios with respect to a solution obtained in a deterministic scenario, comparing the percentage increase of the TC we pay for ignoring uncertainty [46]. This cost is computed as (DC − SC)/SC, where DC and SC are the TCs accumulated over each simulation run for the deterministic and stochastic models, respectively. Repeating this comparative experiment at J instants, we obtain the average percentage cost increase of the deterministic over the stochastic solutions (Δ) and its mean absolute deviation (MAD) as

Δ = (1/J) ∑_{j=1}^{J} (DC_j − SC_j)/SC_j, MAD = (1/J) ∑_{j=1}^{J} |(DC_j − SC_j)/SC_j − Δ|.
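Given vectors DC and SC of accumulated total costs over J runs (hypothetical names and placeholder values), these indicators reduce to a few lines of R:

DC <- c(105, 110, 98); SC <- c(100, 104, 95)  # hypothetical accumulated TCs
inc <- (DC - SC) / SC          # percentage TC increase paid for ignoring uncertainty
Delta <- mean(inc)             # average percentage cost increase over J instants
MAD <- mean(abs(inc - Delta))  # mean absolute deviation of the increases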

4 Summary of methodological and computational aspects

In this section, we condense, in an algorithm, the proposed methodology constructed from Sections 2 and 3. Then, the computational framework used to implement this methodology is described.

4.1 Summary of the optimization methodology

To obtain the observed values of Y_t, their probabilities p_{tω}, and E(TC), we summarize in Algorithm 2 the methodology used to get the inventory TC over T periods of the decision stages.

Algorithm 2 Summary of the methodology to optimize TC over T periods of decision stages

1: Collect data y1, …, yn of the response Y and data x1j, …, xnj of the covariate Xj, with j = 1, …, r.

2: Perform an exploratory data analysis according to the following steps:

  2.1 Compute descriptive statistics for y1, …, yn to identify the type of data distribution. If evidence of symmetry exists, use a normal distribution. Else, employ an asymmetric distribution such as the gamma model. In both cases, continue with Step 2.2.

 2.2 Detect autocorrelation for y1, …, yn using the autocorrelation (ACF) and partial autocorrelation (PACF) functions. If evidence of autocorrelation exists, utilize an ARMA model of adequate orders p, q and continue with Step 2.3. Else, assume other models.

 2.3 Construct scatter-plots between Y and each covariate Xj to obtain the kind of association to be considered in the modeling. If evidence of correlation between Y and some covariate Xj exists, employ regression models and continue with step 3. Else, do not consider the covariate(s).

3: Formulate GARMA models and estimate their parameters according to the following steps:

 3.1 Consider a suitable link function for the GARMA model based on step 2.3.

 3.2 Estimate the GARMA parameters (θ) using the maximum likelihood method.

 3.3 Select the best GARMA model by the smallest value of an information criterion (AIC or BIC) and the deviance, producing the fitted model with its corresponding fitted values ŷ_t.

 3.4 Obtain the l × 100-th percentile, y^{l×100} namely, of the distribution of Y using the estimated parameters θ̂ calculated in Step 3.2.

4: Generate cluster scenarios carrying out a “for cycle” from t = 1 to t = T periods of the decision stages following the steps:

 4.1 Fix l = l_1 at a small value (for example, l_1 × 100 = 5) and simulate n values from the distribution of Y using the Monte Carlo method, with mean y^{l_1×100} and estimated parameters θ̂.

 4.2 Conduct a cluster analysis on these simulated values, with 100 scenarios formed by the clusters, and obtain y_{tω}^{l_1} and p_{tω}, for ω = 1, …, 100, where y_{tω}^{l_1} is the ω-th cluster centroid (scenario) and p_{tω} is the probability of occurrence of this scenario.

 4.3 Fix l = l_2 at a large value (for example, l_2 × 100 = 95) and simulate n values from the distribution of Y using once again the Monte Carlo method, with mean y^{l_2×100} and estimated parameters θ̂.

 4.4 Conduct a cluster analysis on these simulated values, with 100 scenarios formed by the clusters, and obtain y_{tω}^{l_2} and p_{tω}, for ω = 1, …, 100, where y_{tω}^{l_2} is the ω-th cluster centroid (scenario) and p_{tω} is the probability of occurrence of this scenario.

5: Set values for the components u_t, o_t, h_t and s_t of the inventory model given in Eq (10) and the components C_t and I_0 of the constraints given in Eq (11).

6: Optimize the model given in Eq (10) to obtain Z_t, Q_t, I_{tω} and S_{tω}, denoted by Z_t*, Q_t*, I_{tω}* and S_{tω}*, respectively.

7: Establish the optimum inventory TC as E(TC)* = ∑_{t=1}^{T} (o_t Z_t* + u_t Q_t*) + ∑_{t=1}^{T} ∑_{ω∈Ω} p_{tω} (h_t I_{tω}* + s_t S_{tω}*).

4.2 Computational framework

We implement the methodology in the R software, a non-commercial and open-source environment for statistics and graphics that can be obtained from www.r-project.org. The R software is currently very popular in the international scientific community; for an application of R in inventory models, see [47]. Some R packages related to statistical distributions that may be useful in inventory models are available at CRAN.R-project.org; see, for example, [40] and [48]. Specifically, we utilize the base package for descriptive statistics and the gamlss.util package to perform the statistical analysis of DPUT values and the time-series analysis for GARMA models. We use the RcmdrMisc package for cluster analysis in the setting of inventory models, and the lpSolveAPI package to solve the linear programming with the L-shaped method based on Eq (10), which can also be solved by using several commercial software packages, such as LINGO and CPLEX. Note that, by using 100 scenarios of DPUT in three periods of the second decision stage, the processing time is 30 seconds on a computer with the features detailed in Table 3.
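For reference, the packages mentioned above can be installed and loaded as follows (all available on CRAN at the time of writing):

install.packages(c("gamlss.util", "RcmdrMisc", "lpSolveAPI"))
library(gamlss.util)  # garmaFit: estimation of GARMA models
library(RcmdrMisc)    # helpers used here for cluster analysis
library(lpSolveAPI)   # MILP solver for the SP formulation of Eq (10)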

Table 3. Characteristics of the computer used in the simulations.

https://doi.org/10.1371/journal.pone.0212768.t003

5 Numerical results

In this section, we first conduct a Monte Carlo simulation study, which allows us to compare the performance of ELS and its inventory TC for different methods when describing DPUT based on GARMA and GLM structures. Second, real-world DPUT data from a drug supply case study in a Chilean public hospital are analyzed with the proposed methodology.

5.1 Simulation study

To carry out the aforementioned simulation study: (i) we generate two samples using Algorithm 1, one of time-dependent DPUT and one of time-independent DPUT; (ii) we obtain a range of SP solutions using extreme demand percentiles (5-th and 95-th) for both the decision variables and the TCs; and (iii) we evaluate the inventory TC for the ELS model by Algorithm 2. We obtain the TC over T = 3 periods of the decision stages and n = 100 by employing GARMA and GLM structures for simulated data with temporal dependence, based on normal and gamma distributions, different link functions and diverse values for standard deviations (SDs). We describe the time-dependent DPUT mean considering the GARMA(1, 1) structure given by

g(μ_t) = β_0 + β_1 x_t + ϕ [g(y_{t−1}) − β_0 − β_1 x_{t−1}] + θ [g(y_{t−1}) − η_{t−1}]. (16)

In addition, as a comparison for time-independent DPUT, we use the GLM defined as

g(μ_t) = β_0 + β_1 x_t. (17)

In both models, we employ the link functions g ∈ {identity, log}. As mentioned in Algorithm 1, we assume that the values x_t of the covariate X_t are obtained from a uniform distribution in the interval [0, 1]. The values assumed for ϕ and θ in Eq (16) are ϕ = 0.5 and θ = 0.25. The number of Monte Carlo replications is 10000. In each of these replications: (i) we obtain the observations y = (y_1, …, y_100) for the normal and gamma distributions with temporal dependence by using Algorithm 1; and (ii) we fit GARMA and GLM structures by estimating their parameters and obtaining fitted values of Eqs (16) and (17), respectively. These fitted values are computed for different fixed values of the corresponding distribution SD, σ namely, and regression coefficients β_0, β_1. To compare different distributions, we set diverse configurations for the GARMA and GLM structures; see Table 4.
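One replication of this design can be sketched by reusing the rgarma function from Section 2.4; the values β_0 = 10, β_1 = 2 and σ = 1 below are placeholders, since the actual configurations are those of Table 4.

set.seed(2019)
# GARMA(1, 1) sample of Eq (16) with identity link: phi = 0.5, theta = 0.25.
sim.dep <- rgarma(n = 100, beta = c(10, 2), phi = 0.5, lambda = 0.25, sigma = 1)
# GLM sample of Eq (17), i.e., GARMA(0, 0): time-independent DPUT.
sim.ind <- rgarma(n = 100, beta = c(10, 2), sigma = 1)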

Table 4. Distributions and true parameters used in the simulation study.

https://doi.org/10.1371/journal.pone.0212768.t004

We obtain the 5-th and 95-th percentiles of the DPUT distribution for each period of the decision stages under analysis by using the quantile functions implemented in the gamlss package, with the qGA and qNO commands for the gamma and normal distributions, respectively. For each of the replications, we consider three periods, using three consecutive fitted values as the means of the distributions in each period. The SD is assumed to be constant in each period of the decision stages and is estimated by the GARMA and GLM structures. Subsequently, by applying Algorithm 2 (Steps 4 to 7): (i) we generate scenarios using simulation with a sample size of 1000 in each node; and (ii) we obtain the ELS with its TC over all periods of the decision stages. Fixed values for the parameters in all periods of the decision stages are reported in Table 5.
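For instance, with hypothetical fitted values mu.t and sigma.t for one period, the extreme percentiles are obtained as:

library(gamlss.dist)  # provides qNO and qGA (loaded with gamlss)
mu.t <- 100; sigma.t <- 15                      # hypothetical fitted values
qNO(c(0.05, 0.95), mu = mu.t, sigma = sigma.t)  # normal DPUT percentiles
qGA(c(0.05, 0.95), mu = mu.t, sigma = sigma.t)  # gamma DPUT percentiles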

Table 5. Inventory model parameters for t = 1, 2, 3 periods of the decision stages.

https://doi.org/10.1371/journal.pone.0212768.t005

Through this simulation study, as mentioned, we evaluate min{E(TC)} for two DPUT distributions and two link functions. We are interested in describing the empirical distribution of min{E(TC)} (for reasons of space, we omit the analysis of the decision variables of the first and second stages). The results on the statistical properties of the maximum likelihood estimators of the model parameters are not provided here, but these properties can be found in the simulation study presented in [32]. Through this study, we intend to corroborate the differences between the results of min{E(TC)} when describing the DPUT with GLM and GARMA structures. On the one hand, we make explicit that the variability of these results, reflected in the range of values of the obtained solutions given the extreme demand percentiles under consideration, should always be smaller in the GARMA model than in the GLM, which would provide more accurate results with our proposal. On the other hand, we compare the medians of min{E(TC)} using the Friedman test, with both models for the DPUT and link functions being considered. Table 6 reports the mean, SD, coefficients of skewness (CS) and kurtosis (CK), median, and Friedman p-value of min{E(TC)} for the indicated distribution and link function. Note that, when the parameter σ (related to the SD of the DPUT distribution) increases, the range of min{E(TC)} obtained from the 5-th and 95-th percentiles is smaller in the GARMA model than in the GLM, as indicated in Remark 1. Observe that min{E(TC)} under time-independent DPUT is greater than in the case of time-dependent random DPUT, which is explained by higher inventory levels to cover a higher DPUT uncertainty [49]. Then, the conditional DPUT variance obtained from GARMA models under a normal distribution and identity link is smaller than the marginal DPUT variance generated from the GLM, which leads to smaller values of min{E(TC)}. (Similar results are obtained for models with a logarithmic link function.) Notice that min{E(TC)} for the case of the gamma distribution presents the largest SD. With respect to the SD, note that, if the DPUT presents temporal dependence, then the SD of min{E(TC)} under the GARMA model is smaller than under the GLM, for any distribution and link function considered. However, when a non-identity link function is used, these values are similar. In this simulation study, we find that not only is the DPUT variance estimate smaller when employing GARMA models under temporal dependence, but also the estimator of min{E(TC)} defined in Eq (10) has a smaller variance. Therefore, the best accuracy is obtained when we fit a GARMA model using the temporal dependence of the DPUT instead of inventory models obtained under time-independent DPUT.

Table 6. Empirical mean, SD, CS, CK, median, and Friedman p-value of min{E(TC)} for the indicated distribution, link, σ and percentile.

https://doi.org/10.1371/journal.pone.0212768.t006

Following [46], to evaluate the performance of our SP model, we compare the TCs obtained in 10 simulated instances of the indicated distribution, link function, σ and percentiles with respect to a deterministic solution of out-of-sample scenarios generated from the same parameters and distributions. The results for the average increase of TCs (Δ and MAD) are reported in Table 7. Note that, in all cases, the performance of the SP models in terms of Δ and MAD, when modeling the DPUT via GLM or GARMA structures, is better than when considering a deterministic scenario, which does not consider uncertainty. In all tested instances, savings are obtained in TC, since Δ = MAD. In general, the performance decreases as σ increases, independently of the distribution and link function considered. For the 95-th percentile of DPUT, the performance in terms of Δ and MAD is slightly better than when considering the 5-th percentile, while the results are similar when comparing the link functions. This performance is better under the normal distribution than under the gamma distribution, and with the GARMA model, independently of the link function considered.

Table 7. Evaluation of out-of-sample scenarios for the indicated distribution, link, σ and percentile with simulated data and the mentioned performance indicator.

https://doi.org/10.1371/journal.pone.0212768.t007

5.2 Empirical illustration

First, we describe the problem and data set. Then, we provide an exploratory data analysis, verify the assumptions of the DPUT distribution and estimate the inventory model parameters.

The drug supply in pharmacy units of Chilean primary healthcare centers is channeled through their central warehouse, which is provisioned by an external supplier. The warehouse acts as an intermediary between suppliers and output units, and these output units, including the center's own pharmacy, which dispenses prescriptions to patients, receive the demand for drugs. The central warehouse needs to store, conserve and distribute such drugs, delivering the products monthly to all output units by using the aggregated demand requirements of each of them in the same periods of the decision stages.

To validate the proposed methodology, we use real-world monthly demand (Y) data of a pharmaceutical product correlated with the known demand of another pharmaceutical product (X). Statistical correlation between the demands of two such products is often detected because, frequently, more than one drug is used to treat the same disease. Thus, in the GARMA model, the DPUT of one product can serve as a covariate (predictor) of the DPUT of the other product. These products are shipped from the warehouse and delivered to a healthcare center located in the city of Concon, Chile. This case study is based on unpublished real-world data collected during 48 months, from 1 January 2012 to 31 December 2015, for evaluating supply policies. The data set is displayed in Table 8.

Table 8. Data set of monthly DPUTs of two pharmaceutical products.

https://doi.org/10.1371/journal.pone.0212768.t008

We perform an exploratory data analysis based on Table 9 and Fig 1. Table 9 reports the sample values of the DPUT corresponding to: the mean (ȳ), minimum (y_(1)), maximum (y_(48)), interquartile range (IQR), 1st quartile (y_(12)), median (y_(24)), 3rd quartile (y_(36)), SD, CS, CK and coefficient of variation (CV). Fig 1 shows the histogram, standard box plot and box plot adjusted for asymmetric data [50] of the DPUT, the index-plot of monthly DPUT, the scatter-plot between the DPUT and the DPUT of the correlated product, and the ACF/PACF plots of the DPUT data. We confirm the stationary nature of the time series with p = 2 using the augmented Dickey-Fuller test (results omitted here). From Table 9 and Fig 1(A) and 1(B), note that the median and mean are similar, and the CS and CK are close to zero and three, respectively, indicating normality in the DPUT data, which is supported by the histogram and box plots. Fig 1(C) shows a decreasing trend in the DPUT for the studied period. From the scatter-plot presented in Fig 1(D), it is not easy to identify a suitable link function. Then, we use the logarithmic and identity link functions and compare them. Fig 1(E) and 1(F) provide evidence of seasonality, which can be modeled by an autoregressive component, whereas a moving average component is not identified by these plots. In summary, from this exploratory data analysis, we assume a GARMA(p, q) model, with p ∈ {0, 1, 2} and q ∈ {0, 1}, using a normal distribution for the DPUT and considering a covariate with different link functions for the mean response. To select the best GARMA model, we employ the AIC, BIC and deviance, which are reported in Table 10. Note that, as mentioned in Subsection 2.4, the GARMA(0, 0) model ≡ GLM.
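A sketch of these exploratory checks in R follows, with y holding the 48 monthly DPUT observations (a placeholder sample stands in for the Table 8 data, and the tseries package for the ADF test is our assumption, as the text does not name one):

y <- rnorm(48, mean = 100, sd = 15)  # placeholder for the Table 8 DPUTs
library(robustbase)  # adjbox: box plot adjusted for asymmetric data [50]
adjbox(y)            # adjusted box plot of the DPUT
acf(y); pacf(y)      # ACF/PACF plots, as in Fig 1(E) and 1(F)
library(tseries)     # one option for the augmented Dickey-Fuller test
adf.test(y)          # stationarity check of the DPUT series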

Fig 1. Histogram (A), box plots (B) and index-plot (C) of monthly DPUT; scatter-plot between DPUT and DPUT of the correlated pharmaceutical product (D); ACF (E) and PACF (F) plots of DPUT.

https://doi.org/10.1371/journal.pone.0212768.g001

Table 9. Descriptive statistics of monthly DPUT for the pharmaceutical product.

https://doi.org/10.1371/journal.pone.0212768.t009

Table 10. Criterion and deviance for different GARMA models with DPUT data of the pharmaceutical product.

https://doi.org/10.1371/journal.pone.0212768.t010

From Table 10, note that the smallest AIC, BIC and deviance correspond to the GARMA(2, 0) model, corroborating our conjecture from the exploratory data analysis. Therefore, to model the DPUT, we propose the GARMA model given by

g(μ_j) = β_0 + β_1 x_j + φ_1 [g(y_{j−1}) − β_0 − β_1 x_{j−1}] + φ_2 [g(y_{j−2}) − β_0 − β_1 x_{j−2}], (18)

where g is the selected link function, β_0 and β_1 are the regression coefficients, x_j is the value of the covariate X, and φ_1, φ_2 are the autoregressive coefficients. We fit the GARMA model by using the garmaFit command. The parameters of the model given in Eq (18) are estimated by the maximum likelihood method, with approximate estimated standard errors, and all coefficients are significant at the 10% level. Substituting these estimates into Eq (18) yields the fitted predictive model. To confirm the correct fit of the proposed GARMA(2, 0) model in Eq (18), we use the quantile residual (r_j) defined in Eq (9) for the DPUT data. In addition, we employ a theoretical probability versus empirical probability (PP) plot for this evaluation. Note that the PP plot can be linked to the Kolmogorov-Smirnov (KS) test, by means of which acceptance bands may be constructed within this plot. Fig 2(A) sketches a PP plot with 95% acceptance bands to verify the distributional assumption of the model given in Eq (18). Observe that the KS p-value is 0.9991, which strongly supports the normality assumption of the quantile residuals obtained from the GARMA(2, 0) model. Fig 2(B) displays an index-plot of this residual, from which no unusual features, such as outliers or heterogeneity, are detected. Therefore, from Fig 2, the assumption that the response follows a normal distribution seems quite suitable.
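A sketch of this fit and its diagnostics follows; the argument names of garmaFit reflect our reading of the gamlss.util documentation and should be checked, and dat is a hypothetical data frame standing in for Table 8, with the product DPUT y and the correlated-product DPUT x.

library(gamlss.util)
dat <- data.frame(y = rnorm(48, 100, 15), x = runif(48, 50, 150))  # placeholder
fit <- garmaFit(y ~ x, order = c(2, 0), data = dat, family = NO)   # Eq (18)
r <- resid(fit)      # quantile residuals, assuming resid() exposes them
ks.test(r, "pnorm")  # KS normality test of the residuals, as in Fig 2(A)
plot(r)              # index-plot of the quantile residuals, as in Fig 2(B)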

Fig 2. PP plot with 95% acceptance bands (A) and index-plot of the quantile residual (B) with DPUT data.

https://doi.org/10.1371/journal.pone.0212768.g002

Once the statistical model has been identified and estimated based on the available DPUT data, the two-stage SP must be conducted to obtain the values of two sets of decision variables. The first stage indicates whether the binary variable Z_t takes the value one (a purchase must be carried out in period t) or the value zero and, if a purchase is indicated, the ELS Q_t to be purchased in this period. In the second stage, once the DPUT fitted values are obtained, these values are organized in a scenario representation of two nodes for each period t = 1, 2, 3. To demonstrate how accurate the ELS inventory model is, depending on the GLM or the GARMA(2, 0) model, we generate scenarios of the 5-th and 95-th percentiles of the DPUTs (y_{tω}) and their probabilities (p_{tω}) by using sequential simulation according to [37]. The parameters employed for this simulation are: (i) holding cost in each period (h_t) of 0.0035 USD/(unit × month); (ii) unitary purchase cost in each period (u_t) of 0.6 USD/unit; (iii) shortage cost in each period (s_t) of 0.33 USD/(shortage unit); and (iv) order cost in each period (o_t) of 0.86 USD/order. We compare the outcomes of the SP for the ELS with inventory shortage in two stages, with and without temporal structure, as reported in Table 11. Note that the results for E(TC) and the quantities Q_t to order in each period are smaller and more accurate when the DPUT is described by the GARMA(2, 0) model than by the GLM. This is because the scenario values y_{tω} obtained with the GARMA model are smaller and more accurate than those obtained using the GLM.

Table 11. Effect of scenarios of DPUT percentiles using the indicated model on SP elements in two stages with DPUT data of the pharmaceutical product.

https://doi.org/10.1371/journal.pone.0212768.t011

To evaluate the performance of the SP with respect to deterministic out-of-sample scenarios, we divide our data set into two parts. We use 24 of 36 observations to estimate the parameters of the GARMA and GLM structures. Then, we make predictions for 12 periods of future decision stages, as expressed previously, and solve the SP as indicated in Algorithm 2. The remaining 12 observations of the sample are used to compare the behavior of the stochastic solutions with respect to the deterministic solutions. The results for the average increase of TCs (Δ and MAD) are reported in Table 12. These results are consistent with those found in our simulation study, since they indicate that the performance of the SP obtained by modeling the demand with the GARMA model is better than when the uncertainty of this variable is not considered.

Table 12. Evaluation of out-of-sample scenarios for the case study according to the mentioned performance indicator.

https://doi.org/10.1371/journal.pone.0212768.t012

6 Conclusions, limitations and future research

This research proposed a methodology to solve a probabilistic ELS problem under time-dependent demand. A two-stage SP approach and a GARMA model for the generation of scenarios were considered. Our results showed that scenarios with temporal dependence provide more accurate estimates of lot sizes and smaller amounts of stored and shortage items in the different periods of the decision stages, when compared to time-independent DPUT represented by GLM. The methodology proposed in this work offers an interesting approach to achieve savings in inventory total costs, accounting for the variability of DPUT scenarios linked to time-series. This approach improved the existing results, preventing both unnecessary stock-outs and excessive inventory holding. The approach offers the advantage of considering a more realistic and accurate description of the DPUT. The proposed ELS policy is useful in organizations that have a single supplier to meet their requirements and that are generally characterized by high bureaucracy in their administrative systems. This is the case of public hospitals, where such behavior in pharmaceutical product DPUT is frequent and, therefore, the supply system can be facilitated [51]. GARMA models give the possibility of achieving an efficient characterization of the mean, as well as the adjustment of other parameters, which may be used in probabilistic ELS problems. Although we employed a normal distribution in this paper, the proposed methodology is valid for any distribution that may be parameterized with respect to the conditional mean of a time-series model, considering linear and non-linear link functions for describing the covariates. On the one hand, this parametrization provides a basis for generating useful scenarios in SP. On the other hand, forecasting future values based on GARMA models also gives the possibility of evaluating the quality of the prediction.

Note that the out-of-sample scenarios used in the simulation correspond to values based on the same parameters and statistical distribution under consideration, whereas the out-of-sample scenarios used in the empirical illustration correspond to estimated values based on the real data under analysis. In both the simulation and empirical studies, our SP results confirm an important improvement in total costs. These differences are slightly higher when modeling the demand through a GARMA model than when using a GLM.

Some limitations of the present research are that we do not consider: (i) random lead times or restrictions on the fill-rate or service levels; and (ii) multiple products. When there is access to data related to time-dependent demand for more than a single product, the methodology proposed in this study could be extended to multiple products. These limitations generate opportunities for new approaches that take such aspects into account in two-stage SP. Furthermore, as future research, from the point of view of inventory models, we can employ approaches that study restocking cycles with scenarios of random lead time. Also, the use of chance constraints to consider fill-rate or service levels may be assumed. From the point of view of statistics, it is possible to improve the statistical modeling if distributions different from the normal model are considered. Furthermore, given that for many products the DPUT in each cycle can be zero, a model employing mixture probability distributions for continuous and/or discrete data, called zero-inflated, may be considered [48]. Also, since outliers are known to be harmful to the estimates of statistical model parameters, diagnostic tools can be considered for the approach proposed in the present work. Even multivariate versions of the statistical models may be assumed to improve the accuracy of the proposed approach [38, 52–54].

Acknowledgments

The authors thank the editors and reviewers for their constructive comments on an earlier version of this manuscript which resulted in this improved version. This research was supported partially by fellowship “Becas-Conicyt” (F. Rojas) and grant “Fondecyt 1160868” (V. Leiva), both from the National Commission for Scientific and Technological Research of Chile.

References

1. Kogan K, Tell H. Production smoothing by balancing capacity utilization and advance orders. Journal of Business Logistics. 2009;41:223–231.
2. Leung NHZ, Chen A, Yadav P, Gallien J. The impact of inventory management on stock-outs of essential drugs in Sub-Saharan Africa: secondary analysis of a field experiment in Zambia. PLOS ONE. 2016;11:e0156026. pmid:27227412
3. Jing F, Lan Z. Forecast horizon of multi-item dynamic lot size model with perishable inventory. PLOS ONE. 2017;12:e0187725. pmid:29125856
4. Tersine RJ. Principles of Inventory and Materials Management. New Jersey, US: Prentice-Hall; 1994.
5. Harris FW. How many parts to make at once. Factory, the Magazine of Management. 1913;10:135–136.
6. Hillier F, Lieberman G. Introduction to Operational Research. New York, US: McGraw Hill; 2005.
7. Wagner HM, Whitin TM. Dynamic version of the economic lot size model. Management Science. 1958;5:89–96.
8. Wanke P, Ewbank H, Leiva V, Rojas F. Inventory management for new products with triangularly distributed demand and lead-time. Computers and Operations Research. 2016;69:97–108.
9. Karlin S. Dynamic inventory policy with varying stochastic demands. Management Science. 1960;6:231–258.
10. Evans JR. An efficient implementation of the Wagner-Whitin algorithm for dynamic lot-sizing. Journal of Operations Management. 1985;5:229–235.
11. Pochet Y, Wolsey LA. Lot-size models with backlogging: strong reformulations and cutting planes. Mathematical Programming. 1988;40:317–335.
12. Bookbinder JH, Tan JY. Strategies for the probabilistic lot-sizing problem with service-level constraints. Management Science. 1988;34:1096–1108.
13. Shapiro A, Dentcheva D, Ruszczynski A. Lectures on Stochastic Programming: Modeling and Theory. vol. 16. Philadelphia, US: SIAM; 2014.
14. Mulvey JM, Vanderbei RJ, Zenios SA. Robust optimization of large-scale systems. Operations Research. 1995;43:264–281.
15. Raa B, Aghezzaf H. A robust dynamic planning strategy for lot-sizing problems with stochastic demands. Journal of Intelligent Manufacturing. 2005;16:207–213.
16. Yuli Z, Shiji S, Cheng W, Wenjun Y. Dynamic programming and heuristic for stochastic uncapacitated lot-sizing problems with incremental quantity discount. Mathematical Problems in Engineering. 2012;Article ID 582323:1–21.
17. Zhou Z, Guan Y. Two-stage stochastic lot-sizing problem under cost uncertainty. Annals of Operations Research. 2013;209:207–230.
18. Zhang Y, Shen ZM, Song S. Distributionally robust optimization of two-stage lot-sizing problems. Production and Operations Management. 2016;25:2116–2131.
19. Brahimi N, Absi N, Dauzère-Pérès S, Nordli A. Single-item dynamic lot-sizing problems: an updated survey. European Journal of Operational Research. 2017;263:838–863.
20. Orlicky JA. Material Requirements Planning: the New Way of Life in Production and Inventory Management. New York, US: McGraw-Hill; 1975.
21. Guide VDR, Srivastava R. A review of techniques for buffering against uncertainty with MRP systems. Production Planning and Control. 2000;11:223–233.
22. Tarim SA, Kingsman BG. The stochastic dynamic production/inventory lot-sizing problem with service-level constraints. International Journal of Production Economics. 2004;88:105–119.
23. Li H, Meissner J. Capacitated dynamic lot sizing with capacity acquisition. International Journal of Production Research. 2011;49:4945–4963.
24. Özen U, Doğru MK, Tarim SA. Static-dynamic uncertainty strategy for a single-item stochastic inventory control problem. Omega. 2012;40:348–357.
25. Louly MA, Dolgui A. Optimal MRP parameters for a single item inventory with random replenishment lead time, POQ policy and service level constraint. International Journal of Production Economics. 2013;143:35–40.
26. Baki MF, Chaouch BA, Abdul-Kader W. A heuristic solution procedure for the dynamic lot sizing problem with remanufacturing and product recovery. Computers and Operations Research. 2014;43:225–236.
27. Tunc H, Kilic OA, Tarim SA, Eksioglu B. A reformulation for the stochastic lot sizing problem with service-level constraints. Operations Research Letters. 2014;42:161–165.
28. Tempelmeier H, Hilger T. Linear programming models for a stochastic dynamic capacitated lot sizing problem. Computers and Operations Research. 2015;59:119–125.
29. Calfa BA. Data Analytics Methods for Enterprise-wide Optimization under Uncertainty [Doctoral dissertation]. Carnegie Mellon University, Pittsburgh; 2015.
30. Gilbert K. An ARIMA supply chain model. Management Science. 2005;51:305–310.
31. McCullagh P, Nelder JA. Generalized Linear Models. London, UK: Chapman and Hall; 1983.
32. Benjamin MA, Rigby RA, Stasinopoulos DM. Generalized autoregressive moving average models. Journal of the American Statistical Association. 2003;98:214–223.
33. Box GEP, Jenkins GM, Reinsel GC, Ljung GM. Time Series Analysis: Forecasting and Control. New Jersey, US: Wiley; 2015.
34. Wanke P, Leiva V. Exploring the potential use of the Birnbaum-Saunders distribution in inventory management. Mathematical Problems in Engineering. 2015;Article ID 827246:1–9.
35. Williams W, Peters M, Raiszadeh M. Time-dependent demand in requirements planning: an exploratory assessment of the effects of serially correlated demand sequences on lot-sizing performance. Journal of Operations Management. 1985;6:69–85.
36. Homem-de-Mello T, Bayraksan G. Monte Carlo sampling-based methods for stochastic optimization. Surveys in Operations Research and Management Science. 2014;19:56–85.
37. Gülpinar N, Rustem B, Settergren R. Simulation and optimization approaches to scenario tree generation. Journal of Economic Dynamics and Control. 2004;28:1291–1315.
38. Rojas F, Leiva V. Inventory management in food companies with demand statistically dependent. ARLA: Academia Revista Latinoamericana de Administración. 2016;29:450–485.
39. Dunn P, Smyth G. Randomized quantile residuals. Journal of Computational and Graphical Statistics. 1996;5:236–244.
40. Stasinopoulos D, Rigby R. Generalized additive models for location, scale and shape (GAMLSS). Journal of Statistical Software. 2007;23:1–46.
41. Infanger G, Morton DP. Cut sharing for multistage stochastic linear programs with interstage dependency. Mathematical Programming. 1996;75:241–256.
42. Birge JR, Louveaux F. Introduction to Stochastic Programming. New York, US: Springer; 2011.
43. Dupačová J, Consigli G, Wallace SW. Scenarios for multistage stochastic programs. Annals of Operations Research. 2000;100:25–53.
44. Jain AK. Data clustering: 50 years beyond K-means. Pattern Recognition Letters. 2010;31:651–666.
45. de Amorim RC. Feature relevance in Ward's hierarchical clustering using the Lp norm. Journal of Classification. 2015;32:46–62.
46. Brandimarte P. Multi-item capacitated lot-sizing with demand uncertainty. International Journal of Production Research. 2006;44:2997–3022.
47. Rojas F, Leiva V, Wanke P, Marchant C. Optimization of contribution margins in food services by modeling independent component demand. Revista Colombiana de Estadística. 2015;38:1–30.
48. Leiva V, Santos-Neto M, Cysneiros FJA, Barros M. A methodology for stochastic inventory models based on a zero-adjusted Birnbaum-Saunders distribution. Applied Stochastic Models in Business and Industry. 2016;32:74–89.
49. Yao Y, Evers P, Dresner M. Supply chain integration in vendor-managed inventory. Decision Support Systems. 2007;43:663–674.
50. Rousseeuw PJ, Croux C, Todorov V, Ruckstuhl A, Salibian-Barrera M, Verbeke T, et al. robustbase: basic robust statistics; 2016. Available from: http://CRAN.R-project.org/package=robustbase.
51. Birkett DJ, Mitchell AS, McManus P. A cost-effectiveness approach to drug subsidy and pricing in Australia. Health Affairs. 2001;20:104–114. pmid:11585158
52. Marchant C, Leiva V, Cysneiros FJA. A multivariate log-linear model for Birnbaum-Saunders distributions. IEEE Transactions on Reliability. 2016;65:816–827.
53. Marchant C, Leiva V, Cysneiros FJA, Vivanco JF. Diagnostics in multivariate generalized Birnbaum-Saunders regression models. Journal of Applied Statistics. 2016;43:2829–2849.
54. Marchant C, Leiva V, Cysneiros FJA, Liu S. Robust multivariate control charts based on Birnbaum-Saunders distributions. Journal of Statistical Computation and Simulation. 2018;88:182–202.