
An adaptive random search for short term generation scheduling with network constraints


  • J. A. Marmolejo, 
  • Jonás Velasco, 
  • Héctor J. Selley

Abstract

This paper presents an adaptive random search approach to a short-term generation scheduling problem with network constraints, which determines the start-up and shutdown schedules of thermal units over a given planning horizon. The model accounts for the transmission network through capacity limits and line losses, and is stated as a Mixed-Integer Nonlinear Programming (MINLP) problem with binary variables. The proposed heuristic is a population-based method that generates a set of new potential solutions via a random search strategy based on the Markov Chain Monte Carlo method. Its key feature is that the noise level of the random search is adaptively controlled in order to balance exploration and exploitation of the search space. To further improve the solutions, we couple a local search into the random search process. Several test systems are used to evaluate the performance of the proposed heuristic, and a commercial optimizer serves as the reference for solution quality. The proposed algorithm achieves a significant reduction in computational effort with respect to the full-scale outer approximation commercial solver. Numerical results show the potential and robustness of our approach.

Introduction

In smart grid operations, Short Term Generation Scheduling (unit commitment) is a scheduling problem with two objectives: the power dispatch, which distributes the system load among the generating units over a given time horizon, and the start-up and shutdown scheduling of the generators.

The planning horizon in the short term usually spans from 24 to 168 hours (see for instance [1]). Short Term Generation Scheduling is a very large-scale, time-varying, non-convex, mixed-integer optimization problem; for these reasons, it belongs to the class of NP-hard problems.

The Short Term Generation Scheduling (STGS) problem with network constraints is an NP-hard Mixed-Integer Nonlinear Programming problem, and exact methods are known to be inefficient for large power systems (see [2–5]). For this reason, we present an alternative for generating good solutions in short computing times, based on an Adaptive Random Search strategy.

One advantage of Lagrangian relaxation is that it eliminates the constraints that complicate the structure of the original problem. Because our work uses Lagrangian relaxation, the model is drastically simplified, which allows the AGS algorithm to be applied in a natural way.

Lagrangian relaxation (LR) is the most widely used method for solving STGS by relaxing some of its constraints [6, 7]. The method is based on duality theory and tries to find the optimal dual variables that maximize the Lagrangian dual function.

An effective way to implement LR is to apply the divide-and-conquer principle, that is, to decompose the STGS into a master problem and several manageable subproblems that are solved separately. The subproblems are linked by Lagrange multipliers that are added to the master problem to yield a dual problem, whose lower dimensionality makes it easier to solve than the primal. The multipliers are updated through different techniques; a popular choice is a subgradient method, but the major difficulty associated with this option is the feasibility of the initial solution. This phenomenon is due to the dual nature of the algorithm.

The commitment states obtained by LR are generally infeasible because STGS is a MINLP (Mixed-Integer Nonlinear Programming) problem. The duality gap is an inherent disadvantage of this technique, i.e., the dual solution may be far from the optimal solution (see [8]). Feasible commitment states are obtained only after adjustment: heuristic methods are usually needed to turn the dual solution obtained by LR into a feasible one, and the ways of obtaining such feasible solutions vary notably.

Moreover, the random search algorithms most frequently used to solve STGS are Evolutionary Programming (EP) [9, 10], Genetic Algorithms (GA) [11, 12], Simulated Annealing (SA) [13, 14], Particle Swarm Optimization (PSO) [15] and Tabu Search (TS) [16]. Genetic algorithms and particle swarm optimization have become increasingly popular in recent years in science and engineering disciplines, attracting much attention because of their great potential for modeling engineering problems [17–19].

For instance, in [17] a GA is used to identify the model parameters of an ultracapacitor based on time-domain data. In [18] a GA is employed to extract the optimal model parameters based on the Hybrid Pulse Power Characterization (HPPC) test. Finally, in [19] a GA is proposed for effectively achieving optimal component sizing of a hybrid energy storage system in an electric vehicle.

In [15], a new approach based on multi-particle swarm optimization (MPSO) is presented to solve the unit commitment (UC) problem, while in [13, 14] a simulated annealing algorithm (SAA) is presented for the same purpose.

However, the results obtained by EP, GA, SA, PSO and TS require a considerable amount of computational time, especially for large system sizes. These techniques are therefore not well suited to the STGS problem, given its NP-hard nature.

Since the main challenge for any effective LR-based method for solving UC is obtaining feasible solutions, we present an alternative that generates feasible solutions in short computing times. Our proposal is based on an Adaptive Gibbs Sampling (AGS) algorithm.

Random search is one of the pillars of most heuristic methods for engineering optimization problems. The success of these methods in finding good (near-optimal) solutions is mainly achieved by tuning parameters. Perturbing a solution makes it possible to explore large regions of the landscape and potentially escape from local optima, resulting in the exploration of different local optima. The GA introduces stochastic perturbations through its mutations [20], while SA does so through its temperature levels and cooling schedule [21]. A bad selection of the perturbation parameters in these methods can result in a large risk of getting trapped in local regions. This points to the need for an accurate selection of the step-size parameters that dictate the amount of noise in the random search. On the other hand, the optimal scale of this perturbation, needed to achieve a good balance between exploration and exploitation, depends on the shape of the search landscape associated with the optimization problem. This dependence makes parameter selection a major issue in the design of heuristic algorithms.
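
The exploration-exploitation trade-off described above can be illustrated with a minimal adaptive random search in which the Gaussian step size grows or shrinks according to the observed acceptance rate. This is only an illustrative sketch: the function names, the acceptance-rate window and the doubling/halving rule are our own choices, not the AGS algorithm of this paper.

```python
import random

def adaptive_random_search(f, x0, iters=2000, window=50, target=0.2, seed=7):
    """Minimize f by Gaussian perturbations whose scale adapts to the
    fraction of accepted (improving) moves: a high acceptance rate
    suggests larger steps are safe (exploration), a low one that the
    search should exploit locally with smaller steps."""
    rng = random.Random(seed)
    x, fx, scale, accepted = list(x0), f(x0), 1.0, 0
    for t in range(1, iters + 1):
        cand = [xi + rng.gauss(0.0, scale) for xi in x]
        fc = f(cand)
        if fc < fx:                      # greedy acceptance rule
            x, fx, accepted = cand, fc, accepted + 1
        if t % window == 0:              # periodically adapt the noise level
            rate = accepted / window
            scale *= 2.0 if rate > target else 0.5
            accepted = 0
    return x, fx
```

On a simple quadratic landscape, for example, the step size first expands while improvements are easy to find and then decays automatically as the search closes in on the minimum.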

In this study, we present an adaptive random search approach based on the Markov Chain Monte Carlo method to address short-term generation scheduling with network constraints. The key to the proposed method is that the noise level of the random search can be adaptively controlled according to the landscape: the noise intensity allows exploration of the entire search space, while noise reduction allows exploitation of the promising regions where local optima exist. The algorithm is described in detail and benchmarked on unconstrained global optimization problems in [22], where its effectiveness and robustness in finding reasonable-quality solutions were demonstrated on several complex problems. Motivated by this performance, we apply the method to the present engineering optimization problem. In addition, we use a full-scale outer approximation commercial solver to assess the quality of the solutions provided by the proposed method.

This paper is organized as follows. The next section presents the problem formulation of STGS. In the section "Solution methodology" we introduce a novel optimization method, the AGS algorithm, which builds on the infeasible solutions calculated by the LR algorithm. The section "Computational experience" reports our experiments and results on three test systems. Finally, we draw our conclusions and summarize future work in the last section.

Problem formulation

The STGS problem with network constraints consists of determining the mix of generators and their estimated output levels to meet the expected demand for electricity over a given time horizon (a day or a week), while satisfying the load demand, the spinning reserve requirement and the transmission network constraints.

In this work, we address a STGS based on the notation presented in [23], where network constraints are represented through a DC model (see [1]) and we consider a multi-period time horizon. The objective is to minimize a function that includes fixed costs, start-up costs and operating costs. A second order polynomial describes the variable costs as a function of the electric power. The following notation is used in the mathematical model:

Sets:

  1. J Set of indices of all power plants.
  2. K Set of period indices.
  3. N Set of indices of all nodes.
  4. Λn Set of indices of the power plants j at node n.
  5. Ωn Set of indices of nodes connected and adjacent to node n.

Constants:

  1. Aj Start up cost of power plant j.
  2. Bnm Susceptance of line nm.
  3. Cnm Transmission capacity limit of line nm.
  4. Dnk Load demand at node n during period k.
  5. Ej(tjk) Nonlinear function representing the operational cost of power plant j as a function of its power output in period k.
  6. Fj Fixed cost of power plant j.
  7. Knm Conductance of line nm.
  8. Rk Spinning reserve requirement during period k.
  9. Maximum power output of plant j.
  10. Minimum power output of plant j.
  11. nr Reference node with angle zero.

Decision variables:

  1. tjk Power output of plant j in period k.
  2. δnk Angle of node n in period k.

Objective function:

  1. (1)

Constraints:

  1. Load balance (2)
  2. Spinning reserve (3)
  3. Generation limit (4)
  4. Transmission capacity limits (5)
  5. Start-up and shut-down of power units (6)
  6. Angular limit voltage (7)

The objective Eq (1) is to minimize the start-up cost Ajyjk and the operating cost of each plant, where the operating cost of plant j includes a fixed cost Fjvjk and a variable cost Ej(tjk). There is one power balance Constraint (2) per node and time period: in each period, production has to satisfy the demand and the losses at each node. Power line losses are modeled through a cosine approximation, and the demand for electric energy is assumed known and discretized into periods. There are many approximations for modeling power line losses, some linear and some non-linear; further details of the cosine approximation can be found in [24]. Spinning reserve requirements are modeled in Eq (3): in each period, the running units have to be able to satisfy the demand plus a pre-specified spinning reserve. In Eq (4), each unit has technical lower and upper bounds on its power production. The transmission capacity limits of the lines in Eq (5) serve the purpose of avoiding problems in the dynamic stability of the system. Constraint (6) describes how the units start up, run and shut down (a running unit cannot be started up). Finally, the angles at all buses have the lower and upper bounds given by Eq (7).
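
As a concrete illustration of the loss model, one common form of the cosine approximation (in the style of [24]) expresses the losses of line nm through its conductance Knm and the angle difference. The constant factor below is an assumption for illustration; the exact expression used in the paper's Eq (2) is not reproduced here.

```python
import math

def line_losses(K_nm: float, delta_n: float, delta_m: float) -> float:
    """Cosine approximation of active power losses on line nm:
    zero when the voltage angles agree, and growing roughly
    quadratically with the angle difference for small angles."""
    return 2.0 * K_nm * (1.0 - math.cos(delta_n - delta_m))
```

For a small angle difference x, 1 − cos x ≈ x²/2, so the expression reduces to the familiar quadratic loss Knm·x², which is why linear and quadratic loss models are close to this one near the operating point.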

Solution methodology

In this paper, we extend the AGS algorithm for continuous optimization (see [22]) to tackle mixed-variable optimization problems. First, LR is used purely for bounding purposes; the AGS algorithm is then used to construct feasible solutions in reasonable computational times.

Lagrangian relaxation framework

Because STGS is an NP-hard problem, the global optimal solution cannot be obtained for large-scale power systems. For this reason, we use LR to calculate a lower bound on the optimal solution. The main disadvantage of this method is the difference between the primal and dual solutions, namely the duality gap, which means that the solutions obtained by LR are infeasible for the original problem.

LR is based on duality theory and tries to find the optimal dual variables that maximize the Lagrangian dual. These dual variables (Lagrange multipliers) need to be updated in order to improve the lower bound. In this work, we use the subgradient method to update the Lagrange multipliers; the parameters of the subgradient method and other specifications are given in [25].

For LR, we decompose the STGS problem into n subproblems, one per generation node [25], and the DICOPT solver (see [26]) is used to maximize the Lagrangian bound. The LR algorithm is used only for bounding purposes. Since the main challenge for an effective LR-based STGS method is obtaining feasible solutions, we present an alternative for constructing feasible solutions in short computing times, based on the AGS algorithm.

In this paper, the objective function used by the AGS algorithm does not include an explicit mechanism for handling constraints. For this reason, we apply LR to the original problem: all of the system constraints are dualized into the original objective function. This manipulation is necessary because AGS requires an unconstrained function.

Applying Lagrange duality to the Constraints (2), (3) and (5) in STGS yields:

Dual Function: (8)

Dual Subproblem: (9) where λnk is the Lagrange multiplier associated with the power balance constraint of node n in period k; μk is the Lagrange multiplier associated with the spinning reserve requirement in period k; and γnk, βnk are the Lagrange multipliers associated with the transmission capacity limits of node n in period k. The above model is subject to Constraints (4), (6) and (7), called box constraints. Dualizing the complicating constraints produces a dual subproblem that is less expensive to solve, which speeds up its solution. The algorithm is initialized with a set of Lagrange multipliers; in this case, we define these multipliers heuristically, drawing on knowledge of the original problem. The subgradient method is then used to improve the Lagrange multipliers.

Since the Lagrangian function formed by dualizing the complicating constraints is concave and non-differentiable, the AGS algorithm, which is able to optimize such functions, can be applied to the dual function. The dual function can also be decomposed into subproblems, one per generation unit; however, this procedure is not explored in this paper.

A brief summary of the subgradient method is given in Algorithm 1 below.

Algorithm 1: Subgradient method

Input: Instance of ZDS

Output: Ssub: Infeasible solution.

1 begin

2  Set k ← 0, h ← 0 and Ssub ← ∅

3  Choose and λ0 ∈ [0, 2]

4  repeat

5   Compute LZDS(αk) and a vector where it is achieved

6   Determine the subgradient direction dk of the function LZDS at αk

7   Determine step size tk ← λk(UB − LZDS(αk))/||dk||²

8   Update multiplier vector by using αk+1 ← max{0, αk + tkdk}

9   if ( better than Ssub) then

10    

11   else

12    h ← h + 1

13    if (h is equal to some fixed number of iterations) then

14     λk+1 ← λk/2

15   k ← k + 1

16  until (termination criteria are met);

17  return Ssub
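
A single multiplier update (lines 5 to 8 of Algorithm 1) can be sketched as follows. The function is generic: the dual value L(αk), an upper bound UB and the subgradient dk are supplied by the caller, and the argument names are our own.

```python
def subgradient_step(alpha, d, dual_value, upper_bound, lam):
    """One update of the Lagrange multipliers: move along the
    subgradient d with the classical step
    t_k = lam * (UB - L(alpha_k)) / ||d_k||^2,
    then project the result back onto the nonnegative orthant."""
    norm_sq = sum(di * di for di in d)
    t = lam * (upper_bound - dual_value) / norm_sq
    return [max(0.0, ai + t * di) for ai, di in zip(alpha, d)]
```

When the dual value stalls for a fixed number of iterations, Algorithm 1 halves λ, which shrinks all subsequent steps and forces the multipliers to settle.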

The adaptive Gibbs sampling algorithm

The AGS algorithm is based on the Markov Chain Monte Carlo (MCMC) method, combining the one-dimensional Metropolis-Hastings algorithm with the multi-dimensional Gibbs sampler. This MCMC algorithm, called the Metropolis-within-Gibbs (MWG) algorithm, was suggested in [27]. The proposed optimization heuristic is a population-based method that generates a set of new potential solutions through the random process given by the MWG algorithm. Global information about the landscape is extracted from the population of solutions generated at each iteration; with this information, the random process identifies the most promising regions of the search space and then uses them to generate another set of new potential solutions.

The key feature of the proposed method is that the noise level of the random search can be adaptively controlled according to the landscape. Thus, the noise intensity allows exploration of the entire search space, while noise reduction allows exploitation of the regions where local optima exist. In order to improve the obtained solutions, we couple a local search into the random search process. The algorithm is described in detail and implemented for unconstrained global optimization problems in [22]. A brief summary of the AGS method is given in Algorithm 2 below.

Algorithm 2: AGS(M, co, β, ε, λ)

Input: M, co, β, ε, λ ≔ AGS parameters.

Output: Sbest: An optimized and feasible solution.

1 begin

2  Sbest ← ∅

3   Initialization( )

4  repeat

5    Sampling()

6    Selection()

7    Update(co, λ)

9    Mutation()

10  if () then

11    Intensification()

12  if ( better than Sbest) then

13    

14  until (termination criteria are met);

15  return Sbest

General form of the AGS algorithm.

  1. Step 0. Initialization: Randomly select an initial solution within the feasible region and go to Step 1. Note that in this step we could also provide an initial solution obtained by another method; in this paper we use the solution provided by the subgradient method.
  2. Step 1. Sampling: Generate a candidate point for each variable, where Z is a standard normal random variable and cn is a scale parameter. The candidate point is accepted as the next value with the given acceptance probability; if the candidate point is not accepted, the current value of x is retained. Simulating one value in turn for each individual variable is called one cycle of Gibbs sampling, in which a new solution vector is built. A population of M solutions is drawn by performing M Gibbs cycles. The output of the sampling step is a population and a vector of acceptance rates. Finally, go to Step 2.
  3. Step 2. Selection: Estimate a mode solution for each variable in the population and go to Step 3.
  4. Step 3. Update: Adjust the scale parameters by the update rule, where the constant is chosen so that the acceptance rates are initially close to zero. The current iteration number τ is initialized at τ = 1 and increased iteratively as τ = τ + 1. Go to Step 4.
  5. Step 4. Mutation: Replace the variable value by a random value within the search space according to the mutation rule, where rand is a uniform random variable within the feasible region. If the condition on the average of the acceptance-rate vector over all variables holds, go to Step 5; otherwise return to Step 1.
  6. Step 5. Intensification: Improve the solution via a local search strategy; an arbitrary local search method can be used. In this paper, we use the Nelder-Mead method as the local search strategy [28]. Finally, return to Step 1.
  7. Parameter settings. The AGS parameters used in this paper are chosen so as to form a robust setting and are therefore, in our experience, applicable to a wide range of optimization problems. The parameters used are: population size M = 100, initial scale parameters co = (0.1, …, 0.1), β = 0.7, ε = 0.95 and λ = 2.

The algorithm stops when the maximum number of function evaluations is reached, or when the incumbent solution has not improved within a given time period. In Step 2, the global information generated by the population allows the most promising (most likely) regions of the search space to be identified; the starting point for the next iteration therefore belongs to such a promising region. Note that in Steps 3 and 4 the noise level of the random search is adaptively controlled through the acceptance rates, which provide information about the landscape. Acceptance-rate values close to 1 make it possible to identify local optima and exploit them via local search. In addition, the mutation mechanism allows escape from local optima, avoiding entrapment and enabling exploration of other regions of the search space. In this way, the noise intensity allows exploration of the entire search space, and noise reduction allows exploitation of the promising regions where local optima exist.
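
One Gibbs cycle of the sampling step can be sketched as below. The Boltzmann-style target exp(−f(x)/T) with a fixed temperature T is our illustrative assumption, since the paper's exact acceptance formula appears only in [22]; the per-variable acceptance flags are what an AGS-style scheme would aggregate into acceptance rates.

```python
import math
import random

def gibbs_cycle(f, x, scales, T=1.0, rng=None):
    """One Metropolis-within-Gibbs cycle: propose a Gaussian move for
    each coordinate in turn, with per-variable scale c_i, and accept
    it with the Metropolis probability for the target exp(-f(x)/T).
    Returns the new point and the per-variable acceptance flags."""
    rng = rng or random.Random()
    x = list(x)
    fx = f(x)
    accepted = [0] * len(x)
    for i in range(len(x)):
        cand = list(x)
        cand[i] = x[i] + scales[i] * rng.gauss(0.0, 1.0)
        fc = f(cand)
        # downhill moves are always taken; uphill moves with prob exp(-dF/T)
        if fc <= fx or rng.random() < math.exp((fx - fc) / T):
            x, fx, accepted[i] = cand, fc, 1
    return x, accepted
```

Repeating M such cycles yields the population of the sampling step. Large scales depress the acceptance rates (exploration); small scales raise them toward 1, which is the signal AGS uses to trigger local exploitation.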

AGS algorithm for the STGS problem.

Before using the AGS algorithm to solve the STGS problem, we must define the representation of a solution. A solution is composed of two parts: the first contains the continuous variables and the second the integer variables. In the STGS problem, the continuous part contains the power output of plant j in period k and the angle of node n in period k, respectively. The integer part contains the variables vjk, which indicate whether plant j is committed in period k, and ynk, which indicate whether the plant is started up at the beginning of period k.

In the initialization process (Step 3 in Algorithm 2), an initial solution is provided by the LR method (see the section "Lagrangian relaxation framework"). Next, in Step 4, the integer variables are fixed and the continuous variables are modified by the Gibbs cycles. The integer variables can then be modified after Step 9 in Algorithm 2, by the following rule:

After modifying the value of ynk, vnk is set to a copy of it, that is, vnk = ynk.

Algorithm 3 shows the exchange of information between the Lagrangian relaxation scheme and the AGS algorithm.

Algorithm 3: AGSm(M, co, β, ε, λ)

Input: M, co, β, ε, λ ≔ AGS parameters.

Output: Sbest: An optimized and feasible solution.

1 begin

2  SbestSubgradient()

3  

4  repeat

5    Sampling()

6    Selection()

7    Update(co, λ)

8    Mutation()

9   Generate k at random, k ∈ {1, 2, …, K}

10   if (ynk−1 = 1) then

11    ynk ← 0

12    vnkynk

13   if (ynk−1 = 0) then

14    r ← rand(0, 1)

15    ynk ← r

16    vnkynk

17   if () then

18     Intensification()

19   if ( better than Sbest) then

20    

21  until (termination criteria are met);

22  return Sbest
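
Lines 9 to 16 of Algorithm 3 perturb the start-up schedule of a unit. A sketch of this rule for a single unit over K periods follows; representing the schedule as 0/1 lists and rounding rand(0, 1) to a binary value are our assumptions, since the paper leaves the coercion of r to {0, 1} implicit.

```python
import random

def perturb_startups(y, v, rng):
    """Pick a period k at random. If the unit was started up in the
    previous period, force the start-up flag to 0 (a running unit
    cannot be started up again); otherwise draw a fresh 0/1 start-up
    decision. The commitment flag v then copies y, as in v_nk = y_nk."""
    K = len(y)
    k = rng.randrange(1, K)          # k = 1, ..., K-1; period 0 is given
    if y[k - 1] == 1:
        y[k] = 0
    else:
        y[k] = round(rng.random())   # assumed rounding of rand(0, 1)
    v[k] = y[k]                      # v copies the modified decision
    return y, v
```

After this integer perturbation, the continuous variables are re-sampled by the Gibbs cycles under the new commitment pattern, which is how Algorithm 3 couples the discrete and continuous parts of the solution.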

Computational experience

The AGS algorithm was developed in C++ and run on a desktop computer with an AMD Phenom II N970 quad-core 2.2 GHz processor and 8 GB of RAM. For Lagrangian relaxation, the mathematical model of STGS was implemented in the GAMS modeling environment using the DICOPT solver (see [26] and [29]) for the MINLP problems (the dual subproblems and the full-scale STGS model).

Three test systems are used to evaluate the performance of the proposed AGS algorithm on this engineering optimization problem. The test cases used in the experiments are power systems of 24, 104 and 118 units with a planning horizon of 24 h.

To illustrate the structure of the test systems, we describe the basic data of the IEEE 24-bus test system. The single-line diagram of the IEEE 24-bus test system, with 24 nodes, 24 thermal units and 38 transmission lines, is shown in Fig 1. The data used for the IEEE 24-bus system are based on the original system, which is available at [30]. The minimum and maximum capacities of the generating units are presented in Table 1. Table 2 lists the costs, initial state and power output of each generating unit at time 0. Table 3 and Fig 2 illustrate the load profile. The node locations of the loads, as well as the load at each node as a percentage of the total system demand, are presented in Table 4. The transmission line data are given in Table 5; the lines are characterized by the nodes they connect, as well as the reactance and capacity of each line.

The 104-bus system data were extracted from [24] and correspond to the energy system of mainland Spain. The data used for the IEEE 118-bus system were extracted from the original system, which is available at [31]. These data contain information about the reactance and capacitance of the transmission lines, the demand profile, and the generation costs.

Results

Table 6 shows the computational complexity of the problem, which depends on the number of thermal units connected in each test system. The table lists the number of variables and constraints for each test system.

Since the AGS algorithm is a stochastic approach, 20 runs were executed on each test case and the average cost over the 20 runs was determined. The comparison between the AGS algorithm and the full-scale outer approximation commercial solver (DICOPT) is shown in Table 7. The results show that the AGS solutions are very close to the GAMS solutions z*: the GAP is below 0.05% in all cases. Fig 3 shows the average GAP over the three test systems using AGS, and Table 7 also reports the standard deviation. The GAP between the best solution obtained with the GAMS solver and the AGS solution zAGS is calculated as GAP = (z* − zAGS)/z*.
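
The optimality gap reported in Table 7 and Fig 3 is simply the relative difference between the solver reference and the heuristic value; a one-line sketch (the function name is ours):

```python
def relative_gap(z_star: float, z_ags: float) -> float:
    """GAP = (z* - z_AGS) / z*, as defined in the text; for example
    a gap of 0.0005 corresponds to 0.05%."""
    return (z_star - z_ags) / z_star
```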

Additionally, Table 7 shows that the CPU time of the proposed algorithm is much lower than that of DICOPT. Fig 4 shows the time evolution of the commercial solver DICOPT versus the AGS algorithm. The maximum CPU time of the AGS algorithm across the comparisons with DICOPT is 230 seconds, and the smallest time improvement of the AGS algorithm is obtained on the IEEE 24-bus test system. The total CPU time required to carry out these test systems was about 180 seconds.

Conclusions and future work

In this paper, we presented a novel optimization method, the Adaptive Gibbs Sampling (AGS) algorithm, to address Short Term Generation Scheduling with network constraints, which determines the start-up and shutdown schedules of thermal units over a given planning horizon. The proposed heuristic is a population-based method that generates a set of new potential solutions via a random search strategy based on a Markov Chain Monte Carlo method. The key to the proposed method is that the noise level of the random search is adaptively controlled to balance exploration and exploitation of the search space, and a local search is coupled into the random search process to improve the solutions. The paper thus proposes an enhanced method that combines Lagrangian relaxation and AGS to solve STGS, with the AGS algorithm building on the infeasible solutions calculated by the LR algorithm.

We evaluated the performance of the AGS algorithm against a full-scale outer approximation commercial solver in order to compare the quality of the solutions provided by the proposed method. Several groups of instances were tested to evaluate the performance of the proposed heuristic. The experimental results show that the AGS algorithm is robust, as it is capable of finding reasonable-quality solutions for this engineering optimization problem. Our AGS method converges to a near-optimal solution at a faster rate than the direct solution obtained by the DICOPT solver. In addition, AGS produces much tighter bounds on the optimal solution values than standard Lagrangian relaxation.

In the future, we will extend the experimentation to other test systems with different structures in order to evaluate the performance of the proposed algorithm, and compare its results against other heuristic methods.

Supporting information

S1 File. Datafile for the IEEE 24-bus test system.

https://doi.org/10.1371/journal.pone.0172459.s001

(TXT)

S2 File. Datafile for the IEEE 118-bus test system.

https://doi.org/10.1371/journal.pone.0172459.s002

(TXT)

Author Contributions

  1. Conceptualization: JAM JV.
  2. Data curation: JAM JV HJS.
  3. Formal analysis: JAM.
  4. Funding acquisition: JAM JV HJS.
  5. Investigation: JAM JV.
  6. Methodology: JAM JV.
  7. Project administration: JAM JV HJS.
  8. Resources: JAM JV HJS.
  9. Software: JAM JV HJS.
  10. Supervision: JAM.
  11. Validation: JV.
  12. Visualization: JAM JV.
  13. Writing – original draft: HJS.
  14. Writing – review & editing: HJS.

References

  1. Wood AJ, Wollenberg BF. Power Generation, Operation, and Control. A Wiley-Interscience publication. Wiley; 1996. Available from: https://books.google.com.mx/books?id=xg6yQgAACAAJ.
  2. Tseng CL. On Power System Generation Unit Commitment Problems. University of California, Berkeley; 1996. Available from: https://books.google.com.mx/books?id=UCTvHwAACAAJ.
  3. Saneifard S, Prasad NR, Smolleck HA. A fuzzy logic approach to unit commitment. IEEE Transactions on Power Systems. 1997;12(2):988–995.
  4. Valenzuela J, Smith A. A Seeded Memetic Algorithm for Large Unit Commitment Problems. Journal of Heuristics. 2002;8(2):173–195.
  5. Sasaki H, Watanabe M, Kubokawa J, Yorino N, Yokoyama R. A solution method of unit commitment by artificial neural networks. IEEE Transactions on Power Systems. 1992;7(3):974–981.
  6. Ruzic S, Rajakovic N. A new approach for solving extended unit commitment problem. IEEE Transactions on Power Systems. 1991;6(1):269–277.
  7. Virmani S, Adrian EC, Imhof K, Mukherjee S. Implementation of a Lagrangian relaxation based unit commitment problem. IEEE Transactions on Power Systems. 1989;4(4):1373–1380.
  8. Ferreira LAFM. On the duality gap for thermal unit commitment problems. In: 1993 IEEE International Symposium on Circuits and Systems (ISCAS '93); 1993. p. 2204–2207, vol. 4.
  9. Juste KA, Kita H, Tanaka E, Hasegawa J. An evolutionary programming solution to the unit commitment problem. IEEE Transactions on Power Systems. 1999;14(4):1452–1459.
  10. Rajan CCA, Mohan MR. An evolutionary programming-based tabu search method for solving the unit commitment problem. IEEE Transactions on Power Systems. 2004;19(1):577–585.
  11. Kazarlis SA, Bakirtzis AG, Petridis V. A genetic algorithm solution to the unit commitment problem. IEEE Transactions on Power Systems. 1996;11(1):83–92.
  12. Swarup KS, Yamashiro S. Unit commitment solution methodology using genetic algorithm. IEEE Transactions on Power Systems. 2002;17(1):87–91.
  13. Mantawy AH, Abdel-Magid YL, Selim SZ. A simulated annealing algorithm for unit commitment. IEEE Transactions on Power Systems. 1998;13(1):197–204.
  14. Zhuang F, Galiana FD. Unit commitment by simulated annealing. IEEE Transactions on Power Systems. 1990;5(1):311–318.
  15. Zhao B, Guo CX, Bai BR, Cao YJ. An improved particle swarm optimization algorithm for unit commitment. International Journal of Electrical Power and Energy Systems. 2006;28(7):482–490.
  16. Mantawy AH, Abdel-Magid YL, Selim SZ. Unit commitment by tabu search. IEE Proceedings: Generation, Transmission and Distribution. 1998;145(1):56–64.
  17. Zhang L, Hu X, Wang Z, Sun F, Dorrell DG. Fractional-order modeling and State-of-Charge estimation for ultracapacitors. Journal of Power Sources. 2016;314:28–34.
  18. Zhang L, Wang Z, Hu X, Sun F, Dorrell DG. A comparative study of equivalent circuit models of ultracapacitors for electric vehicles. Journal of Power Sources. 2015;274:899–906.
  19. Zhang L, Dorrell DG. Genetic Algorithm based optimal component sizing for an electric vehicle. In: IECON 2013—39th Annual Conference of the IEEE Industrial Electronics Society; 2013. p. 7331–7336.
  20. Tang KS, Man KF, Kwong S, He Q. Genetic algorithms and their applications. IEEE Signal Processing Magazine. 1996;13(6):22–37.
  21. Kirkpatrick S, Gelatt CD, Vecchi MP. Optimization by Simulated Annealing. Science. 1983;220(4598):671–680. pmid:17813860
  22. Velasco J, Saucedo-Espinosa MA, Escalante HJ, Mendoza K, Villarreal-Rodríguez CE, Chacón-Mondragón OL, et al. An Adaptive Random Search for Unconstrained Global Optimization. Computación y Sistemas. 2014;18:243–257.
  23. Marmolejo JA, Rodríguez R. Fat Tail Model for Simulating Test Systems in Multiperiod Unit Commitment. Mathematical Problems in Engineering. 2015.
  24. Alguacil N, Conejo AJ. Multiperiod optimal power flow using Benders decomposition. IEEE Transactions on Power Systems. 2000;15(1):196–201.
  25. Marmolejo JA, Litvinchev I, Aceves R, Ramírez JM. Multiperiod optimal planning of thermal generation using cross decomposition. Journal of Computer and Systems Sciences International. 2011;50(5):793–804.
  26. GAMS Development Corporation, Washington DC. GAMS: The Solver Manuals; CONOPT; CPLEX; DICOPT; LAMPS; MILES; MINOS; OSL; PATH; XA; ZOOM. GAMS Development Corporation; 1994. Available from: https://books.google.com.mx/books?id=qaFgngEACAAJ.
  27. Tierney L. Markov Chains for Exploring Posterior Distributions. Annals of Statistics. 1994;22(4):1701–1728.
  28. Nelder JA, Mead R. A Simplex Method for Function Minimization. The Computer Journal. 1965;7(4):308–313.
  29. Brooke A, Kendrick D, Meeraus A. GAMS: A User's Guide. Boyd & Fraser Publishing Company; 1998.
  30. Institute IT. IEEE 24 bus system; 2016. Available from: http://icseg.iti.illinois.edu/ieee-24-bus-system/.
  31. Institute IT. IEEE 118 bus system; 2016. Available from: http://icseg.iti.illinois.edu/ieee-118-bus-system/.