
Spectral-like conjugate gradient methods with sufficient descent property for vector optimization

  • Jamilu Yahaya,

    Roles Conceptualization, Formal analysis, Methodology, Software, Writing – original draft, Writing – review & editing

Affiliations Center of Excellence in Theoretical and Computational Science (TaCS-CoE) and KMUTT Fixed Point Research Laboratory, Room SCL 802 Fixed Point Laboratory, Science Laboratory Building, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), Thung Khru, Bangkok, Thailand; Department of Mathematics, Faculty of Physical Sciences, Ahmadu Bello University Zaria, Kaduna, Nigeria

  • Poom Kumam ,

    Roles Funding acquisition, Project administration, Supervision, Validation

    poom.kum@kmutt.ac.th

Affiliations Center of Excellence in Theoretical and Computational Science (TaCS-CoE) and KMUTT Fixed Point Research Laboratory, Room SCL 802 Fixed Point Laboratory, Science Laboratory Building, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), Thung Khru, Bangkok, Thailand; NCAO Research Center, Fixed Point Theory and Applications Research Group, Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), Thung Khru, Bangkok, Thailand

  • Sani Salisu ,

    Contributed equally to this work with: Sani Salisu, Kanokwan Sitthithakerngkiet

    Roles Formal analysis, Methodology, Validation, Writing – review & editing

    Affiliation Department of Mathematics, Faculty of Natural and Applied Sciences, Sule Lamido University Kafin Hausa, Jigawa, Nigeria

  • Kanokwan Sitthithakerngkiet

    Contributed equally to this work with: Sani Salisu, Kanokwan Sitthithakerngkiet

    Roles Funding acquisition, Investigation, Project administration, Validation

    Affiliation Intelligent and Nonlinear Dynamic Innovations Research Center, Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok (KMUTNB), Bangsue, Bangkok, Thailand

Abstract

Several conjugate gradient (CG) parameters have led to promising methods for optimization problems. However, some of these parameters, for example ‘PRP,’ ‘HS,’ and ‘DL,’ do not guarantee sufficient descent of the search direction. In this work, we introduce new spectral-like CG methods that achieve the sufficient descent property independently of any line search (LSE) and for arbitrary nonnegative CG parameters. We establish the global convergence of these methods for four different parameters using the Wolfe LSE. Our algorithm achieves this without regular restarts or convexity assumptions on the objective functions. The sequences generated by our algorithm identify points that satisfy the first-order necessary condition for Pareto optimality. We conduct computational experiments to showcase the implementation and effectiveness of the proposed methods. The proposed spectral-like methods, namely nonnegative SPRP, SHZ, SDL, and SHS, exhibit superior performance, in that order, outperforming the HZ and SP methods in terms of the number of iterations, function evaluations, and gradient evaluations.

1 Introduction

In recent times, the successful application of CG methods to solving vector optimization problems (VOPs) has attracted considerable attention, as detailed in [1]. Since then, these approaches have gained recognition for their simplicity and minimal memory requirements, proving effective in practice (see, for example, [2, 3] and their references).

Before exploring VOPs, let us consider some well-known CG parameters for the natural unconstrained optimization problem, which focuses on minimizing a continuously differentiable function F: ℝn → ℝ. The parameters include the βk of Polak-Ribière-Polyak (PRP) [4], Hestenes-Stiefel (HS) [5], Dai-Liao (DL) [6], and Hager-Zhang (HZ) [7, 8]. Other well-known CG methods include: a survey on DL [9], Fletcher-Reeves (FR) [10], Conjugate Descent (CD) [11], Dai-Yuan (DY) [12], and Liu-Storey (LS) [13]. In most cases, the convergence of a CG method based on these parameters is achieved only if the search direction attains a descent property or sufficient descent condition.
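
For concreteness, the following is a minimal sketch of a scalar CG iteration with the nonnegative PRP (PRP+) parameter. The Armijo backtracking used here is a simple stand-in for the Wolfe line searches discussed later, and the restart safeguard reflects the fact, noted above, that PRP alone does not guarantee descent; the function and variable names are illustrative.

```python
import numpy as np

def prp_plus_cg(func, grad, x0, tol=1e-8, max_iter=2000):
    """Sketch of a PRP+ conjugate gradient method for min F(t), t in R^n."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        if g.dot(d) >= 0:                    # safeguard: PRP+ alone does not
            d = -g                           # guarantee a descent direction
        step, shrink, c1 = 1.0, 0.5, 1e-4    # Armijo backtracking line search
        while func(x + step * d) > func(x) + c1 * step * g.dot(d):
            step *= shrink
        x_new = x + step * d
        g_new = grad(x_new)
        # PRP+ parameter: max{ <g_new, g_new - g> / ||g||^2 , 0 }
        beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)
        d = -g_new + beta * d                # CG direction update
        x, g = x_new, g_new
    return x

# usage: minimize the Rosenbrock function
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                 200 * (x[1] - x[0]**2)])
print(prp_plus_cg(rosen, rosen_grad, [-1.2, 1.0]))
```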

Another crucial iterative technique for optimization is the spectral gradient method introduced in [14], which has shown substantial performance. Later, in [15], the spectral gradient and CG methods were combined to give the first spectral conjugate gradient (SCG) method. The method uses the search direction (1) dk = −θk−1∇F(tk) + βkdk−1, with spectral parameter θk−1 and conjugate parameter βk; see [15]. The SCG method has been extensively investigated by several authors, including spectral CG with sufficient descent property [16, 17], spectral CG involving RMIL [18], and self-adjusting spectral hybrid DL CG [19].
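
As an illustration of the direction in (1), the snippet below assembles one spectral CG step. The Barzilai-Borwein quotient for θk−1 and the PRP choice for βk are illustrative assumptions of this sketch, not the specific choices of [15].

```python
import numpy as np

def spectral_cg_direction(g_new, g_old, d_old, s_old):
    """One spectral CG direction of the form d = -theta * g + beta * d_old."""
    y = g_new - g_old                                   # gradient difference
    sy = s_old.dot(y)
    theta = s_old.dot(s_old) / sy if sy > 0 else 1.0    # BB-type spectral parameter
    beta = g_new.dot(y) / g_old.dot(g_old)              # PRP-type conjugate parameter
    return -theta * g_new + beta * d_old
```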

Most of the parameters mentioned above have been considered for the VOP, in which the objective F: ℝn → ℝm is assumed to be a continuously differentiable function. Let Q ⊂ ℝm be a closed, convex, and pointed cone with nonempty interior. The unconstrained VOP is defined as (2) minQ F(t), t ∈ ℝn, where F = (F1, ⋯, Fm)T with Fi: ℝn → ℝ for i = 1, ⋯, m. Moreover, if Q = ℝm+, the nonnegative orthant, then the VOP reduces to multi-objective optimization (MOO). Also, if Q = ℝ+ and m = 1, then (2) reduces to single-objective optimization (SOO).

Several applications in industry and finance are instances of VOPs, where multiple objective functions are optimized concurrently. Consequently, it becomes imperative to determine a set of optimal points for the VOP [20–26]. Because a total order is lacking in ℝm when m ≥ 2, the solution to a VOP consists of a set of non-dominated points, often referred to as Pareto-optimal or efficient points. The challenge lies in identifying the solutions that strike the most advantageous balance. It is important to mention that (2) signifies minimizing F with respect to the ordering cone Q.

One way to approach VOPs is through scalarization techniques, which parameterize the problem into single-objective optimization problems whose solutions yield Pareto-optimal points. The decision-maker must choose the parameters, as they are not predetermined; making this choice can pose significant challenges or even become impossible for some problems [27–29]. Consequently, to overcome these drawbacks, descent-based algorithms have been suggested as solution approaches for VOPs, thanks to the works in [30, 31]. Numerous subsequent studies have followed this trajectory, exploring similar directions; see the survey on MOO descent methods in [32] and the references [33–36].
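
As a simple illustration of the scalarization idea, and of its dependence on user-chosen parameters, the sketch below applies a weighted-sum scalarization to a toy bi-objective problem; the weights and test functions are assumptions of the example.

```python
import numpy as np
from scipy.optimize import minimize

# toy bi-objective problem F(t) = (F1(t), F2(t))
F1 = lambda t: (t[0] - 1.0)**2 + t[1]**2
F2 = lambda t: t[0]**2 + (t[1] - 1.0)**2

# each weight w turns the VOP into one single-objective problem; the
# decision-maker must supply w, which is precisely the drawback noted above
for w in (0.1, 0.5, 0.9):
    res = minimize(lambda t: w * F1(t) + (1.0 - w) * F2(t), x0=np.zeros(2))
    print(f"w = {w}: Pareto point t* ~ {np.round(res.x, 3)}")
```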

In [1], the conjugate parameters of [4, 5, 10, 12, 37] were considered for VOPs. Their study encompassed numerical implementations of these methods, which were analyzed and discussed. Among these methods, on the considered test problems, the nonnegative PRP and HS showed exceptional performance in comparison to the others, while the CD and DY methods outperformed FR in terms of efficiency. Thereafter, Gonçalves and Prudente [38] extended the Hager-Zhang CG method to VOPs. For this method, the search direction does not guarantee the descent condition, even with an exact LSE. To address this issue, the authors proposed a self-adjusting HZ method, utilizing a sufficiently accurate LSE, which possesses the descent property. Other works in this direction include [39], based on a sufficiently accurate LSE, the first hybrid CG methods proposed for VOPs in [3], and some modified CG methods in [40].

Following the works in [15, 41, 42], He et al. [43] proposed SCG methods for VOPs. In contrast to the scalar case, the extension of SCG does not yield a descent property. As a result, the authors provided a modified self-adjusting SCG algorithm to induce the property through the algorithm, and they established convergence using a sufficiently accurate LSE that satisfies the Wolfe conditions. It is therefore natural to ask whether there is an SCG method for which the descent property is guaranteed without inducing it into the algorithm.

In this paper, we answer the above question affirmatively. We define a new form of search direction in the vector context, inspired by the work of [44]. Our method yields a sufficient descent property independent of any LSE for arbitrary nonnegative conjugate parameters. We consider four of these parameters and establish their convergence using the Wolfe LSE. We provide computational experiments to validate our findings; the outcomes are compared with the HZ and SP methods, showing that the proposed methods are promising.

The presentation of the work proceeds as follows: Section 2 discusses the basic notions and preliminaries. Section 3 presents the proposed algorithm and its convergence properties. Section 4 presents and discusses computational experiments, and Section 5 provides concluding remarks.

2 Preliminaries

This section presents the basic notions related to VOPs. For further details, see [1, 31, 45–47]. Throughout the subsequent sections, (2) signifies minimizing F with respect to the ordering cone Q. Moreover, for a generic Q, the partial order in ℝm generated by Q, denoted ≼Q, is given by z ≼Q y ⇔ y − z ∈ Q, and the strict order ≺Q is given by z ≺Q y ⇔ y − z ∈ int(Q). Moreover, the idea of optimality is substituted with “Pareto-optimal or efficient” and “weak Pareto-optimal or efficient” in VOPs.

Definition 1 [32] A vector t* ∈ ℝn is Pareto-optimal (efficient) if and only if there is no vector t ∈ ℝn s.t F(t) ≼Q F(t*) and F(t) ≠ F(t*).

Definition 2 [32] A vector t* ∈ ℝn is weak Pareto-optimal (weak efficient) if and only if there is no vector t ∈ ℝn s.t F(t) ≺Q F(t*).

Remark 3 Definition 1 implies Definition 2, but the converse is not generally true.
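
For the MOO case Q = ℝm+ (used in the numerical section), Definitions 1 and 2 reduce to the componentwise dominance tests sketched below; the helper names are illustrative.

```python
import numpy as np

def dominates(Fz, Fy):
    """Fz dominates Fy (Definition 1, Q = R^m_+): Fz <= Fy componentwise
    with Fz != Fy."""
    Fz, Fy = np.asarray(Fz, float), np.asarray(Fy, float)
    return bool(np.all(Fz <= Fy) and np.any(Fz < Fy))

def strictly_dominates(Fz, Fy):
    """Strict componentwise dominance, matching the weak-Pareto test of
    Definition 2 for Q = R^m_+."""
    return bool(np.all(np.asarray(Fz, float) < np.asarray(Fy, float)))

print(dominates([1, 2], [2, 2]), strictly_dominates([1, 1], [2, 2]))  # True True
```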

Below are some properties of the cone Q, including its positive polar cone Q* ≔ {w ∈ ℝm : ⟨w, q⟩ ≥ 0 for all q ∈ Q}. Notice that Q = Q**, by the convexity and closedness of Q. The cone generated by a set W ⊂ ℝm is denoted cone(W), and conv(W) represents the convex hull of W. Now, consider K ⊂ Q*, where 0 ∉ K and K is compact, such that (3) Q* = cone(conv(K)). For a generic Q, we define K as (4) K ≔ {w ∈ Q* : ‖w‖ = 1}, which satisfies the condition in (3). In this work, we adopt the definition of K provided in (4). The Jacobian of F at t is denoted as JF(t), and the image of JF(t) on ℝn is represented as Image(JF(t)).

The following condition (5) −int(Q) ∩ Image(JF(t̄)) = ∅ is considered the first-order necessary condition for Q-optimality of t̄ ∈ ℝn. If condition (5) is satisfied, the point t̄ is termed stationary or Q-critical. Conversely, if t̄ is not Q-critical, there exists a vector b ∈ ℝn s.t JF(t̄)b ∈ −int(Q), indicating that b is a Q-descent direction (Q-DD) at t̄. See, for example, [47].

Define ζ: ℝm → ℝ as ζ(z) ≔ max{⟨w, z⟩ : w ∈ K}. The function ζ is well-defined, since K is compact. We observe that ζ also characterizes −Q and −int(Q) as follows: z ∈ −Q ⇔ ζ(z) ≤ 0 and z ∈ −int(Q) ⇔ ζ(z) < 0.

Next, let us define f: ℝn × ℝn → ℝ by (6) f(t, d) ≔ ζ(JF(t)d) = max{⟨w, JF(t)d⟩ : w ∈ K}.

Definition 4 A direction d is a Q-DD at t if f(t, d) < 0, and t is a Q-critical point if f(t, d) ≥ 0 for all d.
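
In the MOO setting where K is the canonical basis of ℝm (as adopted in Section 4), the function f in (6) reduces to the largest component of JF(t)d, which makes Definition 4 straightforward to check; a minimal sketch:

```python
import numpy as np

def f_value(JF_t, d):
    """f(t, d) = max_{w in K} <w, JF(t) d>; with K the canonical basis of
    R^m this is the largest component of JF(t) d."""
    return float(np.max(JF_t @ np.asarray(d, float)))

def is_Q_descent_direction(JF_t, d):
    """Definition 4: d is a Q-DD at t iff f(t, d) < 0."""
    return f_value(JF_t, d) < 0.0

JF_t = np.array([[1.0, 0.0], [0.0, 1.0]])          # illustrative Jacobian
print(is_Q_descent_direction(JF_t, [-1.0, -1.0]))  # True
```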

Definition 5 If dk satisfies (7), where c > 0, then we say that dk satisfies the sufficient descent condition (SDC).

Lemma 6 [31]. If F is in C1, then the following statements hold:

(a) f(t, ℓd) = ℓ f(t, d) for all t, d ∈ ℝn and ℓ ≥ 0;

(b) the mapping (t, d) ↦ f(t, d) is continuous;

(c) |f(t, d) − f(t′, d)| ≤ ‖JF(t) − JF(t′)‖‖d‖ for all t, t′, d ∈ ℝn;

(d) if ‖JF(t) − JF(t′)‖ ≤ L‖t − t′‖, then |f(t, d) − f(t′, d)| ≤ L‖d‖‖t − t′‖.

For the CG method for VOPs, we define h: ℝn → ℝn by (8) h(t) ≔ argmin{f(t, d) + ½‖d‖² : d ∈ ℝn} and v: ℝn → ℝ by (9) v(t) ≔ f(t, h(t)) + ½‖h(t)‖². The iteration begins with an arbitrary t0 ∈ ℝn and is updated by (10) tk+1 = tk + ℓkdk, where ℓk > 0 is called the step size, computed via an LSE method, and the search direction dk is defined as (11) dk = h(tk) + βkdk−1, with d0 = h(t0) and βk as the conjugate parameter. The VOP versions of the parameters proposed in [1] are given in (12), where f(⋅, ⋅) is defined by Eq (6). The vector version of Hager-Zhang was given in [38] as (13), where ω1 ≔ −f(tk, h(tk)) + f(tk−1, h(tk)), ω2 ≔ f(tk, dk−1), ω3 ≔ f(tk, h(tk−1)) − f(tk−1, h(tk−1)), ω4 ≔ f(tk−1, dk−1), together with (14).

Now, consider the problem (15) min(τ, d) ∈ ℝ×ℝn τ + ½‖d‖² s.t. ⟨w, JF(t)d⟩ ≤ τ for all w ∈ K, whose solution yields h(t), with optimal value v(t); see, for instance, [46].
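
A minimal sketch of the subproblem (15) for MOO follows, using the reformulation above: minimize τ + ½‖d‖² subject to (JF(t)d)i ≤ τ. The paper solves this subproblem with Algencan; SLSQP from SciPy is used here only as a stand-in.

```python
import numpy as np
from scipy.optimize import minimize

def steepest_descent_direction(JF_t):
    """Return h(t) and v(t) of (8)-(9) by solving (15) for Q = R^m_+:
    min tau + 0.5 * ||d||^2  s.t.  (JF(t) d)_i <= tau for all i."""
    m, n = JF_t.shape
    obj = lambda z: z[n] + 0.5 * z[:n].dot(z[:n])           # z = (d, tau)
    cons = [{"type": "ineq", "fun": lambda z, i=i: z[n] - JF_t[i].dot(z[:n])}
            for i in range(m)]                              # tau - <g_i, d> >= 0
    res = minimize(obj, np.zeros(n + 1), constraints=cons, method="SLSQP")
    return res.x[:n], float(res.fun)                        # h(t), v(t)

h, v = steepest_descent_direction(np.array([[1.0, 0.0], [0.0, 1.0]]))
print(np.round(h, 3), round(v, 3))   # ~[-0.5 -0.5] and ~-0.25
```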

Here, we recall the LSEs most commonly used for conjugate gradient algorithms, namely the exact and the inexact LSE. We have an exact LSE if ℓ > 0 is computed as in (16). As stated in [1], the standard Wolfe and strong Wolfe LSEs for VOPs are as follows: ℓ > 0 fulfills the standard Wolfe conditions (WWC) if (17) F(tk + ℓdk) ≼Q F(tk) + ρℓ f(tk, dk) e and f(tk + ℓdk, dk) ≥ σ f(tk, dk). We find ℓ > 0 by means of the strong Wolfe conditions (SWC) if (18) F(tk + ℓdk) ≼Q F(tk) + ρℓ f(tk, dk) e and |f(tk + ℓdk, dk)| ≤ σ|f(tk, dk)|, where e ∈ Q s.t (19) holds and 0 < ρ < σ < 1.
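
The following sketch checks the vector Wolfe conditions (17)-(18) at a trial step for the MOO case. The callables F and f_val, the vector e, and the parameter values are assumptions of the example; in practice a dedicated procedure such as the one in [48] computes such a step.

```python
import numpy as np

def wolfe_conditions_hold(F, f_val, t, d, step, e, rho=1e-4, sigma=0.1,
                          strong=True):
    """Check (17) (standard) or (18) (strong) Wolfe at step size `step`:
    componentwise sufficient decrease plus a curvature condition on f.
    F: R^n -> R^m; f_val(t, d) evaluates f of (6); e is the vector in (19)."""
    fd = f_val(t, d)                                   # f(t_k, d_k) < 0
    t_new = t + step * np.asarray(d, float)
    decrease = bool(np.all(F(t_new) <= F(t) + rho * step * fd * e))
    fd_new = f_val(t_new, d)
    curvature = abs(fd_new) <= sigma * abs(fd) if strong else fd_new >= sigma * fd
    return decrease and curvature
```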

Lemma 7 [31] Consider h(t) and v(t) as defined in (8) and (9), respectively. Then:

(a) if t is a Q-critical point, then h(t) = 0 and v(t) = 0;

(b) if t is not a Q-critical point, then h(t) ≠ 0, v(t) < 0, and h(t) is a Q-DD;

(c) the maps h and v are continuous.

3 Spectral-like algorithm and convergence properties

In this section, we present the main algorithm and its convergence properties. Before delving into details, note the following standing convention: in view of Lemma 7(b), we take h(tk) ≠ 0; otherwise, tk is a Q-critical or stationary point.

Consider the vector version of the Dai-Liao conjugate parameter (20), where α > 0. Now, we consider the nonnegative parameters (21) and the iterative scheme (22), where dk is defined by (23). As mentioned in the preliminaries, ℓk > 0 is computed via an LSE strategy, sk−1 = tk − tk−1 = ℓk−1dk−1, and βk is a nonnegative parameter. Observe that, by Lemma 7(b), we have f(tk, h(tk)) ≠ 0.

The following sufficient descent condition (24) follows from Lemma 6(a) and (23). This implies that (7) always holds with some c ≤ 1, irrespective of the LSE and of the parameter βk.

Remark 8 It is easy to see that (23) can be expressed in the form (25), with an appropriate spectral parameter. This implies that our method is a special case of the SCG method. Thus, we now have a spectral CG method that achieves (7) without any LSE. Note that, when an exact LSE is employed, (23) becomes the well-known nonlinear CG direction (11) with dk−1 replaced by sk−1.
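
For intuition on why the sufficient descent in (24) holds for any nonnegative βk, recall the scalar prototype of [44] that inspired this work: projecting the conjugate term orthogonally to the gradient forces g⊤d = −‖g‖² regardless of βk. A minimal sketch of that scalar prototype (the vector direction (23) is its analogue in terms of f):

```python
import numpy as np

def two_term_direction(g, d_old, beta):
    """Cheng-type two-term direction [44]: d = -g + beta * (d_old - proj),
    where the projection removes the component of d_old along g, so that
    <g, d> = -||g||^2 for ANY beta -- sufficient descent without line search."""
    proj = (g.dot(d_old) / g.dot(g)) * g
    return -g + beta * (d_old - proj)

g, d_old = np.array([1.0, -2.0]), np.array([3.0, 1.0])
for beta in (0.0, 1.0, 7.5):                 # descent holds for arbitrary beta
    d = two_term_direction(g, d_old, beta)
    print(np.isclose(g.dot(d), -g.dot(g)))   # True each time
```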

Before we proceed to the convergence analysis, we will require the following significant assumptions.

Assumption 9 Let K be composed of a finite number of elements, and let there exist an open set Ω containing the level set {t ∈ ℝn : F(t) ≼Q F(t0)} and a constant L > 0 s.t ‖JF(t) − JF(t′)‖ ≤ L‖t − t′‖ for all t, t′ ∈ Ω.

Assumption 10 Let {Dk} ⊂ ℝm be a sequence with Dk+1 ≼Q Dk for all k; then there exists D ∈ ℝm s.t D ≼Q Dk for all k.

Assumption 11 The level set {t ∈ ℝn : F(t) ≼Q F(t0)} is bounded.

Note that, by Assumption 11, the generated sequence {tk} is bounded; this implies that there exists a constant s.t (26) holds for all k. Therefore, we have from Lemma 6(d) that there exists γ > 0 s.t (27) holds for all k. Also, by the boundedness of {f(tk, h(tk))} and Lemma 7(b), there exists δ > 0 s.t (28) holds for all k. By (27) and (28), we have (29) with ‖q‖ = 1. We stress that these assumptions naturally extend those considered in single-objective optimization.

We will need the following Zoutendijk Lemma in our convergence analysis.

Lemma 12 [1] Let Assumptions 9 and 10 hold. Consider (10), where dk is a Q-DD and ℓk satisfies (17). Then, the Zoutendijk condition (30) ∑k≥0 f(tk, dk)²/‖dk‖² < +∞ holds.

Next, we present the spectral-like algorithm for the nonnegative CG methods for VOPs.

Algorithm 1: A spectral-like algorithm for VOPs

Step 0: Choose an initial point and set k ← 1.

Step 1: Compute h(tk) and v(tk) as in (8) and (9), respectively.

Step 2: If v(tk) = 0, then stop. Otherwise, compute ℓk > 0 s.t condition (18) is satisfied.

Step 3: Compute dk as defined in (23), where βk is a nonnegative parameter.

Step 4: Set tk+1 = tk + ℓkdk, update k ← k + 1, and return to Step 1.

Remark 13 (i) Step 1 is justified by Lemma 7.

(ii) In Step 2, we use an LSE procedure to compute ℓk > 0 fulfilling (18). We emphasize that such ℓk > 0 exists under Assumptions 9 and 11, as detailed in [1, 48].

(iii) In Step 3, we compute dk using any one of the conjugate parameters in (21) at a time, and then move to the next step, where the iterates are updated.
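
Putting the steps together, the skeleton below mirrors Algorithm 1 for the MOO case. The callables `direction` (standing in for the spectral-like update (23) with a nonnegative βk) and `wolfe_step` (a line search enforcing (18)) are left abstract, `steepest_descent_direction` is the helper sketched after (15), and the stopping tolerance is an assumption.

```python
import numpy as np

def algorithm1(F, JF, t0, direction, wolfe_step, tol=1e-8, max_iter=5000):
    """Illustrative skeleton of Algorithm 1 (the Wolfe step is computed
    along the current direction d_k)."""
    t = np.asarray(t0, float)
    d_old = s_old = None
    for k in range(1, max_iter + 1):
        h, v = steepest_descent_direction(JF(t))      # Step 1: (8) and (9)
        if abs(v) <= tol:                             # Step 2: t is (nearly)
            return t, k                               #   Q-critical, so stop
        d = h if d_old is None else direction(JF(t), h, d_old, s_old)  # Step 3
        step = wolfe_step(F, JF, t, d)                # step size fulfilling (18)
        s_old, d_old = step * d, d                    # Step 4: update and loop
        t = t + s_old
    return t, max_iter
```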

One of the sufficient conditions for establishing convergence is to estimate the norm of the search direction, as follows: if βk = 0 or f(tk, sk−1) < 0, then we have the following estimate

Otherwise, we have from (23) that (31)

Moreover, we get (32)

Note that, by (27) and (28), we have (33) and (34)

Now, applying (33) and (34) in (32), we have (35)

Now, using (35) in (31), we have

This can further be written as (36), where ϕk denotes the resulting coefficient.

Next, we estimate the modulus of ϕk by means of the so-called property (*), introduced by Gilbert and Nocedal [49] and recently extended to VOPs in [1]. Property (*) indicates that βk is small whenever sk−1 is small.

Property (*): Consider Algorithm 1 and assume that there exists a constant s.t (37) holds. Then, we say that property (*) holds if there exist p > 1 and λ > 0 s.t (38) is satisfied.

We have by (28) and (38) that (39) holds with c1 = p/γ.

The result below indicates that, under some mild assumptions, a CG method fulfilling property (*) converges.

Theorem 14 [1] Consider a CG algorithm for which Assumptions 9 and 11 are satisfied and, for all k: (a) βk ≥ 0; (b) dk is a Q-DD of F at tk; (c) ℓk satisfies (18); (d) property (*) holds. Then, lim infk→∞ ‖h(tk)‖ = 0.

Remark 15 It is evident that the standard Wolfe conditions (17) hold whenever the strong Wolfe conditions (18) are assumed. Thus, we assume only the strong Wolfe conditions for the subsequent results.

Theorem 16 Let Assumptions 9 and 11 hold. Consider Algorithm 1, and let the sequence {tk, dk} be generated using any of the following parameters in (21):

(i) the nonnegative PRP (PRP+) parameter;
(ii) the nonnegative HS (HS+) parameter;
(iii) the nonnegative HZ (HZ+) parameter;
(iv) the nonnegative DL (DL+) parameter.

If ℓk satisfies (18), then (40) holds.

Proof We prove that condition (d) of Theorem 14 holds for all the cases (i)-(iv). To show this, we recall from [39] that it is enough to show the existence of a nonnegative constant ϵ s.t (41) holds.

Note that, by (37), Lemma 7(b), and (29), we get (42)

(i) Beginning with the nonnegative PRP (PRP+) parameter, we have

Now, by Lemma 6(c)-(d) and (28), we get (43) where ‖sk−1‖ = ‖tktk−1‖.

Again, by (6) and (27), we estimate (44).

Thus, by (43) and (42), we obtain the desired estimate; therefore, we have property (*).

(ii) For the nonnegative HS (HS+) parameter, we have

Thus, by (43), (42), and (17), we obtain the desired estimate; therefore, we have property (*).

(iii) For the HZ+ parameter, following arguments similar to the previous estimates, we have

Thus, by (43), we have (45), and by Lemma 6(c)-(d), (46).

Observe that, by (17) and (24), we have

By Lemma 7 (b) and (37), we have (47)

Again, by (17) and (24), we have that ω4 < 0 for all k. Moreover, using the definitions of ω2 and ω4 and the fact that 0 < σ < 1, we get (48)

Again, since ω4 < 0 for all k, we have that ω2 ≤ ω2 − ω4. Thus, by (47) and (48), we get (49)

Therefore, from (45), (46), (47), (48), and (49), we obtain the desired estimate; hence, we have property (*).

(iv) For the nonnegative DL (DL+) parameter, we have

By (17) and (24), we have (50) for all k ≥ 1. Thus, (51)

Using (44) and (43) in (51), we get

We see by (42) that the desired bound follows. Therefore, property (*) holds, thereby completing the proof.

The lemma presented below corresponds to Lemma 6 in [38]. It plays an important role in proving the convergence of the proposed CG methods.

Lemma 17 Suppose Assumptions 9 and 11 hold. Consider Algorithm 1, where ℓk is obtained by (18). If there exists a constant s.t (52) holds and property (*) holds with βk ≥ 0, then dk ≠ 0 and ∑k≥1 ‖wk − wk−1‖² < ∞, where wk ≔ dk/‖dk‖.

Proof Notice that dk ≠ 0; otherwise, (24) would fail. Thus, wk is well-defined. Since dk is a Q-DD at tk and ℓk fulfills (18), the Zoutendijk condition (30) holds. Now, applying (52), Lemma 7(b), and (7), we get (53)

Next, we introduce suitable auxiliary quantities so that (23) is rewritten as

Observe that (54) and by applying (54), we have

Now, we have (55)

It follows from βk ≥ 0, (55), and the triangle inequality that (56)

Since sk−1 = tk − tk−1, it follows from (26) that ‖sk−1‖ is bounded. Moreover, it follows from (28), (42) and (44) that

Also, from (56), we have

By (28) and (53), we get

This completes the proof.

Next, we present the main convergence theorem, considering ϕk instead of βk. The proof follows directly from [38, Theorem 2, p. 905] and is thus omitted.

Theorem 18 Consider Algorithm 1 and suppose that Assumptions 9 and 11 hold. Then, lim infk→∞ ‖h(tk)‖ = 0.

4 Numerical results and discussions

In this section, we evaluate the performance of the proposed spectral-like Algorithm 1 by examining the following methods: PRP+, HS+, HZ, and DL+. We aim to gauge their efficiency and robustness on benchmark test problems sourced from various MOO research articles. The algorithms were coded in Fortran 90. In the context of MOO, we define e = (1, …, 1)T, Q = ℝm+ (the nonnegative orthant), and K = {e1, …, em}, the canonical basis of ℝm.

Below, we present a summary of the methods under consideration, including their initial parameter values. This encompasses both our proposed methods and those employed for comparison purposes:

  • SPRP+: a spectral-like PRP+ method given by Algorithm 1 with βk in (21);
  • SHS+: a spectral-like HS+ method given by Algorithm 1 with βk in (21);
• SHZ+: a spectral-like HZ method given by Algorithm 1 with βk in (21) and μ = 1.0;
  • SDL+: a spectral-like DL+ method given by Algorithm 1 with βk in (21) and α = 0.1.

Our findings are compared with the following CG methods:

• HZ: the Hager-Zhang CG algorithm given in [38] with μ = 1.0;
  • SP: a spectral CG method (SCG) given in [43].

An essential part of the algorithms is computing the steepest descent direction h(t). To achieve this, we utilize Algencan to solve problem (15); for more details, refer to [50]. In addition, the step size was selected using an LSE strategy that fulfills (18); the same LSE, with the same initial parameters, was employed for HZ, SP, and all the proposed methods. On the other hand, we have by Lemma 7 that t is a stationary point if and only if v(t) = 0, where v(t) is defined by (9). Consequently, each implemented method was run up to the point of convergence, declared when |v(tk)| is sufficiently small relative to the machine precision eps ≈ 2.22 × 10−16, or whenever the maximum number of iterations, #maxIt = 5000, is exceeded.

Details of the test problems under consideration are provided in Table 1. The first column gives the name of each problem; for instance, “Lov1” corresponds to the first problem introduced by A. Lovison in [51], and “SLCDT1” corresponds to the first problem given by Schütze, Laumanns, Coello Coello, Dellnitz, and Talbi in [55]. All the remaining problems follow the same pattern with their corresponding references. The second column gives the corresponding references, while the third column reports the number of variables “n” and the fourth the number “m” of objective functions. A box constraint of the form lb ≤ t0 ≤ ub was used for the starting points, with the lower bound lb in the fifth column and the upper bound ub in the last column.

In Table 2, the results of the proposed algorithms on the respective test problems are presented in comparison with the HZ and SP CG methods. All the methods ran successfully on every problem, reaching a critical point. The columns “Iter,” “FunEva,” and “GradEva” denote the median number of iterations, function evaluations, and gradient evaluations, respectively.

Table 2. Performance of the proposed spectral-like methods in comparison with HZ and SP.

https://doi.org/10.1371/journal.pone.0302441.t002

In a VOP setting, the primary objective is to approximate the Pareto frontier of the given problem. To achieve this, each implemented method underwent 200 runs for each problem, and Iter, FunEva, and GradEva were recorded for each run. The methods were initialized at uniformly distributed random points within each problem’s specified bounds, as detailed in Table 1. The comparison metrics employed here are Iter, FunEva, and GradEva.

To guarantee a fair and meaningful comparison of the algorithms, we employed the well-established performance profile of Dolan and Moré, as documented in [53]. The performance profile visually compares algorithmic performance across various metrics, offering a comprehensive assessment of efficiency and robustness. This tool enables us to concisely summarize the experimental data showcased in Tables 2 and 3, and it offers insights into the effectiveness of the proposed methods in comparison with the HZ and SP CG methods.
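
A minimal sketch of the performance profile of [53]: given a problems-by-solvers table of costs (Iter, FunEva, or GradEva), plot, for each solver, the fraction of problems solved within a factor τ of the best solver. The sample data and labels below are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(T, labels, tau_max=10.0):
    """Dolan-More performance profile [53]; T[i, s] is the cost of solver s
    on problem i (use np.inf for failures)."""
    ratios = T / T.min(axis=1, keepdims=True)          # performance ratios r_{i,s}
    taus = np.linspace(1.0, tau_max, 400)
    for s, name in enumerate(labels):
        rho = [(ratios[:, s] <= tau).mean() for tau in taus]
        plt.step(taus, rho, where="post", label=name)  # rho_s(tau)
    plt.xlabel("tau"); plt.ylabel("rho_s(tau)"); plt.legend(); plt.show()

# illustrative data: 3 problems x 2 solvers (median Iter counts)
performance_profile(np.array([[10.0, 12.0], [50.0, 40.0], [7.0, 7.0]]),
                    ["SPRP+", "HZ"])
```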

Based on the considered test problems, the performance profile for Iter is displayed in Fig 1. The SPRP+, SHZ+, SDL+, and SHS+ methods exhibit the best performance, in that order, outperforming the other compared methods, while the HZ and SP methods perform worst. Additionally, Fig 2 illustrates FunEva, showing that the SPRP+, SHZ+, SDL+, and SHS+ methods required fewer function evaluations than the HZ and SP methods. Finally, Fig 3 presents GradEva, where the SPRP+, SHZ+, SDL+, and SHS+ methods evaluated fewer gradients than the HZ and SP methods. This observation aligns with the fact that our methods satisfy the SDC independently of any LSE.

5 Concluding remarks

We introduced new spectral-like CG methods that achieve the sufficient descent property independently of any LSE and for arbitrary nonnegative CG parameters. Four well-known conjugate parameters, PRP+, HS+, HZ+, and DL+, were considered, and the resulting methods are referred to as SPRP+, SHS+, SHZ+, and SDL+, respectively. We established the convergence of the proposed methods using the Wolfe LSE. Our algorithms achieved this without regular restarts or convexity assumptions on the objective functions. The sequences generated by our algorithm identify points that satisfy the first-order necessary condition for Pareto optimality. We conducted computational experiments which demonstrate the implementation and efficiency of the methods, with promising performance. The proposed spectral-like methods, SPRP+, SHZ+, SDL+, and SHS+, exhibited the best performance, in that order, outperforming the HZ and SP methods in all the considered metrics: the number of iterations, function evaluations, and gradient evaluations.

A challenging task to consider in the future is the three-term CG method, which is of special interest for yielding sufficient descent of the search direction. This may prove challenging, considering that f as defined in (6) is only sublinear with respect to the second variable. Nevertheless, the work herein provides insight into the three-term method.

Acknowledgments

The first author acknowledges the support provided by the Petchra Pra Jom Klao Ph.D. scholarship of King Mongkut’s University of Technology Thonburi (KMUTT), No. 23/2565, and the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. Additionally, the authors thank the authors of [1, 39] for making their Fortran codes available: https://github.com/lfprudente/CGMOP.

References

  1. Lucambio Pérez L. R. and Prudente L. F. Nonlinear conjugate gradient methods for vector optimization. SIAM Journal on Optimization. 2018, 28(3):2690–2720.
  2. Hu Q. Zhu L. and Chen Y. Alternative extension of the Hager–Zhang conjugate gradient method for vector optimization. Computational Optimization and Applications. 2024, pages 1–34.
  3. Yahaya J. and Kumam P. Efficient hybrid conjugate gradient techniques for vector optimization. Results in Control and Optimization. 2024, 14:100348. Elsevier.
  4. Polak E. and Ribière G. Note sur la convergence de méthodes de directions conjuguées. Revue française d’informatique et de recherche opérationnelle. Série rouge. 1969, 3(16):35–43.
  5. Hestenes M. R. and Stiefel E. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards. 1952, 49(6):409–436.
  6. Dai Y. H. and Liao L. Z. New conjugacy conditions and related nonlinear conjugate gradient methods. Applied Mathematics and Optimization. 2001, 43:87–101.
  7. Hager W. W. and Zhang H. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM Journal on Optimization. 2005, 16(1):170–192.
  8. Hager W. W. and Zhang H. A survey of nonlinear conjugate gradient methods. Pacific Journal of Optimization. 2006, 2(1):35–58.
  9. Babaie-Kafaki S. A survey on the Dai–Liao family of nonlinear conjugate gradient methods. RAIRO-Operations Research. 2023, 57(1):43–58.
  10. Fletcher R. and Reeves C. M. Function minimization by conjugate gradients. The Computer Journal. 1964, 7(2):149–154.
  11. Fletcher R. Unconstrained optimization. Practical Methods of Optimization, Vol. 1, 1980.
  12. Dai Y. H. and Yuan Y. X. A nonlinear conjugate gradient method with a strong global convergence property. SIAM Journal on Optimization. 1999, 10(1):177–182.
  13. Liu Y. and Storey C. Efficient generalized conjugate gradient algorithms, part 1: theory. Journal of Optimization Theory and Applications. 1991, 69(1):129–137.
  14. Barzilai J. and Borwein J. M. Two-point step size gradient methods. IMA Journal of Numerical Analysis. 1988, 8(1):141–148.
  15. Birgin E. G. and Martínez J. M. A spectral conjugate gradient method for unconstrained optimization. Applied Mathematics and Optimization. 2001, 43:117–128.
  16. Jian J. Liu P. Jiang X. and Zhang C. Two classes of spectral conjugate gradient methods for unconstrained optimizations. Journal of Applied Mathematics and Computing. 2022, 68(6):4435–4456.
  17. Mrad H. and Fakhari S. M. Optimization of unconstrained problems using a developed algorithm of spectral conjugate gradient method calculation. Mathematics and Computers in Simulation. 2024, 215:282–290.
  18. Salihu N. Kumam P. Awwal A. M. Sulaiman I. M. and Seangwattana T. The global convergence of spectral RMIL conjugate gradient method for unconstrained optimization with applications to robotic model and image recovery. PLOS ONE. 2023, 18(3):e0281250. pmid:36928212
  19. Shao H. Guo H. Wu X. and Liu P. Two families of self-adjusting spectral hybrid DL conjugate gradient methods and applications in image denoising. Applied Mathematical Modelling. 2023, 118:393–411.
  20. De P. Ghosh J. B. and Wells C. E. On the minimization of completion time variance with a bicriteria extension. Operations Research. 1992, 40(6):1148–1155.
  21. Fliege J. and Vicente L. N. Multicriteria approach to bilevel optimization. Journal of Optimization Theory and Applications. 2006, 131:209–225.
  22. Gravel M. Martel J. M. Nadeau R. Price W. and Tremblay R. A multicriterion view of optimal resource allocation in job-shop production. European Journal of Operational Research. 1992, 61(1-2):230–244.
  23. Hong T. S. Craft D. L. Carlsson F. and Bortfeld T. R. Multicriteria optimization in intensity-modulated radiation therapy treatment planning for locally advanced cancer of the pancreatic head. International Journal of Radiation Oncology*Biology*Physics. 2008, 72(4):1208–1214. pmid:18954714
  24. Jahn J. Kirsch A. and Wagner C. Optimization of rod antennas of mobile phones. Mathematical Methods of Operations Research. 2004, 59:37–51.
  25. Leschine T. M. Wallenius H. and Verdini W. A. Interactive multiobjective analysis and assimilative capacity-based ocean disposal decisions. European Journal of Operational Research. 1992, 56(2):278–289.
  26. Stewart T. Bandte O. Braun H. Chakraborti N. Ehrgott M. Göbelt M. Jin Y. Nakayama H. Poles S. and Di Stefano D. Real-world applications of multiobjective optimization. Multiobjective Optimization: Interactive and Evolutionary Approaches. 2008, pages 285–327.
  27. Jahn J. Scalarization in vector optimization. Mathematical Programming. 1984, 29:203–218.
  28. Luc D. T. Scalarization of vector optimization problems. Journal of Optimization Theory and Applications. 1987, 55:85–102.
  29. Soleimani B. and Tammer C. Concepts for approximate solutions of vector optimization problems with variable order structures. Vietnam Journal of Mathematics. 2014, 42:543–566.
  30. Bonnel H. Iusem A. N. and Svaiter B. F. Proximal methods in vector optimization. SIAM Journal on Optimization. 2005, 15(4):953–970.
  31. Drummond L. M. G. and Svaiter B. F. A steepest descent method for vector optimization. Journal of Computational and Applied Mathematics. 2005, 175(2):395–414.
  32. Fukuda E. H. and Drummond L. M. G. A survey on multiobjective descent methods. Pesquisa Operacional. 2014, 34:585–620.
  33. Ansary M. A. and Panda G. A modified quasi-Newton method for vector optimization problem. Optimization. 2015, 64(11):2289–2306.
  34. Bello Cruz J. A subgradient method for vector optimization problems. SIAM Journal on Optimization. 2013, 23(4):2169–2182.
  35. Fliege J. Drummond L. M. G. and Svaiter B. F. Newton’s method for multiobjective optimization. SIAM Journal on Optimization. 2009, 20(2):602–626.
  36. Qu S. Goh M. and Chan F. T. Quasi-Newton methods for solving multiobjective optimization. Operations Research Letters. 2011, 39(5):397–399.
  37. Dai Y. H. and Yuan Y. X. Convergence properties of the Fletcher-Reeves method. IMA Journal of Numerical Analysis. 1996, 16(2):155–164.
  38. Gonçalves M. L. and Prudente L. On the extension of the Hager–Zhang conjugate gradient method for vector optimization. Computational Optimization and Applications. 2020, 76(3):889–916.
  39. Gonçalves M. L. Lima F. and Prudente L. A study of Liu-Storey conjugate gradient methods for vector optimization. Applied Mathematics and Computation. 2022, 425:127099.
  40. Yahaya J. Arzuka I. and Isyaku M. Descent modified conjugate gradient methods for vector optimization problems. Bangmod International Journal of Mathematical and Computational Science. 2023, 9:72–91.
  41. Andrei N. New accelerated conjugate gradient algorithms as a modification of Dai–Yuan’s computational scheme for unconstrained optimization. Journal of Computational and Applied Mathematics. 2010, 234(12):3397–3410.
  42. Jian J. Chen Q. Jiang X. Zeng Y. and Yin J. A new spectral conjugate gradient method for large-scale unconstrained optimization. Optimization Methods and Software. 2017, 32(3):503–515.
  43. He Q. R. Chen C. R. and Li S. J. Spectral conjugate gradient methods for vector optimization problems. Computational Optimization and Applications. 2023, 86:457–489.
  44. Cheng W. A two-term PRP-based descent method. Numerical Functional Analysis and Optimization. 2007, 28(11-12):1217–1230.
  45. Drummond L. M. G. and Iusem A. N. A projected gradient method for vector optimization problems. Computational Optimization and Applications. 2004, 28(1):5–29.
  46. Fliege J. and Svaiter B. F. Steepest descent methods for multicriteria optimization. Mathematical Methods of Operations Research. 2000, 51(3):479–494.
  47. Luc D. T. Theory of Vector Optimization. Springer, 1989.
  48. Lucambio Pérez L. R. and Prudente L. F. A Wolfe line search algorithm for vector optimization. ACM Transactions on Mathematical Software (TOMS). 2019, 45(4):1–23.
  49. Gilbert J. C. and Nocedal J. Global convergence properties of conjugate gradient methods for optimization. SIAM Journal on Optimization. 1992, 2(1):21–42.
  50. Birgin E. G. and Martínez J. M. Practical Augmented Lagrangian Methods for Constrained Optimization. SIAM, 2014.
  51. Lovison A. Singular continuation: Generating piecewise linear approximations to Pareto sets via global analysis. SIAM Journal on Optimization. 2011, 21(2):463–490.
  52. Schütze O. Lara A. and Coello C. C. The directed search method for unconstrained multi-objective optimization problems. Proceedings of the EVOLVE–A Bridge Between Probability, Set Oriented Numerics, and Evolutionary Computation. 2011, pages 1–4.
  53. Dolan E. D. and Moré J. J. Benchmarking optimization software with performance profiles. Mathematical Programming. 2002, 91:201–213.
  54. Huband S. Hingston P. Barone L. and While L. A review of multiobjective test problems and a scalable test problem toolkit. IEEE Transactions on Evolutionary Computation. 2006, 10(5):477–506.
  55. Schütze O. Laumanns M. Coello Coello C. A. Dellnitz M. and Talbi E.-G. Convergence of stochastic search algorithms to finite size Pareto set approximations. Journal of Global Optimization. 2008, 41:559–577.
  56. Miglierina E. Molho E. and Recchioni M. C. Box-constrained multi-objective optimization: a gradient-like method without “a priori” scalarization. European Journal of Operational Research. 2008, 188(3):662–682.
  57. Hillermeier C. Generalized homotopy approach to multiobjective optimization. Journal of Optimization Theory and Applications. 2001, 110(3):557–583.
  58. Kim I. Y. and De Weck O. L. Adaptive weighted-sum method for bi-objective optimization: Pareto front generation. Structural and Multidisciplinary Optimization. 2005, 29:149–158.
  59. Toint P. L. Test problems for partially separable optimization and results for the routine PSPMIN. Technical Report, Department of Mathematics, University of Namur, Belgium, 1983.
  60. Moré J. J. Garbow B. S. and Hillstrom K. E. Testing unconstrained optimization software. ACM Transactions on Mathematical Software (TOMS). 1981, 7(1):17–41.
  61. Preuss M. Naujoks B. and Rudolph G. Pareto set and EMOA behavior for simple multimodal multiobjective functions. In PPSN, pages 513–522. Springer, 2006.