
A spectral Fletcher-Reeves conjugate gradient method with integrated strategy for unconstrained optimization and portfolio selection

  • Nasiru Salihu,

    Roles Conceptualization, Methodology, Writing – original draft

    Affiliation Department of Mathematics, Faculty of Sciences, Modibbo Adama University, Yola, Nigeria

  • Sulaiman M. Ibrahim,

    Roles Formal analysis, Resources, Software, Writing – review & editing

    Affiliations School of Quantitative Sciences, Universiti Utara Malaysia, Sintok, Kedah, Malaysia, Faculty of Education and Arts, Sohar University, Sohar, Oman

  • P. Kaelo,

    Roles Formal analysis, Resources, Validation, Writing – original draft

    Affiliation Department of Mathematics, University of Botswana, Private Bag UB00704, Gaborone, Botswana

  • Issam A.R. Moghrabi,

    Roles Funding acquisition, Supervision, Validation, Writing – review & editing

    i.moghrabi@ktech.edu.kw

    Affiliation Information Systems and Technology Department, Kuwait Technical College, Abu-Halifa, Kuwait

  • Elissa Nadia Madi

    Roles Supervision, Validation, Visualization, Writing – review & editing

    Affiliation Faculty of Informatics and Computing, Universiti Sultan Zainal Abidin, Terengganu, Malaysia

Abstract

The spectral conjugate gradient (SCG) technique is highly efficient for large-scale unconstrained optimization problems. This paper presents a structured SCG approach that combines the quasi-Newton direction with an extended conjugacy condition. Drawing inspiration from the Fletcher-Reeves conjugate gradient (CG) parameter, the method is tailored to improve the general structure of the CG approach. We rigorously establish the global convergence of the algorithm for general functions under strong Wolfe line search criteria. Numerical experiments on a set of unconstrained optimization problems highlight the superiority of the new algorithm over certain CG methods with similar characteristics. In the context of portfolio selection, the proposed method is extended to address the problem of stock allocation, ensuring optimized returns while minimizing risk. Empirical evaluations confirm the method's effectiveness, showing significant improvements in computational efficiency and optimization outcomes.

1 Introduction

In this study, we explore the theoretical analysis and numerical performance of a nonlinear conjugate gradient (CG) algorithm for solving minimization problems of the form:

$\min_{x \in \mathbb{R}^n} f(x),$   (1.1)

where $f : \mathbb{R}^n \to \mathbb{R}$ is a smooth function with $g(x) = \nabla f(x)$ as its gradient [1]. The CG algorithm is one of the most widely used line search procedures for solving (1.1) due to its favorable theoretical properties and robust computational performance on large-scale minimization functions [2-4]. Recently, numerous studies have extended CG methods to real-world application problems, such as portfolio selection in the context of portfolio optimization. Portfolio optimization is a critical area of research in finance, aiming to balance risk and return while adhering to various constraints. In this domain, the mathematical formulation of portfolio optimization often leads to large-scale unconstrained optimization problems, which require efficient numerical methods [5,6].

The line search is usually the most important component of the CG iterative scheme: beginning with a starting point $x_0 \in \mathbb{R}^n$, the algorithm computes a sequence of iterates $\{x_k\}$ via

$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots,$   (1.2)

with the step-size $\alpha_k > 0$ computed via a suitable line search procedure, such as the standard Wolfe strategy, which requires $\alpha_k$ to satisfy

$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k$   (1.3)

and

$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k.$   (1.4)

The inequality (1.4) is the curvature condition, which assures a sufficient increase of the directional derivative $g(x)^T d_k$ along $d_k$. However, a step-size satisfying (1.3) and (1.4) might not be close to a minimizer of the cost function along $d_k$, potentially affecting the overall convergence results. To avoid such instances, a modified curvature condition was therefore defined as

$|g(x_k + \alpha_k d_k)^T d_k| \le -\sigma g_k^T d_k,$   (1.5)

which, when combined with (1.3), produces what is known as the strong Wolfe (SW) line search, with $0 < \delta < \sigma < 1$, where $g_k = g(x_k)$.
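As an illustration of how (1.3) and (1.5) are enforced in practice, the following MATLAB sketch implements a simple bracketing-and-bisection strong Wolfe line search. This is a minimal sketch under our own naming (f_handle is assumed to return the function value and gradient at a point); the authors' actual line search routine is not shown in the paper.

```matlab
% Minimal strong Wolfe line search sketch (bracketing + bisection).
% delta and sigma are the parameters of (1.3) and (1.5); illustrative only.
function alpha = strong_wolfe(f_handle, x, d, delta, sigma)
    [f0, g0] = f_handle(x);
    slope0 = g0' * d;                      % must be negative for a descent d
    lo = 0; hi = Inf; alpha = 1;           % initial trial step
    for iter = 1:50
        [fa, ga] = f_handle(x + alpha * d);
        slope = ga' * d;
        if fa > f0 + delta * alpha * slope0    % sufficient decrease (1.3) fails
            hi = alpha;
        elseif slope < sigma * slope0          % slope still too negative: step too short
            lo = alpha;
        elseif slope > -sigma * slope0         % slope too positive: overshot
            hi = alpha;
        else
            return;                            % both (1.3) and (1.5) hold
        end
        if isinf(hi)
            alpha = 2 * lo;                    % expand the bracket
        else
            alpha = (lo + hi) / 2;             % bisect the bracket
        end
    end
end
```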

The second key element of the CG algorithm is the search direction $d_k$, which is computed as

$d_0 = -g_0, \qquad d_k = -g_k + \beta_k d_{k-1}, \quad k \ge 1,$   (1.6)

with the scalar parameter $\beta_k$ being the CG coefficient that distinguishes between different CG formulas [7]. Some of the famous and classical CG schemes include Fletcher and Reeves (FR) [8], Polak, Ribière and Polyak (PRP) [9,10], Hestenes and Stiefel (HS) [11], and Dai and Yuan (DY) [12], with the following formulas for $\beta_k$:

$\beta_k^{FR} = \dfrac{\|g_k\|^2}{\|g_{k-1}\|^2}, \quad \beta_k^{PRP} = \dfrac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}, \quad \beta_k^{HS} = \dfrac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, \quad \beta_k^{DY} = \dfrac{\|g_k\|^2}{d_{k-1}^T y_{k-1}},$   (1.7)

where $y_{k-1} = g_k - g_{k-1}$ and $\|\cdot\|$ denotes the Euclidean norm [13]. It is generally believed that the classical PRP and HS CG algorithms are very efficient in practical computations, owing to the restart feature of these methods when jamming occurs. However, these methods are not guaranteed to converge for general functions, and the PRP formula fails to guarantee a descent direction [14].
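For concreteness, the four classical coefficients in (1.7) can be written as MATLAB anonymous functions. This is an illustrative sketch with our own naming (a uniform argument list is used so the handles are interchangeable), not code from the paper.

```matlab
% Classical CG coefficients of (1.7); g_new = g_k, g_old = g_{k-1},
% d_old = d_{k-1}, and y_{k-1} = g_new - g_old.
beta_FR  = @(g_new, g_old, d_old) (g_new' * g_new) / (g_old' * g_old);
beta_PRP = @(g_new, g_old, d_old) (g_new' * (g_new - g_old)) / (g_old' * g_old);
beta_HS  = @(g_new, g_old, d_old) (g_new' * (g_new - g_old)) / (d_old' * (g_new - g_old));
beta_DY  = @(g_new, g_old, d_old) (g_new' * g_new) / (d_old' * (g_new - g_old));
```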

Thus, the standard conjugacy condition $d_k^T y_{k-1} = 0$ is often used in the convergence analysis of CG algorithms. The CG methods that make use of this condition depend largely on computing the step-size $\alpha_k$ exactly, and this requirement is costly for large-scale models [15]. Therefore, Perry [16] incorporated second-order information into $\beta_k$ to implement many CG schemes that require (1.3) and (1.5) in their convergence. This is made possible by the fact that if the present iterate is in the vicinity of the local minimizer, and the cost function behaves closely like a quadratic function, then the quasi-Newton direction is the suitable search direction to follow [17]. Later, Dai and Liao [18] considered the following general conjugacy condition

$d_k^T y_{k-1} = -t\, g_k^T s_{k-1}, \qquad t \ge 0,$   (1.8)

from which they obtained a new formula for the CG method as

$\beta_k^{DL} = \dfrac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}} - t\,\dfrac{g_k^T s_{k-1}}{d_{k-1}^T y_{k-1}},$   (1.9)

where $s_{k-1} = x_k - x_{k-1}$. To prove the convergence results for general functions, they specified (1.9) as

$\beta_k^{DL+} = \max\left\{\dfrac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}},\, 0\right\} - t\,\dfrac{g_k^T s_{k-1}}{d_{k-1}^T y_{k-1}}.$   (1.10)

By considering the technique in [18], the authors of [19] introduced an extension of the PRP method in the form of

$\beta_k = \dfrac{g_k^T y_{k-1}}{\|g_{k-1}\|^2} - t\,\dfrac{g_k^T s_{k-1}}{\|g_{k-1}\|^2}.$   (1.11)

It is obvious that when t = 0, (1.9) and (1.11) reduce to the classical HS and PRP CG methods defined in (1.7). Similarly, taking the value of t as $2\|y_{k-1}\|^2 / (s_{k-1}^T y_{k-1})$ and $\|y_{k-1}\|^2 / (s_{k-1}^T y_{k-1})$, respectively, Hager and Zhang [20] and Andrei [21] extended the above idea to construct accelerated versions of [18] as

$\beta_k^{HZ} = \dfrac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}} - 2\,\dfrac{\|y_{k-1}\|^2}{s_{k-1}^T y_{k-1}}\,\dfrac{g_k^T s_{k-1}}{d_{k-1}^T y_{k-1}}$   (1.12)

and

$\beta_k = \dfrac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}} - \dfrac{\|y_{k-1}\|^2}{s_{k-1}^T y_{k-1}}\,\dfrac{g_k^T s_{k-1}}{d_{k-1}^T y_{k-1}},$   (1.13)

respectively. The formula (1.12) satisfies the sufficient descent condition, and is globally convergent for general functions under the restriction

$\beta_k = \max\{\beta_k^{HZ},\, \eta_k\}, \qquad \eta_k = \dfrac{-1}{\|d_{k-1}\|\min\{\eta,\, \|g_{k-1}\|\}}, \quad \eta > 0.$   (1.14)

Experimental results have demonstrated the efficiency and robustness of the method.
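To make the role of t concrete, the Dai-Liao family (1.9) and the two choices of t above can be expressed in a few lines of MATLAB; the names are our own illustration.

```matlab
% Dai-Liao coefficient (1.9) and the two t-choices discussed above;
% y = g_new - g_old, s = x_new - x_old, d_old = d_{k-1}.
beta_DL = @(g_new, y, s, d_old, t) ...
    (g_new' * y - t * (g_new' * s)) / (d_old' * y);
t_HZ = @(y, s) 2 * (y' * y) / (s' * y);   % Hager-Zhang choice, giving (1.12)
t_AN = @(y, s) (y' * y) / (s' * y);       % Andrei choice, giving (1.13)
```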

Spectral CG directions are another important modification of the CG directions, aiming to improve the theoretical features of the classical CG formulas as well as their numerical performance. The techniques are based on studies presented by Barzilai and Borwein [22] and Raydan [23]. The method has the structure

$d_k = -\theta_k g_k + \beta_k d_{k-1},$

where $\theta_k$ denotes the spectral parameter. For notable contributions on the subject, the reader is referred to [24-29]. Applying this technique, Liu et al. [30] constructed a spectral parameter and examined its convergence by assuming that the spectral parameter is bounded, further illustrating the computational efficiency on a set of minimization functions. However, these spectral parameters consider neither the conjugacy condition given by (1.8) nor the quasi-Newton technique in their formulation, features that make a CG method possess a quadratic convergence property. Following the approach suggested in [21], Faramarzi and Amini [31] constructed a double truncated structured spectral parameter using some features of the famous quasi-Newton procedure and the standard secant equation of Jian et al. [32]. To guarantee that the direction $d_k$ defined in the algorithm possesses the sufficient descent property, specifically

$g_k^T d_k \le -c\,\|g_k\|^2 \quad \text{for some } c > 0,$   (1.15)

they used the truncated double bounded property, that is, the spectral parameter is bounded below and above. Using the above technique with a modified secant equation, [33] also suggested a similar spectral structure. These modifications show that the spectral methods in [31-33] all possess (1.15) and are globally convergent under their respective lower and upper bounds.

Inspired by the above discussion, and considering the extended conjugacy condition together with the excellent theoretical features of quasi-Newton schemes, this study suggests a spectral FR CG algorithm for solving (1.1), particularly when the problems are of large dimensions. The proposed search direction satisfies the descent condition and converges globally under the strong Wolfe conditions. Using a set of benchmark functions, our computational experiments demonstrate that the method is very promising compared to some modified CG algorithms. The next section details every formulation step of our proposed spectral formula with its algorithm. In Sect 3, the convergence results are established under suitable assumptions, and we demonstrate the numerical performance of the proposed method on benchmark functions, with an application to portfolio selection, in Sect 4. The final section presents our conclusions.

2 Spectral algorithm and motivation

To provide a better CG algorithm, the spectral parameter can be incorporated into the structure of the search direction in (1.6) such that it satisfies (1.8). To achieve this, consider the directions

$d_k = -\theta_k g_k + \beta_k^{FR} d_{k-1}$   (2.1)

and

$d_k = -B_k^{-1} g_k,$   (2.2)

with $B_k$ being an approximation of the Hessian matrix $\nabla^2 f(x_k)$. The property that ensures quadratic convergence of a CG method, assuming $B_k^{-1}$ exists, is characterized by the secant equation

$B_k s_{k-1} = y_{k-1}.$   (2.3)

Equating (2.1) with (2.2) gives

$-B_k^{-1} g_k = -\theta_k g_k + \beta_k^{FR} d_{k-1}.$

Multiplying the above equation by $s_{k-1}^T B_k$, and using (2.3), we have

$-s_{k-1}^T g_k = -\theta_k\, g_k^T y_{k-1} + \beta_k^{FR}\, d_{k-1}^T y_{k-1},$

which gives, simplifying in terms of $\theta_k$,

$\theta_k = \dfrac{s_{k-1}^T g_k + \beta_k^{FR}\, d_{k-1}^T y_{k-1}}{g_k^T y_{k-1}}.$   (2.4)

Similarly, multiplying (2.1) by $y_{k-1}^T$ yields

$d_k^T y_{k-1} = -\theta_k\, g_k^T y_{k-1} + \beta_k^{FR}\, d_{k-1}^T y_{k-1}.$

Equating with (1.8) gives

$-t\, g_k^T s_{k-1} = -\theta_k\, g_k^T y_{k-1} + \beta_k^{FR}\, d_{k-1}^T y_{k-1}.$

Re-arranging implies

$\theta_k = \dfrac{t\, g_k^T s_{k-1} + \beta_k^{FR}\, d_{k-1}^T y_{k-1}}{g_k^T y_{k-1}}.$   (2.5)

Remark 2.1. It follows from (2.5) that if t = 1, then $\theta_k$ coincides with (2.4), which implies that the direction inherits the excellent convergence condition of the quasi-Newton algorithm and further satisfies the generalized Dai-Liao conjugacy property.

Therefore, to ensure the sufficient descent condition, the optimal spectral parameter of the proposed method (DQSFR) can be determined as

(2.6)

The DQSFR algorithm is given below.

Algorithm 1.

Step 1: Given $x_0 \in \mathbb{R}^n$, $\epsilon > 0$, and $0 < \delta < \sigma < 1$, set k = 0 and $d_0 = -g_0$.

Step 2: Check: if $\|g_k\| \le \epsilon$, then terminate. Else, continue.

Step 3: Select $\alpha_k$ along the direction $d_k$ such that (1.3) and (1.5) are satisfied.

Step 4: Let $x_{k+1} = x_k + \alpha_k d_k$. Compute $\theta_k$ by (2.6) and $\beta_k^{FR}$ from (1.7). Set k = k + 1.

Step 5: Compute $d_k$ from (2.1).

Step 6: Go to Step 2 with the next k.
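The steps above translate into a short MATLAB driver. Because the exact spectral parameter (2.6) is specific to the authors' derivation, it is left here as a function-handle argument theta_fun (our assumption); everything else follows (1.7), (1.3), (1.5) and (2.1). This is a structural sketch, not the authors' implementation, and it reuses the strong_wolfe sketch given earlier.

```matlab
% Structural sketch of Algorithm 1 (DQSFR); theta_fun supplies the
% spectral parameter of (2.6), which is not reproduced here.
function x = dqsfr_sketch(f_handle, x0, theta_fun, delta, sigma, tol, kmax)
    x = x0;
    [~, g] = f_handle(x);
    d = -g;                                    % Step 1: d_0 = -g_0
    for k = 0:kmax
        if norm(g) <= tol, return; end         % Step 2: stopping test
        alpha = strong_wolfe(f_handle, x, d, delta, sigma);   % Step 3
        x = x + alpha * d;                     % Step 4: x_{k+1} = x_k + alpha_k d_k
        [~, g_new] = f_handle(x);
        beta  = (g_new' * g_new) / (g' * g);   % beta_k^{FR} from (1.7)
        theta = theta_fun(g_new, g, d, alpha); % spectral parameter, cf. (2.6)
        d = -theta * g_new + beta * d;         % Step 5: direction (2.1)
        g = g_new;                             % Step 6: next iteration
    end
end
```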

Next we show that the spectral method satisfies the sufficient descent condition.

Theorem 2.2. Suppose $\theta_k$ follows from (2.6), with the DQSFR Algorithm generating the sequences $\{x_k\}$ and $\{d_k\}$. Then there exists a constant ρ > 0 satisfying

$g_k^T d_k \le -\rho\, \|g_k\|^2.$   (2.7)

Proof. Pre-multiplying (2.1) by $g_k^T$, we obtain

$g_k^T d_k = -\theta_k\, \|g_k\|^2 + \beta_k^{FR}\, g_k^T d_{k-1}.$   (2.8)

Substituting $\beta_k^{FR}$ from (1.7) into (2.8), we get

$g_k^T d_k = -\theta_k\, \|g_k\|^2 + \dfrac{\|g_k\|^2}{\|g_{k-1}\|^2}\, g_k^T d_{k-1}.$

From the definition of $\theta_k$ in (2.6), we therefore have

(2.9)

Al-Baali [34] showed that (2.9) satisfies (2.7) using the induction hypothesis obtained from [35, Theorem 4.2]

(2.10)

Using (2.9), this implies

Applying (1.5), we have

Consequently, using (2.10) on the last term of the above inequality yields

Hence, denoting the resulting bound by ρ, we get

This implies that the descent condition (2.7) holds when σ ∈ (0, 0.4], which motivated the derivation of the DQSFR formula in (2.6). ∎

3 Convergence analysis

The convergence proof of the SCG method relies on certain assumptions, such as the Lipschitz continuity of the gradient of the target function. While these assumptions are standard in optimization theory, they may not always hold in practical applications, particularly for non-smooth functions encountered in real-world problems. This limitation can affect the theoretical guarantees of convergence and may restrict the method’s direct applicability in such cases.

In this study, we consider the following assumptions that are crucial in achieving the convergence results of the proposed method.

Assumption 3.1.

  1. Let f(x) represent a function and define the level set $\Omega = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$, where $x_0$ is the starting point. Then, f(x) is bounded below on $\Omega$ and there exists a positive constant b satisfying $\|x\| \le b$ for all $x \in \Omega$.
  2. The function f is a smooth function in some neighbourhood Γ of $\Omega$, and its gradient is Lipschitz continuous. This implies, for some L, we have
    $\|g(x) - g(y)\| \le L\, \|x - y\|$ for all $x, y \in \Gamma$,
    where L denotes a positive constant.
    Based on Assumption 3.1, there is a positive constant γ satisfying $\|g(x)\| \le \gamma$ for all $x \in \Omega$.

Using the sufficient descent criterion (2.7) and the line search technique (1.3), it is clear that $\{f(x_k)\}$ decreases monotonically. Owing to the fact that the function f is bounded from below, there is a value f* satisfying

$\lim_{k \to \infty} f(x_k) = f^*,$   (3.1)

where f* is a constant. Thus, Lemma 3.2, which follows from [36], is vital in the convergence analysis of the new method.

Lemma 3.2. Let the sequence $\{x_k\}$ be generated by the proposed DQSFR Algorithm, where $d_k$ is a descent spectral direction and $\alpha_k$ is chosen to fulfill the weak Wolfe conditions. Then

$\sum_{k \ge 0} \dfrac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty.$

Note that an $\alpha_k$ that satisfies (1.5) also satisfies (1.4). Hence, under the strong Wolfe line search, Lemma 3.2 also holds true.

Proof. From Assumption 3.1 and the curvature condition (1.4), we get

$(\sigma - 1)\, g_k^T d_k \le (g_{k+1} - g_k)^T d_k \le L\, \alpha_k\, \|d_k\|^2.$

Then, we have

$\alpha_k \ge \dfrac{(\sigma - 1)\, g_k^T d_k}{L\, \|d_k\|^2}.$

Using relation (1.3), we have

$f(x_k) - f(x_{k+1}) \ge -\delta\, \alpha_k\, g_k^T d_k \ge \dfrac{\delta (1 - \sigma)}{L}\, \dfrac{(g_k^T d_k)^2}{\|d_k\|^2}.$

From the above inequality and Assumption 3.1, summing over k and using the fact that f is bounded below yields the result. ∎

Lemma 3.3. Let the sequences $\{x_k\}$ and $\{d_k\}$ be generated via Algorithm 1, where $\theta_k$ follows from (2.6), and suppose Assumption 3.1 holds true. Suppose the DQSFR direction satisfies the descent condition and the step size is chosen to fulfill the SW conditions (1.3) and (1.5). Then, either

$\liminf_{k \to \infty} \|g_k\| = 0$   (3.2)

or

$\sum_{k \ge 0} \dfrac{\|g_k\|^4}{\|d_k\|^2} < \infty$   (3.3)

holds.

Proof. Since σ ∈ (0, 0.4], the direction $d_k$ generated by Algorithm 1 is a descent direction for all k, i.e., $g_k^T d_k < 0$. Applying (2.1), we get

So, we obtain

(3.4)

Also, since the direction $d_k$ is descent, by (2.1) we further obtain that

This implies,

(3.5)

Furthermore, taking into account the highest value that σ would be able to achieve in (1.5) when equality holds in (2.9), we get that (1.5) can be written as

Combining with (3.5) gives

(3.6)

Recall that for $a, b \in \mathbb{R}$, the inequality $(a + b)^2 \le 2a^2 + 2b^2$ always holds. If we apply this to relation (3.6), then

Denoting the resulting constant accordingly, the above inequality can be expressed as

(3.7)

In a similar process, applying (3.4) will produce

That is,

This and (3.7) imply that

(3.8)

But from Lemma 3.2, we know that

Now, if (3.2) is not true, then

Therefore, from (3.8), we get

It is evident that the above, together with Lemma 3.2, demonstrates that the required outcome is attained, thereby validating (3.3). ∎

Corollary 3.4. Assume that

$\sum_{k \ge 0} \dfrac{1}{\|d_k\|^2} = \infty$   (3.9)

holds, using the conclusion of Lemma 3.3. Then this implies $\liminf_{k \to \infty} \|g_k\| = 0$.

Proof. Assume there exists a constant ε > 0 satisfying $\|g_k\| \ge \varepsilon$ for all k ≥ 1. Then, Lemma 3.3 demonstrates that (3.3) is satisfied and thus, we obtain

which contradicts (3.9) and thus, (3.2) must hold. ∎

Lemma 3.5. Let $\{d_k\}$ follow from the proposed Algorithm 1. Suppose Assumption 3.1 is true and there is a constant ϱ > 0 satisfying $\|g_k\| \ge \varrho$ for all k; then $\{d_k\}$ is bounded.

Theorem 3.6. Suppose that the sequence $\{x_k\}$ is generated by the proposed DQSFR Algorithm, where $d_k$ is a descent spectral direction with a selected $\alpha_k$ fulfilling the SW criteria. Then the sequence $\{g_k\}$ generated by the DQSFR Algorithm satisfies

$\liminf_{k \to \infty} \|g_k\| = 0.$   (3.10)

Proof. Suppose, by contradiction, that conclusion (3.10) is not true. Then there exists ε > 0 such that $\|g_k\| \ge \varepsilon$ for all k ≥ 0. From (1.7) and Assumption 3.1, we get

Now, applying (2.6) together with the above bounds implies

Combining the above inequalities with (2.1), we conclude that

From the relation, we have

which, by Corollary 3.4, contradicts the assumption and completes the proof. ∎

Remark 3.7. Based on the analysis in [37,38], it is, as a matter of fact, not too restrictive to select the parameter ϱ.

4 Numerical results

In this section, we demonstrate the computational behavior of the DQSFR Algorithm and compare it with other notable CG methods to assess its computational efficiency. All the algorithms chosen are globally convergent under their respective Wolfe line searches, and performance is compared based on the number of iterations, function evaluations, and CPU time. The obtained numerical results support the theoretical results established in the previous section. For the numerical evaluations, the study considered a set of 146 small and large-scale unconstrained optimization test functions (see Tables 1-5) from [35] and [39], with dimensions ranging from 2 to 100,000. The algorithms used for the performance investigation are as follows:

  • DQSFR: the proposed Algorithm 1, with σ = 0.5 and δ = 0.0001 in (1.3) and (1.5).
  • FR: the classical FR algorithm, where the direction follows from (1.6) with $\beta_k^{FR}$ as in (1.7), with the Wolfe parameters σ = 0.5 and δ = 0.0001.
  • CG_DESCENT: algorithm of Hager and Zhang [40], with $d_k$ computed as in (1.6), where $\beta_k$ follows from (1.14), with the Wolfe parameters σ = 0.5 and δ = 0.001.
  • MSFR: algorithm of Du and Chen [41], where $d_k$ follows from (1.6) and $\beta_k^{FR}$ is as defined in (1.7), with the Wolfe parameters σ = 0.01 and δ = 0.001.
  • MSTCG: algorithm of Amini and Faramarzi [42], with ψ = 0.2 and the Wolfe parameters σ = 0.1 and δ = 0.01.
  • TSCG: algorithm of Faramarzi and Amini [31], with the Wolfe parameters σ = 0.1 and δ = 0.01.
  • MF: spectral algorithm of Mrad and Fakhari [43], whose direction is computed as in (1.6), where $\beta_k$ follows from [43], with the Wolfe parameters σ = 0.6 and δ = 0.01.
Table 1. Performance Results based on NOI, NOF, and CPUT.

https://doi.org/10.1371/journal.pone.0313772.t001

Table 2. Performance Results based on NOI, NOF, and CPUT.

https://doi.org/10.1371/journal.pone.0313772.t002

Table 3. Performance Results based on NOI, NOF, and CPUT.

https://doi.org/10.1371/journal.pone.0313772.t003

Table 4. Performance Results based on NOI, NOF, and CPUT.

https://doi.org/10.1371/journal.pone.0313772.t004

Table 5. Numerical result of DQSFR, CG-DESCENT, MSTCG, MSFR, FR, MF, and TSCG methods.

https://doi.org/10.1371/journal.pone.0313772.t005

All codes for the computational procedures are written in MATLAB R2023a and implemented on an Intel(R) Core i7 PC with 8GB RAM, a 2.90 GHz CPU, and the XP operating system. The iteration process is terminated if $\|g_k\| \le 10^{-6}$ or any of the following holds:

  1. Iterations exceed 2000 without obtaining an iterate satisfying the stopping tolerance.
  2. The direction $d_k$ is not a descent direction.
  3. Failure occurs due to insufficient memory during code execution.

If any of the above occurs, a failure is declared at that point and denoted by (**).
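A small MATLAB helper reflecting these termination rules might look as follows; the signature and variable names are our own illustration, not the authors' code.

```matlab
% Termination logic for the experiments: g is the current gradient, d the
% current direction, k the iteration count; failures are reported as (**).
function [stop, failed] = check_termination(g, d, k, tol, kmax)
    solved = norm(g) <= tol;                 % gradient tolerance reached
    failed = (k > kmax) || (g' * d >= 0);    % cap exceeded, or d not descent
    stop = solved || failed;
end
```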

4.1 Computational efficiency of DQSFR compared with existing methods

The computational efficiency of CG algorithms makes them particularly well-suited for solving large-scale unconstrained optimization problems where explicit storage of the Hessian matrix is prohibitive or where matrix-vector products can be computed efficiently. This is usually associated with the low memory requirements, iterative nature, and convergence behaviour of the algorithms. In this study, the computational efficiency of the proposed DQSFR method is evaluated based on three metrics: the number of iterations (NOI), function evaluations (NOF), and computational time (CPUT) required to achieve a tolerance level of 10^{-6} or another termination criterion. All computations are carried out under the strong Wolfe strategy, and the results are compared with TSCG, MF, FR, MSFR, MSTCG, and CG-DESCENT using their default Wolfe line search parameter values. The comprehensive numerical results are presented in Tables 1-5.

Based on our findings in Tables 1-5, we note that the percentage of problems successfully solved by any CG algorithm depends on several factors, including the type of problem being solved, the choice of initial guess, the implementation of the CG algorithm, the accuracy requirements, the selection of line search parameters, and the convergence criteria used.

4.2 Accuracy

The accuracy of the proposed DQSFR algorithm is evaluated by juxtaposing its computed solutions with those generated by reference CG algorithms possessing similar attributes. Accuracy is gauged through the success and failure rates of the computed solutions, illustrating a notable correlation with the reference solutions, as delineated in Table 6.

Table 6. Assessing the overall accuracy of the solutions obtained by the algorithms.

https://doi.org/10.1371/journal.pone.0313772.t006

Table 6 reports the percentage of successfully solved problems recorded by each algorithm. For well-conditioned benchmark functions, the CG algorithms frequently attain high success rates, typically reaching solution points efficiently within a moderate number of iterations. However, for ill-conditioned problems, convergence may be slower, leading to a decrease in success rates. Based on the results, the proposed method achieved a 100% success rate, which implies that its algorithm solved all problems considered, demonstrating its efficiency and numerical stability for large-scale unconstrained optimization problems. The success of DQSFR may be attributed to its relaxation of the double truncated property associated with earlier spectral methods. The other algorithms considered also achieved high success rates, with the classical FR method recording the highest failure rate (19.2%). This can be attributed to its search direction failing to navigate effectively through the landscape, resulting in stagnation or slow convergence towards a sub-optimal solution.

4.3 Convergence behavior

The convergence behavior of the DQSFR method is examined in this section, considering factors such as the nature of the objective function, the dimensionality of the problem and the algorithmic parameters. A well-established tool introduced by Dolan and Moré [44] is utilized to analyze the efficiency of CG algorithms through performance profiles. These profiles evaluate algorithms based on metrics like the number of iterations (NOI), function evaluations (NOF), or computational time (CPUT) required to achieve a specified level of accuracy.

The performance profile defines a performance ratio for each problem p ∈ P and solver s ∈ S across a set of test problems P and solvers S:

$r_{p,s} = \dfrac{t_{p,s}}{\min\{t_{p,s} : s \in S\}},$

where $t_{p,s}$ is the cost (e.g., NOI, NOF, or CPUT) required by solver s to solve problem p. The cumulative distribution function (CDF) of these ratios is plotted to compare the relative performance of the algorithms: the x-axis represents the performance ratio, while the y-axis shows the fraction of problems solved at or below a given ratio.
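A minimal MATLAB sketch of the Dolan-Moré profile computation is given below. It assumes the results are collected in an np-by-ns cost matrix T (one column per solver, with Inf marking the (**) failures); this data layout is our own assumption, not the authors' script.

```matlab
% Dolan-More performance profiles from a cost matrix T (np problems x
% ns solvers); plots the CDF of the performance ratios r_{p,s}.
function perf_profile(T)
    [np, ns] = size(T);
    r = T ./ min(T, [], 2);                  % ratios against the best solver
    tau = sort(unique(r(isfinite(r))));      % evaluation points on the x-axis
    rho = zeros(numel(tau), ns);
    for s = 1:ns
        for i = 1:numel(tau)
            rho(i, s) = sum(r(:, s) <= tau(i)) / np;  % fraction solved
        end
    end
    semilogx(tau, rho, 'LineWidth', 1.2);
    xlabel('\tau'); ylabel('P(r_{p,s} \leq \tau)');
end
```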

Figs 1, 2, and 3 display the NOI, NOF, and CPUT performance profiles, respectively, comparing the DQSFR method with CG-DESCENT, MSTCG, MSFR, FR, MF, and TSCG. The results indicate that the DQSFR method outperforms most of the algorithms across the test problems. Specifically, DQSFR achieves the highest success rate, solving 100% of the problems, which reflects its robustness and scalability. However, it is important to clarify that this success rate pertains to the specific test set and does not imply universal convergence.

Further analysis shows that the curves of CG-DESCENT, MSTCG, and TSCG methods compete closely with DQSFR at certain points. Meanwhile, the FR, MSFR, and MF methods exhibit lower performance, with curves consistently below the leading methods, indicating their limited effectiveness for the problems considered.

The observed behavior of DQSFR on high-dimensional problems is influenced by the problem characteristics. Instances of non-convergence for competing methods often arise from numerical difficulties, which are less impactful for DQSFR due to its efficient handling of such cases.

Based on the results in Tables 1-6 and the performance profiles in Figs 1-3, the DQSFR method demonstrates a compelling combination of numerical stability, efficiency, and scalability, making it a robust choice for large-scale unconstrained optimization.

4.4 Application to portfolio selection

A portfolio refers to a collection of stocks or assets owned by an investor, who will always want to find the best way of allocating the portfolio so as to maximize profit or minimize risk [6,45]. Alternatively, one may look at maximizing the returns while incurring some risk [46,47]. In recent years, the application of conjugate gradient methods to portfolio selection problems has gained significant attention. Several studies have explored this approach, showcasing its potential in solving optimization problems within the context of portfolio management. Although classical CG methods are widely used due to their simplicity, low memory requirements, and strong convergence properties, they face significant limitations in the context of portfolio optimization. These limitations include sensitivity to ill-conditioning, slow convergence in highly irregular risk-return landscapes, and difficulties in handling non-smooth or dynamic constraints commonly encountered in real-world portfolios. As a result, the effectiveness of classical CG methods in practical portfolio optimization is often hindered. Therefore, there is a need for new CG methods that can address these challenges, providing improved computational efficiency and robustness for solving portfolio optimization problems [5,48-50].

In this study, we apply our presented method, together with the other competing methods, to solve a risk management problem for a portfolio of stocks. Given a stock i, its return at time t, denoted by $r_{it}$, is obtained as

$r_{it} = \dfrac{P_{it} - P_{i,t-1}}{P_{i,t-1}},$
where $P_{it}$ and $P_{i,t-1}$ denote the closing prices at times t and t-1, respectively. Using the returns $r_{it}$, the mean returns, expected returns and covariance between stocks can be calculated. In solving a portfolio problem, one is required to solve the risk-averse portfolio problem

$\min_{w}\ w^T V w \quad \text{subject to} \quad \sum_{i=1}^{q} w_i = 1,$
where $w = (w_1, w_2, \ldots, w_q)^T$ represents the portfolio investment weight proportions of the stocks and V is a q × q covariance matrix of the stocks. By setting

$w_q = 1 - w_1 - w_2 - \cdots - w_{q-1},$

the above constrained optimization problem can be converted into the unconstrained optimization problem

$\min_{(w_1, \ldots, w_{q-1})^T \in \mathbb{R}^{q-1}}\ w^T V w, \qquad w_q = 1 - \sum_{i=1}^{q-1} w_i,$   (4.1)

which we can now easily solve using the new method and the other earlier conjugate gradient methods used for comparison.
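The following MATLAB sketch ties these pieces together under stated assumptions: the returns, mean returns and covariance matrix are computed from a T-by-q matrix P of weekly closing prices (our assumed data layout), and the budget constraint is eliminated by substituting the last weight as above. The reduced objective and its gradient can then be passed to any of the CG routines; names such as portfolio_obj are ours.

```matlab
% Assumed data layout: P is a T-by-q matrix of weekly closing prices.
R  = diff(P) ./ P(1:end-1, :);     % r_{it} = (P_{it} - P_{i,t-1}) / P_{i,t-1}
mu = mean(R)';                     % mean returns of the stocks (cf. Table 8)
V  = cov(R);                       % q-by-q covariance matrix (cf. Table 9)

% Example: minimize the reduced objective from the uniform starting point,
% reusing the dqsfr_sketch and a user-supplied theta_fun from earlier:
% z = dqsfr_sketch(@(z) portfolio_obj(z, V), 0.02*ones(size(P,2)-1, 1), ...
%                  theta_fun, 1e-4, 0.5, 1e-6, 2000);

% Reduced objective in z = (w_1, ..., w_{q-1}), with w_q = 1 - sum(z).
function [f, grad] = portfolio_obj(z, V)
    w = [z; 1 - sum(z)];           % full weight vector, so sum(w) = 1
    f = w' * V * w;                % portfolio risk (variance), as in (4.1)
    G = 2 * V * w;                 % gradient with respect to the full w
    grad = G(1:end-1) - G(end);    % chain rule for the eliminated weight
end
```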

For the numerical experiments, we use a portfolio of ten (10) stocks (i.e., q = 10). We use weekly closing prices of the stocks over the period from 10 July 2023 to 10 July 2024, obtained from the database https://www.investing.com/. The stocks chosen are presented in Table 7. Tables 8 and 9 show the mean returns of the stocks and the covariance between the stocks, respectively. We solve the unconstrained problem (4.1) with the covariance matrix V as in Table 9, starting from two initial points, including (0.02, 0.02, …, 0.02).

The resulting stock allocations for each of these initial points are summarized in Table 10, leading to a minimized portfolio risk and an expected portfolio return of 0.000747. Notably, the allocation includes a negative weight for PLTR (-1.9%), indicating a short-selling position, where the investor sells a stock they do not own, often borrowed from a broker. This allocation strategy demonstrates the flexibility of the proposed method in handling diverse investment scenarios. The results also highlight that the proposed method consistently achieves feasible and efficient solutions, demonstrating its superiority over competing methods by attaining a balance between lower risk and higher return. However, its sensitivity to initial points, which is common among iterative optimization methods, suggests the potential for further improvement, such as the inclusion of regularization techniques to enhance robustness. These findings underline the practical utility of the proposed method in portfolio optimization scenarios.

5 Conclusion

In this work, we applied the strong Wolfe line search criteria to show the sufficient descent property of a spectral FR (DQSFR) method for large-scale unconstrained optimization problems. The method relaxes the double truncated property associated with earlier spectral methods. This was achieved by utilizing the favorable convergence properties of the quasi-Newton method and the extended conjugacy condition. Similarly, convergence of the new scheme is achieved without enforcing any boundedness condition. To evaluate the efficiency of the DQSFR method against the TSCG, MF, MSTCG, CG-DESCENT, FR, and MSFR conjugate gradient algorithms, the study performed extensive numerical experiments under the Wolfe conditions. Findings from the numerical computations show that the proposed DQSFR method offers a compelling combination of adaptability, numerical stability, efficiency, and scalability, making it a preferred choice for solving large-scale unconstrained optimization problems in various scientific and engineering applications. In the realm of portfolio selection, the proposed method was expanded to tackle the challenge of stock allocation, maximizing returns while minimizing risks.

References

  1. Hamel N, Benrabia N, Ghiat M, Guebbai H. A new hybrid conjugate gradient algorithm based on the Newton direction to solve unconstrained optimization problems. J Appl Math Comput. 2023;69(3):2531–48.
  2. Hafaidia I, Guebbai H, Al-Baali M, Ghiat M. A new hybrid conjugate gradient algorithm for unconstrained optimization. Vestnik Udmurtskogo Universiteta Matematika Mekhanika Komp'yuternye Nauki. 2023;33(2):348–64.
  3. Malik M, Mamat M, Abas SS, Sulaiman IM. Performance analysis of new spectral and hybrid conjugate gradient methods for solving unconstrained optimization problems. IAENG Int J Comput Sci. 2021;48(1).
  4. Salihu N, Kumam P, Awwal AM, Arzuka I, Seangwattana T. A structured Fletcher-Reeves spectral conjugate gradient method for unconstrained optimization with application in robotic model. Oper Res Forum. 2023;4(4):Paper No. 81.
  5. Awwal AM, Sulaiman IM, Malik M, Mamat M, Kumam P, Sitthithakerngkiet K. A spectral RMIL conjugate gradient method for unconstrained optimization with applications in portfolio selection and motion control. IEEE Access. 2021;9:75398–414.
  6. Diphofu T, Kaelo P, Tufa AR. A modified nonlinear conjugate gradient algorithm for unconstrained optimization and portfolio selection problems. RAIRO Oper Res. 2023;57(2):817–35.
  7. Malik M, Abas SS, Mamat M, Mohammed IS. A new hybrid conjugate gradient method with global convergence properties. Int J Adv Sci Technol. 2020;29(5):199–210.
  8. Fletcher R, Reeves CM. Function minimization by conjugate gradients. Comput J. 1964;7:149–54.
  9. Polyak BT. A general method for solving extremal problems. Dokl Akad Nauk SSSR. 1967;174:33–6.
  10. Polak E, Ribière G. Note sur la convergence de méthodes de directions conjuguées. Rev Française Informat Recherche Opérationnelle. 1969;3(16):35–43.
  11. Hestenes MR, Stiefel E. Methods of conjugate gradients for solving linear systems. J Res Natl Bur Stand. 1952;49(6):409.
  12. Dai YH, Yuan Y. A nonlinear conjugate gradient method with a strong global convergence property. SIAM J Optim. 1999;10(1):177–82.
  13. Mohammed IS, Mamat M, Abashar A, Rivaie M, Salleh Z. A modified nonlinear conjugate gradient method for unconstrained optimization. AMS. 2015;9:2671–82.
  14. Hager WW, Zhang H. A survey of nonlinear conjugate gradient methods. Pacific J Optim. 2006;2(1):35–58.
  15. Salihu N, Odekunle MR, Waziri MY, Halilu AS, Salihu S. A Dai-Liao hybrid conjugate gradient method for unconstrained optimization. Int J Ind Optim. 2021;2(2):69.
  16. Perry A. Technical note—a modified conjugate gradient algorithm. Oper Res. 1978;26(6):1073–8.
  17. Salihu N, Kumam P, Sulaiman IM, Seangwattana T. An efficient spectral minimization of the Dai-Yuan method with application to image reconstruction. AIMS Math. 2023;8(12):30940–62.
  18. Dai YH, Liao LZ. New conjugacy conditions and related nonlinear conjugate gradient methods. Appl Math Optim. 2001;43(1):87–101.
  19. Babaie-Kafaki S, Ghanbari R. A descent family of Dai–Liao conjugate gradient methods. Optim Methods Softw. 2013;29(3):583–91.
  20. Hager WW, Zhang H. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J Optim. 2005;16(1):170–92.
  21. Andrei N. New accelerated conjugate gradient algorithms as a modification of Dai–Yuan's computational scheme for unconstrained optimization. J Comput Appl Math. 2010;234(12):3397–410.
  22. Barzilai J, Borwein JM. Two-point step size gradient methods. IMA J Numer Anal. 1988;8(1):141–8.
  23. Raydan M. The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem. SIAM J Optim. 1997;7(1):26–33.
  24. Chen F, Zhang J, Ye J, Chen Z. A descent modified HS conjugate gradient method with an optimal property. RAIRO Oper Res. 2023;57(2):541–50.
  25. Jian J, Yang L, Jiang X, Liu P, Liu M. A spectral conjugate gradient method with descent property. Mathematics. 2020;8(2):280.
  26. Salihu N, Kumam P, Awwal AM, Sulaiman IM, Seangwattana T. The global convergence of spectral RMIL conjugate gradient method for unconstrained optimization with applications to robotic model and image recovery. PLoS One. 2023;18(3):e0281250. pmid:36928212
  27. Salihu N, Odekunle M, Waziri M, Halilu A. A new hybrid conjugate gradient method based on secant equation for solving large scale unconstrained optimization problems. Iranian J Optim. 2020;12(1):33–44.
  28. Salihu N, Odekunle MR, Saleh AM, Salihu S. A Dai-Liao hybrid Hestenes-Stiefel and Fletcher-Reeves methods for unconstrained optimization. Int J Ind Optim. 2021;2(1):33.
  29. Zhongbo S, Xue C, Yingying G, Yuncheng G, Yue S. Two modified PRP conjugate gradient methods and their global convergence for unconstrained optimization. In: 2017 29th Chinese Control and Decision Conference (CCDC). 2017. p. 786–90.
  30. Liu JK, Feng YM, Zou LM. A spectral conjugate gradient method for solving large-scale unconstrained optimization. Comput Math Appl. 2019;77(3):731–9.
  31. Faramarzi P, Amini K. A spectral three-term Hestenes–Stiefel conjugate gradient method. 4OR-Q J Oper Res. 2020;19(1):71–92.
  32. Jian J, Chen Q, Jiang X, Zeng Y, Yin J. A new spectral conjugate gradient method for large-scale unconstrained optimization. Optim Methods Softw. 2016;32(3):503–15.
  33. Faramarzi P, Amini K. A modified spectral conjugate gradient method with global convergence. J Optim Theory Appl. 2019;182(2):667–90.
  34. Al-Baali M. Descent property and global convergence of the Fletcher—Reeves method with inexact line search. IMA J Numer Anal. 1985;5(1):121–4.
  35. Andrei N. Nonlinear conjugate gradient methods for unconstrained optimization. Springer Optimization and Its Applications, vol. 158. Cham: Springer; 2020. https://doi.org/10.1007/978-3-030-42950-8
  36. Zoutendijk G. Nonlinear programming, computational methods. In: Abadie J, editor. Integer and nonlinear programming. Amsterdam: North-Holland; 1970. p. 37–86.
  37. Salihu N, Kumam P, Sulaiman IM, Arzuka I, Kumam W. An efficient Newton-like conjugate gradient method with restart strategy and its application. Math Comput Simulat. 2024;226:354–72.
  38. Salihu N, Kumam P, Sulaiman IM, Kumam W. Some combined techniques of spectral conjugate gradient methods with applications to robotic and image restoration models. Numer Algorithms. 2024;1:1–41.
  39. Jamil M, Yang XS. A literature survey of benchmark functions for global optimisation problems. IJMMNO. 2013;4(2):150.
  40. Hager WW, Zhang H. Algorithm 851. ACM Trans Math Softw. 2006;32(1):113–37.
  41. Du S, Chen Y. Global convergence of a modified spectral FR conjugate gradient method. Appl Math Comput. 2008;202(2):766–70.
  42. Amini K, Faramarzi P. Global convergence of a modified spectral three-term CG algorithm for nonconvex unconstrained optimization problems. J Comput Appl Math. 2023;417:114630.
  43. Mrad H, Fakhari SM. Optimization of unconstrained problems using a developed algorithm of spectral conjugate gradient method calculation. Math Comput Simulat. 2024;215:282–90.
  44. Dolan ED, Moré JJ. Benchmarking optimization software with performance profiles. Math Program. 2002;91(2):201–13.
  45. Roman S. Introduction to the mathematics of finance: from risk management to options pricing. Berlin: Springer; 2004.
  46. Bartholomew-Biggs MC. Nonlinear optimization with financial applications. New York: Springer Science+Business Media; 2006.
  47. Bartholomew-Biggs MC, Kane SJ. A global optimization problem in portfolio selection. Comput Manag Sci. 2007;6(3):329–45.
  48. Malik M, Abubakar AB, Sulaiman IM, Mamat M, Abas SS. A new three-term conjugate gradient method for unconstrained optimization with applications in portfolio selection and robotic motion control. IAENG Int J Appl Math. 2021;51(3).
  49. Malik M, Sulaiman IM, Abubakar AB, Ardaneswari G. A new family of hybrid three-term conjugate gradient method for unconstrained optimization with application to image restoration and portfolio selection. AIMS Math. 2023;8(1):1–28.
  50. Mtagulwa P, Kaelo P, Diphofu T, Kaisara K. Application of a globally convergent hybrid conjugate gradient method in portfolio optimization. J Appl Math Statist Inform. 2024;20(1):33–52.