
Creating a novel algorithm for studying the strong convergence to a sequence with applications

  • Hasanen A. Hammad ,

    Contributed equally to this work with: Hasanen A. Hammad, Mohammed E. Dafaalla, Manal Elzain Mohamed Abdalla

    Roles Conceptualization

    h.abdelwareth@qu.edu.sa

    Affiliation Department of Mathematics, College of Sciences, Qassim University, Buraydah, Saudi Arabia

  • Mohammed E. Dafaalla ,

    Contributed equally to this work with: Hasanen A. Hammad, Mohammed E. Dafaalla, Manal Elzain Mohamed Abdalla

    Roles Conceptualization, Methodology

    Affiliation Department of Mathematics, College of Sciences, Qassim University, Buraydah, Saudi Arabia

  • Manal Elzain Mohamed Abdalla

    Contributed equally to this work with: Hasanen A. Hammad, Mohammed E. Dafaalla, Manal Elzain Mohamed Abdalla

    Roles Formal analysis, Funding acquisition

    Affiliation Department of Mathematics, College of Science and Arts, King Khalid University, Mahayil, Saudi Arabia

Expression of Concern

The PLOS One Editors issue this Expression of Concern because this article [1] was identified as one of a series of submissions for which we have concerns about the peer review process. Readers are advised to interpret the article [1] with caution.

19 Feb 2026: The PLOS One Editors (2026) Expression of Concern: Creating a novel algorithm for studying the strong convergence to a sequence with applications. PLOS ONE 21(2): e0343283. https://doi.org/10.1371/journal.pone.0343283 View expression of concern

Abstract

This manuscript introduces a novel algorithm tailored to investigate the strong convergence of sequences under specific conditions. Notably, the framework incorporates a finite family of generalized demimetric operators within real Hilbert spaces, broadening existing operator theory. The proposed algorithm efficiently establishes strong convergence and demonstrates its versatility through successful applications, including proving the existence of solutions to split minimization and feasibility problems, thereby showcasing its potential in optimization and numerical analysis.

1 Introduction

Determining zero points of maximal monotone operators (MMOs) is a fundamental aspect of optimization, underpinning solutions to various practical problems across disciplines and driving theoretical advancements. Notably, this concept facilitates efficient reformulations of prominent mathematical challenges, including split feasibility, variational inequalities, and convex minimization problems. See [1–8] for more details.

This is one way to formulate the problem mathematically:

(1)

where the operator is an MMO defined on a Hilbert space (HS), and the associated symbol denotes the solution set of problem (1). This mathematical framework has broad applications in a number of domains, such as machine learning, signal processing, and medical imaging; see [9–11] for more information. For example, zero points of MMOs can help reconstruct images from partial data in medical imaging, and these operators are essential for formulating optimization problems in machine learning that involve intricate models and huge data sets.

The study of solutions for (1) originated with Martinet’s pioneering work [12] in 1970, introducing the proximal point algorithm. Rockafellar [13] subsequently enhanced and extended this method, spawning a substantial body of literature that significantly advanced the field.
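As a hedged illustration of the Martinet–Rockafellar proximal point idea (not the algorithm proposed later in this paper), the sketch below iterates the resolvent of Q = ∂f for f(x) = |x|, whose resolvent is the soft-thresholding map and whose unique zero point is x = 0; all names and constants are illustrative:

```python
import numpy as np

def prox_abs(v, lam):
    # Proximal map of f(x) = |x|: soft-thresholding with parameter lam.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proximal_point(x0, lam=0.5, iters=50):
    # Proximal point iteration x_{n+1} = J_lam(x_n) = prox_{lam f}(x_n)
    # for the MMO Q = d(|x|); the unique zero of Q is x = 0.
    x = x0
    for _ in range(iters):
        x = prox_abs(x, lam)
    return x

print(proximal_point(3.0))  # converges to the zero point 0.0
```

Each step shrinks |x| by at most lam, so convergence here is finite; for general MMOs only (weak) convergence of the iterates to a zero point is guaranteed.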

The proximal point approach has since undergone numerous modifications, each designed for a particular HS Ω. These improvements have been examined by numerous authors, who have presented a broad range of strategies meant to increase convergence rates and generalizability in various contexts [14–18].

The study of fixed points (FPs) for nonlinear mappings is a vibrant area within nonlinear analysis, driven by its diverse applications in fields like fuzzy logic, game theory, and inverse problems [19–22]. The estimation of common solutions for the zero points of MMOs and the FPs of nonlinear mappings has been the subject of a substantial amount of research. This area’s potential to address mathematical models with constraints, formulated as FP or zero-point problems [23–33], fuels ongoing investigation. The interplay between monotone operators and FP theory presents a fertile ground for further exploration.

Recent advancements in FP theory include Sahu et al.’s pioneering work [34], introducing the S-iteration technique for identifying common FPs of non-self quasi-nonexpansive mappings in uniformly convex Banach spaces. Their innovative approach, validated through numerical experiments, showcased tangible real-world applications. Building upon this success, Sahu et al. [35] further expanded the scope of FP methods by developing a variable anchoring iterative algorithm for solving problems on Hadamard manifolds, underscoring the pivotal role of geometric contexts in their analysis and demonstrating the versatility of FP theory in tackling complex mathematical problems.

2 Preliminaries

In this section, we recall a number of key definitions and notation that will be used in the sequel. Here, we assume that Ω is a real HS, M is a non-empty, convex and closed (NCC) subset of Ω, fix(Q) is the set of all FPs of the mapping Q, and → and ⇀ denote the strong and weak convergence of a sequence to μ, respectively.

Definition 2.1. [36] A multi-valued mapping is called

  1. (i) Monotone, if ⟨ ω − υ , μ − 𝜐 ⟩ ≥ 0 ,  provided that ω ∈ Q ( μ )  and υ ∈ Q ( 𝜐 ) ; 
  2. (ii) Maximal monotone (MM), if Q is monotone and the graph of Q is not properly contained in the graph of any other monotone mapping, where

The single-valued mapping J_φ = (I + φQ)^(−1), φ > 0, associated with the MM mapping Q is known as the resolvent mapping of Q. The resolvent J_φ is firmly nonexpansive for each φ > 0 and fix(J_φ) = Q^(−1)(0). For more details, see [37].
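For intuition, the resolvent (I + φQ)^(−1) has a closed form when Q is the gradient of a quadratic. The sketch below is illustrative only, assuming Q(x) = Aᵀ(Ax − b), i.e. Q = ∇f for f(x) = ½‖Ax − b‖²; it computes the resolvent by solving the defining linear system:

```python
import numpy as np

def resolvent_quadratic(A, b, phi, x):
    # Resolvent J_phi = (I + phi*Q)^(-1) of Q = grad f, f(x) = 0.5*||Ax - b||^2.
    # Solving (I + phi*A^T A) y = x + phi*A^T b gives y with y + phi*Q(y) = x.
    n = A.shape[1]
    return np.linalg.solve(np.eye(n) + phi * A.T @ A, x + phi * A.T @ b)
```

A zero of Q (here, the least-squares solution) is left fixed by the resolvent for every φ > 0, and the map is (firmly) nonexpansive, consistent with the properties recalled above.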

Definition 2.2. [38] Let M be an NCC subset of Ω. The mapping P_M is called the metric projection onto M if, for each μ ∈ Ω, P_M μ is the unique point of M such that ‖μ − P_M μ‖ ≤ ‖μ − ν‖ for all ν ∈ M.

Also, for all μ ∈ Ω, ν ∈ M, we have ⟨μ − P_M μ, ν − P_M μ⟩ ≤ 0.
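As a concrete instance of Definition 2.2 (illustrative, for the closed ball M = {x : ‖x − c‖ ≤ r}), the projection has a closed form, and the characterizing inequality ⟨μ − P_M μ, ν − P_M μ⟩ ≤ 0 can be checked directly:

```python
import numpy as np

def project_ball(mu, center, radius):
    # Metric projection P_M onto the closed ball M = {x : ||x - center|| <= radius}:
    # the unique nearest point of M to mu.
    d = mu - center
    norm = np.linalg.norm(d)
    if norm <= radius:
        return mu          # mu already lies in M
    return center + radius * d / norm
```

For any ν ∈ M, the vector from P_M μ to ν makes an obtuse angle with μ − P_M μ, which is exactly the inequality in Definition 2.2.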

Definition 2.3. [39] A mapping Q :  →  with is called

(a) ς − DMM, where ς is a real number with if

for all μ ∈  and

(b) ϕ-generalized DMM, where ϕ ∈  − { 0 }  if

for all μ ∈  and

Remark 2.4. [40] Based on the above definition, the ϕ-generalized DMM is DMM, for ϕ > 0 . 

Lemma 2.5. [37] The following relations hold : 

  1. (i)
  2. (ii)
  3. (iii)
  4. (iv)

for and with

Lemma 2.6. [38] Let the mapping Q be nonexpansive on Ω. If a sequence converges weakly to μ and the residuals μ_u − Qμ_u converge strongly to 0, then μ ∈ fix(Q); that is, Q is demiclosed on Ω.

Lemma 2.7. [41] Assume that Q : M → Ω is a ϕ−DMM such that fix ( Q ) ≠ ∅ and ϕ ∈ ( − ∞ , 1 ) .  Also, let b ∈ ( 0 , ∞ )  and define V = ( 1 − b ) I + bQ .  Then, the following statements are true:

  1. (1) if b ≠ 0 ,  fix ( Q ) = fix ( V ) , 
  2. (2) fix(Q) is a closed convex subset of  , 
  3. (3) for b ∈ ( 0 , 1 − ϕ ) ,  the mapping V is quasi-nonexpansive.

Lemma 2.8. [42] The set fix(Q) is closed and convex, provided that Q is a ϕ-generalized DMM on Ω, where ϕ ∈ ℝ − { 0 } . 

Lemma 2.9. [43] Assume that is a sequence of non-negative real numbers. Let

where such that and then
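Lemma 2.9 is the standard tool for forcing a nonnegative sequence obeying a_{u+1} ≤ (1 − σ_u)a_u + σ_u δ_u to zero when σ_u ∈ (0, 1), Σσ_u = ∞, and limsup δ_u ≤ 0. A quick numerical check of this behaviour, with purely illustrative choices σ_u = 1/(u+2) and δ_u = 1/(u+1):

```python
def driven_sequence(a0=10.0, iters=5000):
    # a_{u+1} = (1 - s_u)*a_u + s_u*d_u with s_u = 1/(u+2) (divergent sum)
    # and d_u = 1/(u+1) -> 0, so Lemma 2.9 predicts a_u -> 0.
    a = a0
    for u in range(iters):
        s = 1.0 / (u + 2)
        d = 1.0 / (u + 1)
        a = (1 - s) * a + s * d
    return a

print(driven_sequence())  # small: the initial value a0 = 10 is forgotten
```

Here the product Π(1 − σ_u) ≈ 1/(u + 1) kills the initial term, which is exactly why the divergence of Σσ_u is needed in the hypothesis.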

Lemma 2.10. [41] Assume that M is a nonempty convex subset of the HS  ,  and is an infinite family of DMMs with and Let be a positive sequence such that Then, is a b–DMM with and

3 Strong convergence results

This section contains the definition of our algorithm and a proof of its strong convergence. In this part, we assume that is a finite class of -generalized DMMs where and is demiclosed at the origin for all j = 1 , 2 , ⋯ , u .  Further, g :  →  is a -contraction mapping with and is a finite class of multi-valued monotone mappings for every k = 1 , 2 , ⋯ , P . 

Algorithms for DMM play a crucial role in approximating FPs, offering efficient and robust methods for solving complex optimization problems. By leveraging DMMs, these algorithms ensure convergence to fixed points in various mathematical structures, including HSs. Their significance extends to numerous applications, such as variational inequalities, split feasibility problems, and minimization problems, facilitating solutions in fields like machine learning, data analysis, and control systems. So, our algorithm here is as follows:

Algorithm 3.1. Assume that Compute and by

(2)
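The displayed update rule (2) did not survive extraction here. As a hedged, purely illustrative stand-in for the structure such schemes share (an anchoring contraction g blended with an operator step, under weights like those in hypotheses (h1)–(h2) below), consider the following sketch; T, g, and the weights are assumptions, not the authors' scheme:

```python
import numpy as np

def viscosity_iteration(T, g, x0, iters=200):
    # Illustrative viscosity-type scheme (NOT the paper's update (2)):
    #   x_{u+1} = s_u * g(x_u) + (1 - s_u) * T(x_u),
    # with anchoring weights s_u -> 0 and sum of s_u divergent.
    x = np.asarray(x0, dtype=float)
    for u in range(1, iters + 1):
        s = 1.0 / (u + 1)
        x = s * g(x) + (1.0 - s) * T(x)
    return x

T = lambda x: 0.5 * x            # nonexpansive, fix(T) = {0}
g = lambda x: 0.25 * x + 0.1     # a 0.25-contraction
x = viscosity_iteration(T, g, np.array([5.0]))   # drifts toward fix(T)
```

The vanishing weights let the operator step dominate asymptotically, while the divergent sum of weights keeps the contraction's anchoring influence, which is the mechanism behind strong (rather than merely weak) convergence in results of this type.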

Lemma 3.2. Let and be a self-mapping defined by for all j = 1 , 2 , ⋯ , u with and Then, is -generalized DMM if is -generalized DMM for all j = 1 , 2 , ⋯ , u . 

Proof. Let j = 1 , 2 , ⋯ , u and, by the definition of -generalized DMM, we have

(3)

for all μ ∈  .  Since

Then,

(4)

From (4) in (3), we get

Thus, one can write

This proves that is -generalized demimetric. □

Lemma 3.3. Assume that is a bounded sequence in and Define where refer to the resolvent mapping of for all k = 1 , 2 , ⋯ , P .  If

Then,

Proof. Since is bounded, there exists a subsequence of such that and

(5)

By Lemma 3.2, is an -generalized DMM, provided that is -generalized DMM for all with for each j = 1 , 2 , ⋯ , u .  Utilizing Remark 2.4, we have is -demimetric. Thanks to Lemma 2.10, is DMM and so nonexpansive. Furthermore, because as b →  and then, by Lemmas 2.6, 2.7, and 2.10, one can obtain

Next, we show that According to given assumption, we get

By induction, if k = 1, we have or we can write

Hence, one has

Analogously, if k = 2, we have Now, we assume that

(6)

and

(7)

Taking u →  in (7), we have

(8)

From (8) in (6) and taking u →  ,  we can write

Hence,

Following the same approach, we have

which yields, Thus, for all k = 1 , 2 , ⋯ , P .  This proves that Therefore, So, by the equality (5) and metric projection properties, we have the result. □

Theorem 3.4. Let the solution set and be a sequence given by Algorithm 3.1 for any where b ∈ ( 0 , ρ )  with ρ ∈ ( 0 , 1 ) ,  ϱ > 0 ,  and under the following hypotheses:

  1. (h1)
  2. (h2) for a positive term sequence ,

Then

Proof. Consider and . Algorithm 3.1 can be expressed as

(9)

Since is firmly nonexpansive, it is nonexpansive for all k = 1 , 2 , ⋯ , P .  Thus, is closed and convex for all k = 1 , 2 , ⋯ , P .  Moreover, is DMM, so by Lemma 2.8 we get that, for all k = 1 , 2 , ⋯ , P ,  is closed and convex. This implies that Θ is NCC. Thus, is well-defined. From the definition of and the hypotheses, we conclude that (Remark 3.1 [43]).

We claim that is bounded. Assume that and we have

(10)(11)(12)

and

(13)

Since then there exists N > 0 such that Utilizing the inequalities (10)-(13), we can write

which yields that the sequence is bounded, and hence the sequences are also bounded.

Now,

(14)(15)

By Lemma 2.5, we can write

hence,

(16)

Also,

(17)

From the inequalities (14)–(17) and Lemma 2.5, we get

(18)

which implies that

(19)

Inequality (19) can be simplified as

(20)

where

The rest of the proof will be divided into the following two cases:

Case 1: For some , where , assume that is a nonincreasing sequence. Then the limit exists because the sequence is monotonic and bounded. Since and using (18), we get

which yields

(21)

Letting u →  and using the conditions and we obtain that

(22)

Further,

(23)

and

(24)

Consider

Since, is firmly nonexpansive, then

which implies that

Following the same approach, we get

(25)

It follows from (16), (17), (18) and (25) that

which yields,

(26)

In (26), taking u →  and using the conditions and we obtain that

Hence,

(27)

Also, we have

(28)

From (27) in (28) after taking the limit as u →  ,  we have

(29)

It follows from (24) and (29) that

(30)

From the definition of and since we conclude that

(31)

Based on (23), (24), (29), (31), we can write

From the inequalities (22), (24), (27), (30), and Lemma 3.3, we have

(32)

Using (20) and (32) and Lemma 2.9, we obtain that

Case 2: Assume that for any , the sequence is not monotonically decreasing. Then, there is a subsequence of  { u }  such that for each b ∈  .  Inequality (21) implies that

Using the conditions and we have

So,

and

Utilizing (26), we have

Letting l →  in the above inequality, we get

Analogously as in Case 1, one can write

and

Using the same method as in Case 1, we conclude that

(33)

Analogously to inequality (20), we have

(34)

hence

Since we get Therefore,

(35)

It follows from (33) and (35) that By inequalities (33) and (34), we have Moreover, since for all l ∈ , we get This completes the proof.

4 Applications

No one can deny the importance of algorithms in scientific applications, especially in the field of optimization. Therefore, in this section we apply the main results to find solutions of the split minimization problem (SMP) and the split feasibility problem (SFP).

4.1 Application to the SMP

In this part, we propose the following SMP:

(36)

where the function f : Ω → ( − ∞ , + ∞ ]  is convex, proper, and lower semicontinuous (CPLS, for short).

The subdifferential of a function is defined as

Definition 4.1. [44] Assume that f :  → ( −  ,  ]  is a CPLS function. The subdifferential ∂f of f is defined by

Minty [45] proved that the subdifferential ∂f is an MMO on Ω, while Le [46] showed that problem (36) is equivalent to the problem
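The equivalence just cited says that x minimizes a CPLS function f iff 0 ∈ ∂f(x) iff x is a fixed point of the resolvent of ∂f (the proximal map). A minimal sketch, assuming for illustration f(x) = |x − a| with a = 2, whose proximal map is a shifted soft-threshold:

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: the proximal map of t*|.|
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_shifted_abs(v, phi, a=2.0):
    # Resolvent of the subdifferential of the CPLS function f(x) = |x - a|.
    return a + soft(v - a, phi)

# x minimizes f iff 0 is in df(x) iff x is a fixed point of the resolvent:
v = 5.0
for _ in range(10):
    v = prox_shifted_abs(v, 0.5)   # proximal point iteration on f
```

Iterating the resolvent from any starting point drives v to the minimizer a = 2, and the minimizer itself is left fixed by the resolvent for every φ > 0.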

Based on the above facts and using Theorem 3.4, we can introduce the following theorem:

Theorem 4.2. Let be a finite class of -generalized DMMs where and is demiclosed at the origin for all j = 1 , 2 , ⋯ , u ,  and let be CPLS functions for all k = 1 , 2 , ⋯ , P .  Let g be a -contraction mapping with and For any assume that the sequence is described by

where refers to the solution of the problem (36), b ∈ ( 0 , λ )  with λ ∈ ( 0 , 1 ) .  Further , and ρ ∈ ( 0 , 1 )  under the following assumptions:

  1. (i)
  2. (ii) for a positive term sequence ,

Then,

Proof. The result follows immediately from Theorem 3.4 by using the fact that ∂f is an MMO on Ω and setting for all k = 1 , 2 , ⋯ , P .  □

4.2 Application to the SFP

We discuss in this part the following SFP, which was introduced by Censor and Elfving [47]:

(37)

where M is an NCC subset of and E :  →  is a bounded linear operator. The indicator function of the set M is described as

The normal cone of M at ϱ ∈ Ω is given by

Obviously, the indicator function of M is a CPLS function and its normal cone mapping is an MMO. Moreover,

where is the resolvent operator associated with and φ > 0 .  Further,
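The key fact behind this application is that the resolvent of the normal cone of M coincides with the metric projection P_M for every φ > 0, so a projection step replaces the resolvent in the iteration. A minimal sketch, assuming for illustration the box M = [lo, hi]^n, where the projection is componentwise clipping:

```python
import numpy as np

def project_box(x, lo, hi):
    # For M = [lo, hi]^n, the resolvent of the normal cone N_M (any phi > 0)
    # equals the metric projection P_M: componentwise clipping onto the box.
    return np.clip(x, lo, hi)
```

The φ-independence is what makes the SFP specialization cheap: no linear solve is needed, only a closed-form projection onto each constraint set.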

According to the above illustrations and using Theorem 3.4, we present the following theorem:

Theorem 4.3. Let be a finite class of -generalized DMMs where and is demiclosed at the origin for all j = 1 , 2 , ⋯ , u. Assume that is a bounded linear operator for all k = 1 , 2 , ⋯ , P .  Let g be a -contraction mapping with and For any assume that the sequence is given by

where refers to the solution of the problem (37), b ∈ ( 0 , λ )  with λ ∈ ( 0 , 1 ) .  Further, , and ρ ∈ ( 0 , 1 )  under the following assumptions:

  1. (i)
  2. (ii) for a positive term sequence ,

Then,

Proof. The result follows immediately from Theorem 3.4 by using the fact that the normal cone mapping is an MMO and setting for all k = 1 , 2 , ⋯ , P .  □

5 Conclusion and future work

This manuscript presents a novel algorithm for analyzing strong convergence of sequences under specific conditions, incorporating generalized demimetric operators in real HSs. The algorithm effectively resolves split minimization and feasibility problems. Our research offers a versatile tool for optimization and FP analysis, providing a unified approach to tackle complex problems. With robust convergence guarantees, this method ensures reliable results in practical applications across various fields, including applied mathematics, optimization, and engineering. Future research directions can further enhance our algorithm’s robustness and applicability. Key avenues include investigating adaptive techniques for real-time parameter adjustment, expanding the methodology to non-HSs, applying it to high-dimensional optimization problems, and conducting empirical studies in real-world applications such as data analysis, machine learning, and control systems to inform future improvements.

References

  1. 1. Combettes PL, Wajs VR. Signal recovery by proximal forward-backward splitting. Multiscale Model Simul 2005;4(4):1168–200.
  2. 2. Ansari QH, Islam M, Yao JC. Nonsmooth variational inequalities on Hadamard manifolds. Appl Anal. 2020;99:340–58.
  3. 3. Sahu DR, Yao JC, Verma M, Shukla KK. Convergence rate analysis of proximal gradient methods with applications to composite minimization problems. Optimization 2020;70(1):75–100.
  4. 4. Qin X, An NT. Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput Optim Appl. 2019;74:821–50.
  5. 5. Cuong TH, Yao J-C, Yen ND. Qualitative properties of the minimum sum-of-squares clustering problem. Optimization 2020;69(9):2131–54.
  6. 6. Wang Y, Zhang H. Strong convergence of the viscosity Douglas-Rachford algorithm for inclusion problems. Appl Set-Valued Anal Optim. 2020;2(2):339–49.
  7. 7. Qin X, Yao JC. A viscosity iterative method for a split feasibility problem. J Nonlinear Convex Anal. 2019;20(8):1497–506.
  8. 8. Tan B, Xu S, Li S. Inertial shrinking projection algorithms for solving hierarchical variational inequality. J Prob Nonlinear Convex Anal. 2020;21(4):871–88.
  9. 9. An NT, Dong PD, Qin X. Robust feature selection via nonconvex sparsity-based methods. J Nonlinear Var Anal. 2021;5(1):59–77.
  10. 10. Humphries T, Loreto M, Halter B, O’Keeffe W, Ramirez L. Comparison of regularized and superiorized methods for tomographic image reconstruction. J Appl Numer Optim. 2020;2(1):77–99.
  11. 11. Tian M, Xu G. Inertial modified Tseng’s extragradient algorithms for solving monotone variational inequalities and fixed point problems. J Nonlinear Funct Anal. 2020;2020:1–19.
  12. 12. Martinet B. Régularisation d’équations variationnelles par approximations successives. Rev Fr Inform Rech Opér. 1970;4:154–8.
  13. 13. Rockafellar RT. Monotone operators and the proximal point algorithm. SIAM J Control Optim 1976;14(5):877–98.
  14. 14. Cho SY, Qin X, Wang L. Strong convergence of a splitting algorithm for treating monotone operators. Fixed Point Theory Appl. 2014;2014:94.
  15. 15. Qin X, Cho SY, Wang L. Strong convergence of an iterative algorithm involving nonlinear mappings of nonexpansive and accretive type. Optimization 2018;67(9):1377–88.
  16. 16. Ogbuisi FU, Mewomo OT. Iterative solution of split variational inclusion problem in a real Banach spaces. Afr Math. 2017;28:295–309.
  17. 17. Suantai S, Shehu Y, Cholamjiak P. Nonlinear iterative methods for solving the split common null point problem in Banach spaces. Optim Methods Softw 2018;34(4):853–74.
  18. 18. Shehu Y. Convergence results of forward–backward algorithms for sum of monotone operators in Banach spaces. Results Math 2019;74(4):158.
  19. 19. Hammad HA, Cholamjiak W, Yambangwai D, Dutta H. A modified shrinking projection methods for numerical reckoning fixed points of G-nonexpansive mappings in Hilbert spaces with graphs. Miskolc Math. Notes 2019;20(2):941.
  20. 20. Tuyen TM, Hammad HA. Effect of shrinking projection and CQ-methods on two inertial forward–backward algorithms for solving variational inclusion problems. Rend Circ Mat Palermo, II Ser 2021;70(3):1669–83.
  21. 21. Hammad HA, ur Rehman H, De la Sen M. Advanced algorithms and common solutions to variational inequalities. Symmetry 2020;12(7):1198.
  22. 22. Hammad H, Rehman H, De la Sen M. Shrinking projection methods for accelerating relaxed inertial Tseng-type algorithm with applications. Math Probl Eng. 2020;2020:7487383.
  23. 23. Iiduka H, Takahashi W. Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings. Nonlinear Anal 2005;61(3):341–50.
  24. 24. Tian M, Jiang B-N. Weak convergence theorem for zero points of inverse strongly monotone mapping and fixed points of nonexpansive mapping in Hilbert space. Optimization 2017;66(10):1689–98.
  25. 25. Liu L, Qin X, Agarwal RP. Iterative methods for fixed points and zero points of nonlinear mappings with applications. Optimization 2019;70(4):693–713.
  26. 26. Tuyen T, Trang N. Two new algorithms for finding a common zero of accretive operators in Banach spaces. J Nonlinear Var Anal. 2019;3:87–106.
  27. 27. Cholamjiak W, Shehu Y, Yao J-C. Prediction of breast cancer through fast optimization techniques applied to machine learning. Optimization. 2024:1–29. https://doi.org/10.1080/02331934.2024.2385646
  28. 28. Jun-On N, Cholamjiak W. Enhanced double inertial forward–backward splitting algorithm for variational inclusion problems: applications in mathematical integrated skill prediction. Symmetry. 2024;16(8):1091.
  29. 29. Yajai W, Nabheerong P, Cholamjiak W. A double inertial Mann algorithm for equilibrium problems application to breast cancer screening. J Nonlinear Convex Anal. 2024;25(7):1697–716.
  30. 30. Rashid M, Kalsoom A, Albargi AH, Hussain A, Sundas H. Convergence result for solving the split fixed point problem with multiple output sets in nonlinear spaces. Mathematics 2024;12(12):1825.
  31. 31. Iqbal M, Ali A, Sulami H, Hussain A. Iterative stability analysis for generalized α-nonexpensive mappings with fixed points. Axioms. 2024;13(3):156.
  32. 32. Ali D, Hussain A, Karapinar E, Cholamjiak P. Efficient fixed-point iteration for generalized nonexpansive mappings and its stability in Banach spaces. Open Math 2022;20(1):1753–69.
  33. 33. Polyak BT. Some methods of speeding up the convergence of iteration methods. USSR Comput Math Math Phys 1964;4(5):1–17.
  34. 34. Sahu DR, Pitea A, Verma M. A new iteration technique for nonlinear operators as concerns convex programming and feasibility problems. Numer Algor 2019;83(2):421–49.
  35. 35. Sahu DR, Pitea A, Sharma S, Singh AK. Applications of a variable anchoring iterative method to equation and inclusion problems on Hadamard manifolds. Commun Nonlinear Sci Numer Simul. 2024;138:108192.
  36. 36. Kazmi K, Ali R, Furkan M. Hybrid iterative method for split monotone variational inclusion problem and hierarchical fixed point problem for a finite family of nonexpansive mappings. Numer Alg. 2018;79:499–527.
  37. 37. Bauschke HH, Combettes PL. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. New York: Springer, 2011.
  38. 38. Goebel K, Kirk WA. Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge: Cambridge University Press, 1990.
  39. 39. Eslamian M. Strong convergence theorem for common zero points of inverse strongly monotone mappings and common fixed points of generalized demimetric mappings. Optimization 2021;71(14):4265–87.
  40. 40. Kawasaki T, Takahashi W. A strong convergence theorem for countable families of non-linear nonself mappings in Hilbert spaces and applications. J Nonlinear Convex Anal. 2018;19:543–60.
  41. 41. Song Y. Iterative methods for fixed point problems and generalized split feasibility problems in Banach spaces. J Nonlinear Sci Appl 2018;11(2):198–217.
  42. 42. Eslamian M, Kamandi A. A novel algorithm for approximating common solution of a system of monotone inclusion problems and common fixed point problem. J Ind Manag Optim. 2023;19:868–89.
  43. 43. Tan B, Cho S. Strong convergence of inertial forward–backward methods for solving monotone inclusions. Appl Anal. 2022;101:5386–414.
  44. 44. Kamimura S, Takahashi W. Approximating solutions of maximal monotone operators in Hilbert spaces. J Approx Theory 2000;106(2):226–40.
  45. 45. Minty G. On the monotonicity of the gradient of a convex function. Pacific J Math 1964;14(1):243–7.
  46. 46. Le BK. New efficient approach in finding a zero of a maximal monotone operator. 2020.
  47. 47. Censor Y, Elfving T. A multiprojection algorithm using Bregman projections in a product space. Numer Alg. 1994;8(2):221–39.