Convergence analysis of Suzuki’s generalized nonexpansive mappings using the Picard–Abbas iteration process

Abstract

This manuscript investigates the convergence behavior of Suzuki’s generalized nonexpansive mappings using the recently introduced Picard–Abbas iteration process. We establish both weak and strong convergence results for the associated fixed-point approximations. To demonstrate the effectiveness of our approach, a numerical example is provided. Furthermore, we generate polynomiographs based on the proposed iteration process and compare them with those produced by existing methods, highlighting the advantages and visual insights offered by our scheme.

1 Introduction

Fixed point theory is a versatile and powerful mathematical tool that plays a crucial role in various scientific and engineering disciplines. It is particularly effective for addressing complex nonlinear problems, where conventional analytical methods often prove inefficient or infeasible. The theory has broad applications, including population dynamics in biology [1], market equilibrium models in economics [2], stable strategy profiles in game theory [3], chemical equilibrium analysis in chemistry [4], stability analysis in engineering [4], and algorithm development in artificial intelligence [5]. By leveraging fixed point results, researchers can obtain optimal solutions while minimizing computational costs.

Given the complexity of these applications, standard analytical techniques are often either computationally expensive or incapable of providing exact solutions. Fixed point theory offers a powerful alternative by proving the existence of solutions and furnishing constructive methods to approximate them. A fundamental result in this field is Banach’s Contraction Principle (BCP) [6], which asserts that any contraction operator on a closed subset of a Banach space has a unique fixed point. Moreover, this fixed point can be effectively approximated using the Picard iteration method. This result forms a cornerstone for establishing the existence and approximation of solutions in a wide range of applied problems.
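As a quick illustration (not taken from the paper), the Python sketch below runs the Picard iteration for the contraction $T(u) = \cos(u)$ on $[0, 1]$; the mapping, tolerance, and iteration cap are illustrative choices.

```python
import math

def picard(T, u0, tol=1e-10, max_iter=1000):
    """Picard iteration u_{n+1} = T(u_n); stop when successive iterates differ by less than tol."""
    u = u0
    for n in range(1, max_iter + 1):
        u_next = T(u)
        if abs(u_next - u) < tol:
            return u_next, n          # approximate fixed point and number of iterations used
        u = u_next
    return u, max_iter

# T(u) = cos(u) is a contraction on [0, 1] (its derivative is bounded by sin(1) < 1),
# so the Banach Contraction Principle guarantees convergence to the unique fixed point.
fixed_point, iterations = picard(math.cos, 0.5)
print(fixed_point, iterations)        # approximately 0.7390851...
```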

To formally define a contraction mapping, let $K$ be a nonempty subset of a Banach space $X$. A self-mapping $T : K \to K$ is said to be a contraction mapping if there exists a constant $\theta \in [0, 1)$ such that for all $u, v \in K$, the following inequality holds:

$\|Tu - Tv\| \le \theta \|u - v\|.$  (1)

When $\theta = 1$, the mapping $T$ is said to be nonexpansive. Furthermore, a point $k \in K$ is called a fixed point of $T$ if $Tk = k$. Throughout this paper, $F(T)$ will denote the set of all fixed points of $T$. The mapping $T$ is said to be quasi-nonexpansive if:

$\|Tu - k\| \le \|u - k\|$  (2)

for all $u \in K$ and $k \in F(T)$.

Over time, various generalizations of contraction mappings have been proposed. One such extension is the class of nonexpansive mappings, introduced independently by Browder [7], Göhde [8], and Kirk [9]. To establish fixed point results for nonexpansive mappings, certain structural conditions such as closedness, boundedness, and uniform convexity are typically required [10]. A significant advancement in this direction was made by Suzuki [11], who proposed a generalization termed condition (C), characterizing a class of mappings now referred to as Suzuki’s generalized nonexpansive mappings. A mapping $T : K \to K$ is said to satisfy condition (C) if, for all $u, v \in K$, the following holds:

$\tfrac{1}{2}\|u - Tu\| \le \|u - v\| \implies \|Tu - Tv\| \le \|u - v\|.$  (3)

Suzuki demonstrated that this class of mappings forms a broader category than the class of nonexpansive mappings, while every mapping satisfying condition (C) that possesses a fixed point is quasi-nonexpansive. Specifically, every nonexpansive mapping satisfies condition (C), but the converse does not necessarily hold. The following example illustrates this distinction.

Example 1.1 ([11]). Define a mapping by

(4)

In this example, satisfies condition (C) but is not a nonexpansive mapping.
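Condition (C) lends itself to a direct numerical check on a grid of sample points. The sketch below is purely illustrative: it uses a hypothetical discontinuous mapping on [0, 3] (not necessarily the mapping of Example 1.1) and tests the implication in (3) for every pair of grid points.

```python
import numpy as np

def T(u):
    # Hypothetical discontinuous mapping on [0, 3]; the discontinuity rules out nonexpansiveness.
    return 0.0 if u != 3.0 else 1.0

def satisfies_condition_C(T, points, eps=1e-12):
    """Test Suzuki's condition (C) on all pairs of sample points:
    (1/2)|u - Tu| <= |u - v|  must imply  |Tu - Tv| <= |u - v|."""
    for u in points:
        for v in points:
            if 0.5 * abs(u - T(u)) <= abs(u - v) and abs(T(u) - T(v)) > abs(u - v) + eps:
                return False, (u, v)
    return True, None

grid = np.linspace(0.0, 3.0, 301)
ok, witness = satisfies_condition_C(T, grid)
print("condition (C) holds on the grid" if ok else f"violated at {witness}")
```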

Determining the fixed points of various classes of nonlinear mappings is a mathematically challenging task. This challenge is compounded by the failure of Picard iteration to converge for nonexpansive mappings in a complete metric space and by the inapplicability of the Banach Contraction Principle to such mappings. Consequently, numerous iterative procedures have been developed to approximate fixed points of these mappings. These methods have been extensively studied in the literature, notably in the works of Mann [12], Ishikawa [13], Noor [14], Abbas and Nazir [15], Sahu et al. [16], Thakur et al. [17], and Eke and Akewe [18], among others.

Let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be sequences in (0,1), where $n \in \mathbb{N}$. The iteration scheme introduced by Noor [14] is recognized as the first three-step iteration process. This iteration process generates the sequence {un} as follows:

(5)

Abbas and Nazir proposed a faster iteration process than the Noor iteration, known as the Abbas iteration process [15], which generates the sequence {un} as follows:

(6)
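For concreteness, the sketch below implements the Noor [14] and Abbas [15] three-step updates as they are commonly stated in the literature; these update rules are our reading of those papers and may differ in arrangement from (5) and (6), with T standing for the operator and a, b, c for the parameters $\alpha_n$, $\beta_n$, $\gamma_n$.

```python
def noor_step(T, u, a, b, c):
    # One Noor three-step update, as commonly stated in the literature (cf. (5)):
    # w = (1-c)u + c T(u),  v = (1-b)u + b T(w),  u_next = (1-a)u + a T(v).
    w = (1 - c) * u + c * T(u)
    v = (1 - b) * u + b * T(w)
    return (1 - a) * u + a * T(v)

def abbas_step(T, u, a, b, c):
    # One Abbas-Nazir three-step update, as commonly stated in the literature (cf. (6)):
    # w = (1-c)u + c T(u),  v = (1-b)T(u) + b T(w),  u_next = (1-a)T(v) + a T(w).
    w = (1 - c) * u + c * T(u)
    v = (1 - b) * T(u) + b * T(w)
    return (1 - a) * T(v) + a * T(w)
```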

Thakur et al. [17] introduced the following iteration process for approximating the fixed point of nonexpansive mappings:

(7)

Sahu et al. [16] proposed a new three-step iteration process to approximate fixed points of nonexpansive mappings, generating the sequence {un} as follows:

(8)

Eke and Akewe proposed a four-step iteration process, called the Picard–Noor iteration, which generates the sequence {un} as follows [18]:

(9)

A recent contribution by Chyne and Kumar [19] introduced the Picard–Abbas iteration process and established both weak and strong convergence results for contraction mappings. The Picard–Abbas iteration process is defined as follows:

(10)

In recent years, Suzuki’s generalized nonexpansive mappings have attracted considerable attention across various mathematical disciplines, leading to significant progress in fixed-point theory (see [20–23]). These mappings are particularly valuable for the development and analysis of iterative methods due to their rich structural properties and nuanced convergence behavior.

In this work, we investigate the convergence properties of Suzuki’s generalized nonexpansive mappings using the Picard–Abbas iteration process. Our study not only extends existing results but also offers a comparative perspective by analyzing the performance of several established schemes, namely the Noor, Abbas, Thakur, Sahu, and Picard–Noor iteration processes. To complement our theoretical findings, we present a new numerical example and employ polynomiography—a modern digital visualization technique—to depict the convergence behavior of the various iteration processes. This visual approach enhances the interpretability of the results and facilitates a deeper understanding of their dynamics.

The structure of this paper is as follows. Sect 2 introduces key definitions and fundamental lemmas. In Sect 3, we establish fixed-point results for the proposed iteration process. In Sect 4, we provide a numerical example to demonstrate the effectiveness of the scheme. Sect 5 illustrates the iteration process using visualizations generated through polynomiography. Sect 6 concludes the paper with final remarks.

2 Preliminaries

The following basic results are key to proving our main result.

Definition 2.1 ([10]). A Banach space $X$ is said to be a uniformly convex Banach space (UCBS) if, for all $\epsilon \in (0, 2]$, there exists $\delta > 0$ such that

$\|u\| \le 1, \ \|v\| \le 1, \ \|u - v\| > \epsilon \implies \left\| \dfrac{u + v}{2} \right\| \le 1 - \delta.$  (11)

Definition 2.2. Let $K$ be a convex and closed subset of a Banach space $X$, and let {un} be a bounded sequence in $X$. For any $u \in X$, put $r(u, \{u_n\}) = \limsup_{n \to \infty} \|u_n - u\|$. The asymptotic radius of the sequence {un} with respect to $K$ is defined by

$r(K, \{u_n\}) = \inf\{r(u, \{u_n\}) : u \in K\},$

and the asymptotic center of {un} with respect to $K$ as

$A(K, \{u_n\}) = \{u \in K : r(u, \{u_n\}) = r(K, \{u_n\})\}.$

Definition 2.3 ([24]). A Banach space $X$ is said to have Opial’s property if, for every sequence {un} in $X$ that converges weakly to some $u \in X$ (i.e., $u_n \rightharpoonup u$), the following inequality holds:

$\liminf_{n \to \infty} \|u_n - u\| < \liminf_{n \to \infty} \|u_n - v\|$

for all $v \in X$ with $v \ne u$.

Proposition 2.4 ([11]). Let $K$ be a subset of a Banach space $X$ and let $T : K \to K$ be a mapping:

  (a) If $T$ is nonexpansive, then $T$ satisfies condition (C).
  (b) Any mapping that satisfies condition (C) and has a fixed point is quasi-nonexpansive.
  (c) If $T$ fulfills condition (C), then $\|u - Tv\| \le 3\|Tu - u\| + \|u - v\|$ for all $u, v \in K$.

Lemma 2.5 ([11]). Let $K$ be a subset of a Banach space $X$ equipped with Opial’s property. Let $T : K \to K$ be a mapping satisfying condition (C). If {un} converges weakly to k and $\lim_{n \to \infty} \|u_n - Tu_n\| = 0$, then $Tk = k$.

The concept of condition (I), originally introduced by Senter and Dotson [25], serves as an alternative approach for demonstrating the strong convergence of certain iterative processes in non-compact domains.

Definition 2.6. Let $K$ be a subset of a Banach space $X$ and let $T$ be a self-mapping defined on $K$. The mapping $T$ is said to satisfy condition (I) if there exists a non-decreasing function $g : [0, \infty) \to [0, \infty)$ with $g(0) = 0$ and g(u) > 0 for all u > 0, such that

$\|u - Tu\| \ge g(d(u, F(T))) \quad \text{for all } u \in K,$

where $d(u, F(T)) = \inf\{\|u - k\| : k \in F(T)\}$.

Lemma 2.7 ([11]). Let $K$ be a weakly compact convex subset of a UCBS $X$, and let $T$ be a self-map on $K$. If $T$ satisfies condition (C), then $T$ has a fixed point.

Lemma 2.8 ([26]). Suppose $X$ is a UCBS and $0 < a \le t_n \le b < 1$ for all $n \in \mathbb{N}$. Suppose {un} and $\{v_n\}$ are two sequences in $X$ satisfying $\limsup_{n \to \infty} \|u_n\| \le c$, $\limsup_{n \to \infty} \|v_n\| \le c$, and $\lim_{n \to \infty} \|t_n u_n + (1 - t_n) v_n\| = c$ for some $c \ge 0$. Then, $\lim_{n \to \infty} \|u_n - v_n\| = 0$.

3 Main Results

This section presents convergence results for mappings that satisfy condition (C), utilizing the Picard–Abbas iteration process.

Lemma 3.1. Let $K$ be a closed and convex subset of a UCBS $X$. Suppose that $T : K \to K$ is a mapping satisfying condition (C) with $F(T) \ne \emptyset$. Let {un} be the sequence generated by the Picard–Abbas iteration process (10). Then, for any $k \in F(T)$, the sequence {un} satisfies

Proof: Let $k \in F(T)$ and $n \in \mathbb{N}$. Since $T$ satisfies condition (C), by Proposition 2.4(b), $T$ is a quasi-nonexpansive mapping, i.e.,

Using (10), we get

(12)

And,

Using (12), we have

(13)

Also

Using (12) and (13), we obtain

(14)

Similarly,

By using (14), we get

(15)

It follows from (12)–(15) that

Hence, the sequence $\{\|u_n - k\|\}$ is both bounded and non-increasing. Thus, we can conclude that $\lim_{n \to \infty} \|u_n - k\|$ exists for each $k \in F(T)$. □

Next, we discuss the existence of a fixed point for mappings satisfying condition (C).

Theorem 3.2. Let $X$ be a UCBS, and let $K \subseteq X$ be a nonempty, closed and convex subset. Suppose that $T : K \to K$ is a mapping satisfying condition (C), and let {un} be the sequence generated by the Picard–Abbas iteration process (10). Then, $F(T)$ is nonempty if and only if the sequence {un} is bounded and $\lim_{n \to \infty} \|Tu_n - u_n\| = 0$.

Proof: Suppose that $F(T) \ne \emptyset$ and let $k \in F(T)$. By Lemma 3.1, the sequence {un} is bounded and the limit $\lim_{n \to \infty} \|u_n - k\|$ exists and is finite. Define

(16)

From Lemma 3.1, we get

Thus,

(17)

As $T$ satisfies condition (C), by Proposition 2.4(b), we get

Thus,

(18)

By Lemma 3.1, we get

(19)

Using (19) and (17), we have

(20)

From (17) and (20), we obtain that

(21)

From Lemma 3.1, one has

so

(22)

Using (21) and (22), we get

(23)

Using Lemma 2.8 with (16), (18) and (23), we have

Conversely, suppose that {un} is bounded and $\lim_{n \to \infty} \|Tu_n - u_n\| = 0$. Let $k \in A(K, \{u_n\})$. By Proposition 2.4(c), we obtain

which implies that $Tk \in A(K, \{u_n\})$. Since $X$ is a UCBS, $A(K, \{u_n\})$ is a singleton, which means $Tk = k$. Hence, $F(T) \ne \emptyset$. □

Next, we prove weak convergence using Opial’s property.

Theorem 3.3. Let $K$ be a closed and convex subset of a UCBS $X$. Let $T : K \to K$ be a mapping satisfying condition (C) with $F(T) \ne \emptyset$. Suppose that the space $X$ satisfies Opial’s condition. If {un} is the sequence generated by the Picard–Abbas iteration process (10), then {un} converges weakly to a fixed point of $T$.

Proof: By Theorem 3.2, the sequence {un} is bounded and satisfies $\lim_{n \to \infty} \|Tu_n - u_n\| = 0$. Since $X$ is a uniformly convex Banach space, it is reflexive. Thus, there exists a subsequence of {un} that converges weakly to some $x_1 \in K$. By Lemma 2.5, it follows that $x_1 \in F(T)$.

To show that {un} converges weakly to x1, assume, for the sake of contradiction, that it does not. Then there exists another subsequence of {un} that converges weakly to some $x_2$ with $x_2 \ne x_1$. Again, by Lemma 2.5, we have $x_2 \in F(T)$.

Now, by applying Opial’s condition together with Lemma 3.1, we obtain

This contradicts our supposition, so $x_1 = x_2$. Thus, {un} converges weakly to a point in $F(T)$. □

Now, we use the concept of compactness to prove strong convergence.

Theorem 3.4. Let $T$ be a mapping satisfying condition (C) defined on a nonempty, closed, and compact subset $K$ of a uniformly convex Banach space $X$. Let {un} be a sequence generated by (10). Then, {un} converges strongly to a fixed point of $T$.

Proof: By using Lemma 2.7, we obtain $F(T) \ne \emptyset$. Since $F(T) \ne \emptyset$, it follows from Theorem 3.2 that $\lim_{n \to \infty} \|Tu_n - u_n\| = 0$. As $K$ is compact and closed, there exists a subsequence $\{u_{n_j}\}$ of {un} in $K$ that converges strongly to some $k \in K$, i.e., $u_{n_j} \to k$. Hence, using these facts together with Proposition 2.4(c), we obtain

(24)

Letting , we obtain . This implies which means that . Moreover, Lemma 3.1 implies that the limit exists. Hence, k is the strong limit of the sequence {un}. □

The following theorem proves strong convergence without requiring compactness.

Theorem 3.5. Let $X$ be a UCBS, and let $K$ be a nonempty, closed, and convex subset of $X$. Suppose that $T : K \to K$ is a mapping satisfying condition (C), and let {un} be a sequence generated by (10). Then, {un} converges to a point in $F(T)$ if and only if $\liminf_{n \to \infty} d(u_n, F(T)) = 0$, where $d(u, F(T)) = \inf\{\|u - k\| : k \in F(T)\}$.

Proof: Suppose that the sequence {un} converges to some . Then, by the definition of convergence,

it follows that

Conversely, assume that . From Lemma 3.1, the limit exists, which gives

and this provides

(25)

Therefore, $\{d(u_n, F(T))\}$ constitutes a decreasing sequence that is bounded below by zero, so $\lim_{n \to \infty} d(u_n, F(T))$ exists. Since $\liminf_{n \to \infty} d(u_n, F(T)) = 0$, we get $\lim_{n \to \infty} d(u_n, F(T)) = 0$. We now show that {un} is a Cauchy sequence in $K$.

Since , for any , there exists an integer such that for all ,

In particular,

Thus, we can choose some such that

(26)

For any , applying the triangle inequality, we obtain

Since both terms on the right-hand side are bounded by , it follows that

which implies that {un} is a Cauchy sequence in .

Since is a closed subset of the Banach space , {un} converges in . Consider for any . Applying , one obtains

Thus, , hence . □

We now use condition (I) to prove the strong convergence of the Picard–Abbas iteration process. This condition imposes an additional constraint that strengthens convergence, especially when generalized nonexpansiveness alone is not enough. It ensures norm convergence by linking the residual between each iterate and its image under the mapping to the iterate’s distance from the fixed-point set.

Theorem 3.6. Let $K$ be a closed and convex subset of a UCBS $X$. Suppose that $T : K \to K$ is a mapping satisfying condition (C) together with condition (I) and $F(T) \ne \emptyset$, and let {un} be a sequence generated by (10). Then, the sequence {un} converges strongly to a fixed point of $T$.

Proof: From (25), $\lim_{n \to \infty} d(u_n, F(T))$ exists, and by Theorem 3.2, we obtain

(27)

From condition (I) and (27), we have

Therefore, $\lim_{n \to \infty} g(d(u_n, F(T))) = 0$. Since g is a non-decreasing function with $g(0) = 0$ and g(u) > 0 for each u > 0, we have $\lim_{n \to \infty} d(u_n, F(T)) = 0$.

Hence, all conditions of Theorem 3.5 are satisfied; therefore, {un} converges strongly to a fixed point of $T$. □

4 Numerical example

This section introduces a novel numerical example to demonstrate the convergence properties of mappings satisfying condition (C), as analyzed through the Picard–Abbas iteration process.

Example 4.1. Define such that

(28)

First, we show that the given mapping is not a nonexpansive mapping. For and , we obtain

Now,

We can notice that

Therefore, the mapping given in (28) is not a nonexpansive mapping.

Now, we prove that given in (28) satisfies condition (C).

  1. When , then , and
    For , we must have . Thus,
    So, and implies that
    Now,
    Thus,
    which implies that satisfies condition (C).
  2. When , then , and
    For , we must have . Now, we have two cases:
    (i) x > p. In this case, we have . Thus,
      which implies that .
      Therefore, . So,
      which means that implies ; hence satisfies condition (C).
    (ii) p > x. In this case, we have . Thus,
      which implies that . As , we get
      Now, let and . The case has already been discussed in (i); working now with and , we have , . Thus,
      We first suppose that and ; then
      which implies . Now, assume that and ; then
      which shows that
      Thus, , which shows that satisfies condition (C).

Hence, it is established that is a Suzuki generalized nonexpansive mapping.

To illustrate the faster convergence of the proposed Picard–Abbas iteration process (10), we compare it against the Noor, Abbas, Thakur, Sahu, and Picard–Noor iteration methods. The selected parameters are , , and , with the stopping criterion defined as and the initial point u0 = 0.1. The corresponding results are presented in Fig 1 and Table 1.
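As an illustration of how the comparison reported in Table 1 can be organized, the following hypothetical harness counts the iterations a given update rule needs to approach the fixed point 1 from u0 = 0.1; the constant parameter triple, the tolerance, and the stopping test |u_n - 1| < tol are illustrative stand-ins rather than the exact settings used in the experiment.

```python
def run_scheme(step, T, u0, fixed_point, tol=1e-8, max_iter=200, params=(0.5, 0.5, 0.5)):
    """Repeatedly apply a one-step update `step(T, u, a, b, c)` and count the iterations
    needed until |u_n - fixed_point| < tol.  The constant parameter triple and the
    tolerance are illustrative stand-ins for the settings behind Table 1."""
    u, (a, b, c) = u0, params
    for n in range(1, max_iter + 1):
        u = step(T, u, a, b, c)
        if abs(u - fixed_point) < tol:
            return n, u
    return max_iter, u

# Hypothetical usage with the step functions sketched earlier (noor_step, abbas_step)
# and a mapping T whose fixed point is 1, as in Example 4.1:
#   for name, step in [("Noor", noor_step), ("Abbas", abbas_step)]:
#       print(name, run_scheme(step, T, u0=0.1, fixed_point=1.0))
```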

Fig 1. Convergence behavior of Picard–Abbas, Noor, Abbas, Thakur, Sahu and Picard–Noor iteration processes corresponding to Table 1.

https://doi.org/10.1371/journal.pone.0334440.g001

Table 1. Iterates produced by various iteration processes for the mapping given in (28) and the starting point u0 = 0.1.

https://doi.org/10.1371/journal.pone.0334440.t001

The findings indicate that, after the first iteration, the value obtained by the Picard–Abbas iteration process (0.98716480) is the closest to the fixed point 1 among all compared methods. As shown in Table 1, each iteration method converges at a different rate. The proposed Picard–Abbas iteration is the fastest, reaching the fixed point in 5 iterations. The Sahu and Picard–Noor iteration methods require 6 iterations to converge. The Thakur and Abbas processes exhibit similar convergence behavior. In contrast, the Noor iteration method shows the slowest convergence, taking 15 iterations to reach the fixed point.

5 Comparison via polynomiography

Mathematician and computer scientist Bahman Kalantari introduced polynomiography, a digital art form and visual analytic technique for exploring root-finding problems [27,28]. Although related concepts, such as basins of attraction, dynamical planes, and speed of convergence, had appeared earlier in the literature, Kalantari was the first to consolidate these ideas under a unified framework. He defined polynomiography as the art and science of visualizing the approximation of the zeros of complex polynomials through iterative functions, referring to the resulting images as polynomiographs. Various types of iteration processes have since been compared and analyzed using polynomiographic techniques (see [29–33]).

The general procedure for generating polynomiographs is outlined in Algorithm 1. Color assignment within this algorithm can follow various approaches; in this study, we adopt a method that integrates basins of attraction with convergence speed [34]. Each root of the polynomial is assigned a distinct non-black color, while points that do not converge are marked in black. For each initial point u0 in the region A, the iterative method I is applied for up to K iterations. If convergence occurs in fewer than K steps, we determine the root closest to the resulting point un and assign its corresponding color to u0. The brightness of the color reflects the speed of convergence: lighter shades indicate faster convergence, while darker shades represent slower convergence. If no convergence is achieved within K iterations, u0 is colored black. This scheme effectively visualizes both the destination of convergence (via color) and the convergence rate (via shading), providing intuitive insights into the behavior of the iterative process.

Algorithm 1. Creation of a polynomiograph.
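One possible realization of the colour-assignment rule in Algorithm 1 is sketched below, assuming each pixel carries the index of its nearest root (or -1 for non-convergence) and the number of iterations used, as in the generation sketch later in this section; the particular base colours and the linear brightness scaling are illustrative choices, whereas the paper follows the scheme of [34].

```python
import numpy as np

# One base colour (RGB in [0, 1]) per root; black is reserved for non-convergent points.
base_colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                        [1, 1, 0], [0, 1, 1], [1, 0, 1]], dtype=float)

def pixel_color(root_index, iterations, K=45):
    """Map (index of the nearest root, iterations used) to an RGB pixel: points that did
    not converge (root_index = -1) are black; convergent points get the root's colour,
    scaled so that fewer iterations give a lighter shade."""
    if root_index < 0:
        return np.zeros(3)                    # no convergence within K steps -> black
    brightness = 1.0 - (iterations - 1) / K   # k = 1 -> brightest, k = K -> darkest
    return base_colors[root_index] * brightness
```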

One well-known root-finding algorithm is Newton’s iteration method, also known as the Newton–Raphson method. It is defined by

$u_{n+1} = u_n - \dfrac{p(u_n)}{p'(u_n)}, \qquad n = 0, 1, 2, \ldots,$  (29)

where $u_0$ is the starting point and $p$ is a polynomial with complex coefficients. We can write (29) in terms of a fixed point iteration process as follows:

$u_{n+1} = T(u_n),$  (30)

where $T(u) = u - \dfrac{p(u)}{p'(u)}$. Thus, this is the Picard iteration. If the iteration process (30) converges to some fixed point $x$ of $T$, then one has

$x = T(x) = x - \dfrac{p(x)}{p'(x)}.$  (31)

Thus, $p(x) = 0$, which means that $x$ is a root of $p$. Finding the fixed points of $T$ is therefore equivalent to finding the roots of $p$. This enables us to use various fixed point iteration processes for $T$, such as the suggested Picard–Abbas iteration.
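A minimal Python sketch of (29)–(31): Newton’s update rewritten as a Picard-type fixed point iteration for a complex polynomial. The test polynomial $z^6 - 1$, whose roots are the sixth roots of unity listed below, and the starting point are assumed for illustration.

```python
def p(z):
    return z**6 - 1            # assumed test polynomial; its roots are the sixth roots of unity

def dp(z):
    return 6 * z**5            # derivative p'(z)

def newton_map(z):
    # T(z) = z - p(z)/p'(z): by (30)-(31), fixed points of T are exactly the roots of p.
    return z - p(z) / dp(z)

z = complex(0.4, 1.1)          # arbitrary starting point u0
for _ in range(30):            # plain Picard iteration u_{n+1} = T(u_n)
    z = newton_map(z)
print(z)                       # approaches one of the sixth roots of unity
```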

In the considered example, we use three sets of iteration parameters:

  • $\alpha_n = 0.06$, $\beta_n = 0.06$, $\gamma_n = 0.06$;
  • $\alpha_n = 0.6$, $\beta_n = 0.6$, $\gamma_n = 0.6$;
  • $\alpha_n = 0.9$, $\beta_n = 0.9$, $\gamma_n = 0.9$.

For each of the three sets of iteration parameters, we generated polynomiographs of the polynomial $u^6 - 1$, which has six roots: −1.0, −0.5−0.866025i, −0.5+0.866025i, 0.5−0.866025i, 0.5+0.866025i, and 1.0. The iteration schemes used include the Picard–Abbas, Sahu, Abbas, Thakur, Noor, and Picard–Noor methods. The parameters for polynomiograph generation were: region A = $[-2, 2]^2$, maximum number of iterations K = 45, and . Additionally, for each polynomiograph, we computed the Average Number of Iterations (ANI) as proposed in [35].
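The sketch below outlines the polynomiograph generation for the plain Picard (Newton) iteration: starting points are taken from A = [-2, 2]^2, at most K = 45 iterations are performed, each convergent point records the index of its nearest root (with the iteration count available for brightness shading), and ANI is computed as the mean iteration count over the region. The convergence tolerance, grid resolution, and the counting of non-convergent points as K iterations are illustrative assumptions; substituting another update map (for example, a Picard–Abbas step) in place of newton_map would produce the corresponding polynomiograph.

```python
import numpy as np

# Roots of z^6 - 1: the six sixth roots of unity.
roots = np.array([np.exp(2j * np.pi * k / 6) for k in range(6)])

def newton_map(z):
    # Picard/Newton update T(z) = z - p(z)/p'(z) for p(z) = z^6 - 1.
    return z - (z**6 - 1) / (6 * z**5)

def polynomiograph(step=newton_map, n=400, K=45, tol=1e-3):
    """For every starting point in A = [-2, 2]^2 return the index of the root it converges
    to (-1 = no convergence, i.e., a black pixel) and the number of iterations used."""
    xs = np.linspace(-2.0, 2.0, n)
    root_idx = -np.ones((n, n), dtype=int)
    iters = np.full((n, n), K, dtype=int)
    for i, y in enumerate(xs):
        for j, x in enumerate(xs):
            z = complex(x, y)
            for k in range(1, K + 1):
                try:
                    z_new = step(z)
                except ZeroDivisionError:      # e.g., the critical point z = 0
                    break
                if abs(z_new - z) < tol:       # convergence test on successive iterates
                    root_idx[i, j] = int(np.argmin(np.abs(roots - z_new)))
                    iters[i, j] = k            # smaller k -> lighter shade in the image
                    break
                z = z_new
            # non-convergent points keep root_idx = -1 and iters = K
    return root_idx, iters

idx, iters = polynomiograph(n=100)             # coarse grid for a quick run
print("ANI:", iters.mean())                    # Average Number of Iterations over A
```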

The polynomiographs generated for the first set of parameter values are shown in Fig 2. Distinct convergence patterns are observed for the Picard–Abbas, Sahu, Abbas, Thakur, Noor, and Picard–Noor iteration processes. Visual inspection indicates that the proposed Picard–Abbas iteration exhibits the fastest convergence, followed by Abbas, Picard–Noor, Sahu, Thakur, and Noor. Notably, for the Noor iteration, no points within the considered region converged to any root, resulting in a completely black polynomiograph. ANI values in Table 2 corroborate these findings: Picard–Abbas (4.156), Abbas (6.110), Picard–Noor (8.392), Sahu (8.609), and Thakur (10.786).

Fig 2. Comparison of polynomiographs obtained from different iteration processes with parameters $\alpha_n = 0.06$, $\beta_n = 0.06$, $\gamma_n = 0.06$.

https://doi.org/10.1371/journal.pone.0334440.g002

Fig 3. Comparison of polynomiographs obtained from different iteration processes with parameters $\alpha_n = 0.6$, $\beta_n = 0.6$, $\gamma_n = 0.6$.

https://doi.org/10.1371/journal.pone.0334440.g003

Fig 4. Comparison of polynomiographs obtained from different iteration processes with parameters $\alpha_n = 0.9$, $\beta_n = 0.9$, and $\gamma_n = 0.9$.

https://doi.org/10.1371/journal.pone.0334440.g004

The polynomiographs for the parameter settings $\alpha_n = 0.6$, $\beta_n = 0.6$, and $\gamma_n = 0.6$ are shown in Fig 3. The results show that the Noor iteration exhibits the slowest convergence speed, with the highest ANI value of 12.068. Among the iterations studied, the Picard–Abbas method achieves the fastest convergence, yielding the lowest ANI value of 3.746. In terms of convergence speed, the Picard–Noor iteration ranks second with an ANI of 4.778, followed by the Sahu (5.216), Abbas (5.390), and Thakur (6.078) iterations.

The third configuration employs high values for the iteration parameters. Similar to the previous cases, the Noor iteration exhibits the slowest convergence, as shown in Fig 4. In contrast, the Picard–Abbas iteration once again achieves the fastest convergence. Interestingly, the high-parameter setting leads to faster convergence across all methods, requiring fewer iterations to reach the polynomial’s roots. The ANI values corresponding to this configuration are presented in Table 2. The Picard–Noor iteration yields the lowest ANI value of 3.720, followed closely by the Picard–Abbas iteration with an ANI of 3.817.

6 Conclusion

Our analysis of Suzuki mappings using the Picard–Abbas iteration process demonstrates its enhanced convergence performance. The numerical results in Table 1 confirm its efficiency relative to several established methods, including those of Noor, Sahu, Thakur, Abbas, and Picard–Noor. Furthermore, visualizations generated through polynomiography provide additional insight into the convergence behavior, highlighting the iteration’s faster convergence rate. Collectively, these findings suggest that the Picard–Abbas process is a robust and effective tool for solving fixed-point problems, with promising potential for broader applications in mathematical and computational contexts.

References

  1. Amar AB, Jeribi A, Mnif M. Some fixed point theorems and application to biological model. Numerical Functional Analysis and Optimization. 2008;29(1–2):1–23.
  2. Danet RM, Popescu MV. Some applications of the fixed-point theory in economics. Creat Math Inform. 2008;17(3):392–8.
  3. Yuan A. Fixed point theorems and applications to game theory. University of Chicago; 2017.
  4. Constanda C, Ahues M, Largillier A. Integral methods in science and engineering: analytic and numerical techniques. Boston: Birkhauser; 2011.
  5. Bertsekas DP. Nonlinear programming. 3rd ed. Belmont: Athena Scientific; 2016.
  6. Caccioppoli R. Un teorema generale sull’esistenza di elementi uniti in una transformazione funzionale. Rendiconti dell’Academia Nazionale dei Lincei. 1930;11:794–9.
  7. Browder FE. Nonexpansive nonlinear operators in a Banach space. Proc Natl Acad Sci U S A. 1965;54(4):1041–4. pmid:16578619
  8. Göhde D. Zum Prinzip der kontraktiven Abbildung. Mathematische Nachrichten. 1965;30(3–4):251–8.
  9. Kirk WA. A fixed point theorem for mappings which do not increase distances. The American Mathematical Monthly. 1965;72(9):1004.
  10. Goebel K, Kirk WA. Topics in metric fixed point theory. Cambridge: Cambridge University Press; 1990.
  11. Suzuki T. Fixed point theorems and convergence theorems for some generalized nonexpansive mappings. Journal of Mathematical Analysis and Applications. 2008;340(2):1088–95.
  12. Mann WR. Mean value methods in iteration. Proc Amer Math Soc. 1953;4(3):506–10.
  13. Ishikawa S. Fixed points by a new iteration method. Proc Amer Math Soc. 1974;44(1):147–50.
  14. Noor MA. New approximation schemes for general variational inequalities. Journal of Mathematical Analysis and Applications. 2000;251(1):217–29.
  15. Abbas M, Nazir T. A new faster iteration process applied to constrained minimization and feasibility problems. Mat Vesn. 2014;66(2):223–34.
  16. Sahu VK, Pathak HK, Tiwari R. Convergence theorems for new iteration scheme and comparison results. Aligarh Bull Math. 2016;35(1–2):19–42.
  17. Thakur D, Thakur BS, Postolache M. New iteration scheme for numerical reckoning fixed points of nonexpansive mappings. J Inequal Appl. 2014;2014(1).
  18. Stella Eke K, Akewe H. Equivalence of Picard-type hybrid iterative algorithms for contractive mappings. Asian J of Scientific Research. 2019;12(3):298–307.
  19. Chyne M, Kumar N. Convergence analysis of Picard-Abbas hybrid iterative process. Adv Fixed Point Theory. 2024.
  20. Abkar A, Eslamian M. Fixed point theorems for Suzuki generalized nonexpansive multivalued mappings in Banach spaces. Fixed Point Theory Appl. 2010;2010(1).
  21. Ahmad J, Ullah K, Arshad M, Ma Z. A new iterative method for Suzuki mappings in Banach spaces. Journal of Mathematics. 2021;2021:1–7.
  22. Ali J, Ali F, Kumar P. Approximation of fixed points for Suzuki’s generalized non-expansive mappings. Mathematics. 2019;7(6):522.
  23. Zuo Z, Cui Y. Iterative approximations for generalized multivalued mappings in Banach spaces. Thai J Math. 2011;9(2):333–42.
  24. Opial Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull Amer Math Soc. 1967;73(4):591–7.
  25. Senter HF, Dotson WG. Approximating fixed points of nonexpansive mappings. Proc Amer Math Soc. 1974;44(2):375–80.
  26. Schu J. Weak and strong convergence to fixed points of asymptotically nonexpansive mappings. Bull Austral Math Soc. 1991;43(1):153–9.
  27. Kalantari B. Polynomiography: from the fundamental theorem of algebra to art. Leonardo. 2005;38(3):233–8.
  28. Kalantari B. Polynomial root-finding and polynomiography. Singapore: World Scientific; 2009.
  29. Nawaz B, Ullah K, Gdawiec K. Convergence analysis of a Picard–CR iteration process for nonexpansive mappings. Soft Comput. 2025;29(2):435–55.
  30. Nawaz B, Ullah K, Gdawiec K. Convergence analysis of Picard–SP iteration process for generalized α–nonexpansive mappings. Numer Algor. 2024;98(4):1943–64.
  31. Panigrahy K, Mishra D. A note on a faster fixed point iterative method. J Anal. 2022;31(1):831–54.
  32. Yu T-M, Shahid AA, Shabbir K, Shah NA, Li Y-M. An iteration process for a general class of contractive-like operators: Convergence, stability and polynomiography. AIMS Mathematics. 2021;6(7):6699–714.
  33. Usurelu GI, Postolache M. Algorithm for generalized hybrid operators with numerical analysis and applications. J Nonlinear Var Anal. 2022;6(3):255–77.
  34. Andreev F, Kalantari B, Kalantari I. Measuring the average performance of root-finding algorithms and imaging it through polynomiography. In: Proceedings of 17th IMACS World Congress, Scientific Computation, Applied Mathematics and Simulation, Paris, 2005.
  35. Naseem A, Argyros IK, Qureshi S, Aziz ur Rehman M, Soomro A, Gdawiec K, et al. Memory based approaches to one-dimensional nonlinear models. Acta Appl Math. 2024;195(1):1.