Abstract
The Newton method is a classical method for solving systems of nonlinear equations and offers quadratic convergence. The order of convergence of the Newton method is optimal, as it requires one evaluation of the system of nonlinear equations and a second evaluation for the Jacobian. Many boundary value problems in nature have quadratic nonlinearity, and the system of nonlinear equations associated with their discrete formulation has a constant 2nd-order Fréchet derivative. We exploit this property and develop a single-point iterative method to solve such systems of nonlinear equations with quadratic nonlinearity. In our proposed single-point iterative method, we perform one evaluation of the system of nonlinear equations and one of the Jacobian. In total, there are two functional evaluations; we do not count the evaluation of the 2nd-order Fréchet derivative, as it is constant across all iterations of the method. The convergence order (CO) of our proposed method is four. The efficiency index of our method is 4^(1/2) = 2, which is higher than that of the Newton method, 2^(1/2) ≈ 1.4142. To quantify the functionality of our proposed algorithm, we have performed extensive numerical testing on a collection of test problems with quadratic nonlinearity.
Citation: Kouser S, Ur Rehman S, Elmasry Y, Azeem Khan W, Ahmad F, Khan H (2025) Towards efficient solutions: A novel approach to quadratic nonlinearity in boundary value problems. PLoS One 20(5): e0317752. https://doi.org/10.1371/journal.pone.0317752
Editor: B. Omkar Lakshmi Jagan, Vignan’s Institute of Information Technology, INDIA
Received: May 3, 2024; Accepted: January 5, 2025; Published: May 23, 2025
Copyright: © 2025 Kouser et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Maple and Matlab codes of our proposed method are provided in the appendix within the manuscript.
Funding: This study was funded by the Deanship of Research and Graduate Studies at King Khalid University through Large Research Project RGP.2/588/45 (Awarded to YE).
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
The closed-form solution of a nonlinear equation is not always available, and numerical iterative schemes provide an alternative way to approximate its roots. The quadratic formula for the roots of a quadratic equation is an example of a direct (exact) method. When dealing with a system of nonlinear equations (SNLEs), however, the situation becomes more complicated, and exact formulae are rarely available. The practical way to find the numerical solution of such SNLEs is through numerical iterative methods. These methods differ from exact numerical or symbolic methods in several ways: iterative methods always require an initial guess to start the iterative procedure, whereas exact methods do not; the total number of binary operations can be counted exactly for a direct method, while for iterative methods we can only estimate the computational cost; and iterative methods need a stopping criterion.
Symbolic exact methods may also suffer from numerical instabilities when numerical values are substituted for symbols; stable numerical algorithms avoid such instabilities.
The bisection method, the regula-falsi method, the Newton method [1], and the secant method are classical methods for the numerical approximation of roots of nonlinear equations. The bisection and regula-falsi methods are bracketing (closed) methods because they require knowledge of an interval containing the root. On the other hand, the Newton and secant methods are open methods because they only need initial guesses to start the iterative process. Generalising bisection and regula-falsi to SNLEs is hard and only possible in a limited sense. The open methods, for instance the Newton and secant methods, are excellent candidates for multidimensional generalisation.
The root(s) of a nonlinear equation or SNLEs can be classified into two categories, namely, simple roots and roots with multiplicity. Let x = [x_1, x_2, ⋯, x_n]^T be a multi-dimensional vector and F(x) = [f_1(x), f_2(x), ⋯, f_n(x)]^T = 0, where F : R^n → R^n represents the SNLEs. We also assume that F is sufficiently smooth, that is, all of its Fréchet derivatives exist up to some suitable order. If the limit [2]

lim_{‖h‖ → 0} ‖F(x + h) − F(x) − L(x) h‖ / ‖h‖ = 0

exists, then the first-order Fréchet derivative is F′(x) = L(x). The higher-order derivatives are computed by following the recursion F^(k)(x) = (F^(k−1))′(x), where the increment vectors h_1, h_2, ⋯, h_k in the multilinear form F^(k)(x) h_1 h_2 ⋯ h_k are not functions of x. It is worth mentioning that a Fréchet-differentiable function F could be linearized in the neighborhood of a given point x if F′(x) exists. From the linearization of F, we can derive the Newton method. Before proceeding, we define the various types of roots. A vector x* is called a root of F if F(x*) = 0. The root x* is simple if det(F′(x*)) ≠ 0, that is, if the Jacobian of F at x* is non-singular. Otherwise, the root x* is not a simple root. In this study, we assume that the root x* is a simple root in all cases. Let x_n be the nth approximation of x*; we want to get a new approximation because it should reduce the norm of the SNLEs. If x_n is an approximation of x*, then F(x_n) ≠ 0 in general, and we are looking for x_{n+1} = x_n + h, such that F(x_{n+1}) ≈ 0,
where h is a correction vector. As discussed earlier, Fréchet differentiability helps in the linear approximation of the SNLEs, so we have

F(x_n + h) ≈ F(x_n) + F′(x_n) h = 0.

This yields an approximation h = −F′(x_n)^{−1} F(x_n) of the correction vector and a new approximation x_{n+1} = x_n + h of x*. It could also be written as

x_{n+1} = x_n − F′(x_n)^{−1} F(x_n).

The above formula is called the Newton method. In practice, we never compute the inverse of the Jacobian matrix; instead, we solve systems of linear equations. The practical way to write the Newton method is

F′(x_n) φ = F(x_n),   x_{n+1} = x_n − φ.

It is worth noting that the Newton method has two functional evaluations, F(x_n) and F′(x_n), as well as a single system of linear equations to solve in each iteration. The number m is the CO of a numerical iterative method without memory if

lim_{n → ∞} ‖e_{n+1}‖ / ‖e_n‖^m = C,  0 < C < ∞,

where e_n = x_n − x* denotes the error at the nth step. Since x* is unknown to us, it is very difficult to get the exact value of m by using the above definition. In practice, we computationally approximate the order of convergence as

COC ≈ log(‖e_{n+1}‖ / ‖e_n‖) / log(‖e_n‖ / ‖e_{n−1}‖).
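As an illustration of the Newton iteration and the COC estimate above, the following minimal Python sketch (the article's own codes are in Maple and Matlab) applies the Newton method to the circle-hyperbola system used later in Sect 3.1, with the initial guess [1.0, 0.4] taken from the paper:

```python
import math

def F(x, y):
    # Circle-hyperbola system with quadratic nonlinearity
    return [x**2 + y**2 - 2.0, x**2 - y**2 - 1.0]

def J(x, y):
    # Jacobian of F
    return [[2.0*x, 2.0*y], [2.0*x, -2.0*y]]

def solve2(A, b):
    # Solve a 2x2 linear system by Cramer's rule
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(b[0]*A[1][1] - b[1]*A[0][1]) / det,
            (A[0][0]*b[1] - A[1][0]*b[0]) / det]

root = (math.sqrt(1.5), math.sqrt(0.5))   # known simple root
x, y = 1.0, 0.4                           # initial guess from the paper
errors = []
for n in range(6):
    errors.append(math.hypot(x - root[0], y - root[1]))
    h = solve2(J(x, y), F(x, y))
    x, y = x - h[0], y - h[1]             # Newton step: x_{n+1} = x_n - J^{-1} F

# Computational order of convergence from three consecutive errors
coc = math.log(errors[3] / errors[2]) / math.log(errors[2] / errors[1])
print(round(coc, 1))   # COC should be close to 2
```

The sketch solves a 2x2 system directly instead of using LU factors; for large systems, one would factor the Jacobian once per iteration, as the paper describes.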
Numerical iterative methods are classified into different classes. We call an iterative method a multi-point method [4, 5, 8–12] if the function and derivative evaluations are performed at different points in a single iteration. If all the functional evaluations are concentrated at a single point, then such a numerical method is called a single-point iterative method. Usually, higher-order methods are multi-point and lower-order methods are single-point iterative methods. Multi-point methods are more efficient than single-point iterative methods when the CO is high. Another classification is single-step versus multi-step methods [3, 10–12]. The computation of the Jacobian can be expensive. To avoid recomputing the Jacobian and its LU factors, it is desirable to freeze the Jacobian within a single iteration and develop a strategy to attain a high CO. A Newton-type multi-step and multi-point iterative method [3] could be formulated as

y_1 = x_n − F′(x_n)^{−1} F(x_n),
y_j = y_{j−1} − F′(x_n)^{−1} F(y_{j−1}),  j = 2, 3, ⋯, s,
x_{n+1} = y_s.

One can observe that the Jacobian is frozen and only function evaluations are performed in the inner loop. There is a single Jacobian evaluation and s function evaluations. The CO of this method is s + 1: in the first step, we have the Newton method, which has quadratic CO, and each step of the loop increases the previous order by 1; as the length of the loop is s − 1, the CO is 2 + (s − 1) = s + 1. When solving a system of linear equations at each step, it is preferable to find the LU factors of the frozen Jacobian F′(x_n) once, if possible. The corresponding lower and upper triangular systems are then solved by forward and backward substitutions, which makes the overall computational cost economical.
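The frozen-Jacobian multi-step scheme above can be sketched in a few lines of Python (an illustrative translation; the article's own codes are in Maple and Matlab), again using the circle-hyperbola system as an assumed test problem:

```python
import math

def F(v):
    # Circle-hyperbola system: x^2 + y^2 = 2, x^2 - y^2 = 1
    x, y = v
    return [x**2 + y**2 - 2.0, x**2 - y**2 - 1.0]

def J(v):
    x, y = v
    return [[2.0*x, 2.0*y], [2.0*x, -2.0*y]]

def solve2(A, b):
    # 2x2 solve by Cramer's rule (stands in for reusing LU factors)
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(b[0]*A[1][1] - b[1]*A[0][1]) / det,
            (A[0][0]*b[1] - A[1][0]*b[0]) / det]

def multistep_newton(v, s):
    # One outer iteration: the Jacobian is evaluated (and would be
    # factored) once, then s inner corrections reuse it.
    A = J(v)
    for _ in range(s):
        h = solve2(A, F(v))
        v = [v[0] - h[0], v[1] - h[1]]
    return v

root = (math.sqrt(1.5), math.sqrt(0.5))
v = [1.0, 0.4]
for _ in range(5):
    v = multistep_newton(v, s=3)   # expected CO per outer iteration: s + 1 = 4
err = math.hypot(v[0] - root[0], v[1] - root[1])
print(err < 1e-10)   # True
```

Each outer iteration costs one Jacobian evaluation and s function evaluations, matching the operation count described above.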
In recent years, many researchers have contributed to developing numerical iterative schemes for SNLEs. A parameterised multi-point and multi-step method is proposed in [3]. The authors of [4] constructed a multi-step iterative method using Jacobian information at two different points. The discretisation of boundary and initial value problems gives rise to SNLEs, which can be of a special type owing to the structure of the differential equations; some methods in this direction are proposed in [5, 6]. For scalar nonlinear equations, the optimality of the CO of a numerical iterative method is well defined, but no such notion is established for SNLEs. For instance, for a single nonlinear equation, the Newton method has quadratic convergence, and this CO is optimal according to the Kung-Traub (KT) conjecture [1]. According to the KT conjecture, if a numerical iterative method without memory uses r functional evaluations in a single instance of the method, then the optimal CO is 2^(r−1).
There is no KT conjecture for iterative methods for solving SNLEs. Generally, if there are two functional evaluations, then the optimal CO is two. Similarly, if there are three functional evaluations, then the optimal order is four. In this context, an optimal method of order four has been proposed. In [11], another multi-step iterative method is proposed.
The development of numerical methods using the 2nd-order Fréchet derivative is, in general, not practical because its computational cost is very high: it is a tensor of rank three. But this is not the case when we focus on SNLEs associated with boundary value problems (BVPs). Consider a two-point 2nd-order BVP

y″(x) = f(y(x)),  y(a) = α,  y(b) = β,

where f(·) is a nonlinear function. After discretization, we get a SNLEs

F(y) = M2 y − f(y) = 0,   (1)

where y = [y_1, y_2, ⋯, y_n]^T, y_i ≈ y(x_i), and f(y) = [f(y_1), f(y_2), ⋯, f(y_n)]^T. Here M2 is the operational matrix for the 2nd-order derivative, a = x_1 < x_2 < ⋯ < x_n = b is a partition of the interval [a, b], and the vectors y and f(y) have order n. Consider that we have included the boundary conditions in (1). The Fréchet derivatives of (1) can be obtained as

F′(y) = M2 − diag(f′(y)),

where diag(·) stands for the diagonal matrix. The second derivative F″(y) is a tensor of rank three. To convert it to a tensor of rank one, which is a vector, we need to multiply it by two tensors of rank one, i.e.,

F″(y) h_1 h_2 = −f″(y) ⊙ h_1 ⊙ h_2,

where ⊙ is the element-wise multiplication operation. We can see that in the case of BVPs, the computation of the 2nd-order Fréchet derivative can be simple and computationally economical. Motivated by the economical computation of higher-order Fréchet derivatives (where it is possible) for discretized BVPs, we propose a single-point numerical iterative method. We concentrate on the case where the underlying nonlinearity is quadratic; for example, our method is valid when f(y) in (1) is f(y) = y².
Now, we describe our single-point numerical iterative method for SNLEs F(x) = 0 under the condition that F″ is a constant tensor of rank 3 and all higher-order Fréchet derivatives are zero tensors, i.e., F^(i)(x) = 0 for i ≥ 3. The method is

F′(x_n) φ_1 = F(x_n),
F′(x_n) φ_2 = F″ φ_1 φ_1,
F′(x_n) φ_3 = F″ φ_1 φ_2,
x_{n+1} = x_n − φ_1 − (1/2)(φ_2 + φ_3).   (2)

The CO of our proposed method (2) is four, and the corresponding error equation is

e_{n+1} = 4 A_2 e_n A_2 e_n A_2 e_n² + A_2 (A_2 e_n²)² + O(‖e_n‖⁵),  A_2 = (1/2) F′(x*)^{−1} F″.

There is one evaluation of the SNLEs, F(x_n), and one evaluation of the Jacobian, F′(x_n), so in total we have two functional evaluations. We do not count the evaluation of F″ as it is a constant tensor of rank 3. Per iteration, two matrix-vector multiplications, one scalar-vector multiplication, one LU decomposition of F′(x_n), and three solutions of lower and upper triangular systems are required, whereas the computational cost of applying F″ depends on the structure of the quadratic nonlinearity.
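The Matlab code in the Appendix implements these steps through the vectors phi1, phi2, and phi3; the same iteration can be sketched in pure Python on the circle-hyperbola system of Sect 3.1 (an assumed test choice), with the COC estimated from the last three recorded errors:

```python
import math

# Sketch of the proposed single-point method: one F and one Jacobian
# evaluation per iteration; the constant bilinear form F'' is reused freely
# because the system is quadratic (all derivatives of order >= 3 vanish).
def F(v):
    x, y = v
    return [x**2 + y**2 - 2.0, x**2 - y**2 - 1.0]

def J(v):
    x, y = v
    return [[2.0*x, 2.0*y], [2.0*x, -2.0*y]]

def F2(h1, h2):
    # Constant second-order Frechet derivative applied to h1, h2
    return [2.0*(h1[0]*h2[0] + h1[1]*h2[1]),
            2.0*(h1[0]*h2[0] - h1[1]*h2[1])]

def solve2(A, b):
    # 2x2 solve by Cramer's rule (stands in for an LU solve)
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(b[0]*A[1][1] - b[1]*A[0][1]) / det,
            (A[0][0]*b[1] - A[1][0]*b[0]) / det]

root = (math.sqrt(1.5), math.sqrt(0.5))
v = [1.0, 0.4]
errors = []
for n in range(3):
    errors.append(math.hypot(v[0] - root[0], v[1] - root[1]))
    A = J(v)                        # single Jacobian evaluation, "factored" once
    p1 = solve2(A, F(v))            # F'(x_n) phi1 = F(x_n)
    p2 = solve2(A, F2(p1, p1))      # F'(x_n) phi2 = F'' phi1 phi1
    p3 = solve2(A, F2(p1, p2))      # F'(x_n) phi3 = F'' phi1 phi2
    v = [v[i] - p1[i] - 0.5*(p2[i] + p3[i]) for i in range(2)]
errors.append(math.hypot(v[0] - root[0], v[1] - root[1]))
coc = math.log(errors[3] / errors[2]) / math.log(errors[2] / errors[1])
print(round(coc, 1))   # COC should be close to 4
```

All three right-hand sides are solved against the same 2x2 matrix, mirroring the single LU decomposition with three triangular solves described above.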
2 Convergence analysis
For convergence analysis, we define e_n = x_n − x* and A_2 = (1/2) F′(x*)^{−1} F″. Using Taylor's expansion and the fact that x* is a root, F(x*) = 0, we can expand F(x_n) as

F(x_n) = F′(x*) (e_n + A_2 e_n²).

The higher derivatives of F are

F′(x_n) = F′(x*) (I + 2 A_2 e_n),  F″(x_n) = 2 F′(x*) A_2.

Theorem 1. Suppose that F : D ⊆ R^n → R^n is at least twice Fréchet differentiable in the nonempty open convex domain D and the initial guess x_0 is sufficiently close to x*. Then, the sequence {x_n} generated by (2) converges to x* with at least fourth-order CO.

Proof 2. To compute the value of φ_1, we need the expansion of F′(x_n)^{−1} as

F′(x_n)^{−1} = (I − 2 A_2 e_n + 4 (A_2 e_n)² − 8 (A_2 e_n)³ + ⋯) F′(x*)^{−1}.

The value of φ_1 is

φ_1 = e_n − A_2 e_n² + 2 A_2 e_n A_2 e_n² − 4 A_2 e_n A_2 e_n A_2 e_n² + O(‖e_n‖⁵).

Similarly, the values of φ_2 and φ_3 are

φ_2 = 2 A_2 e_n² − 8 A_2 e_n A_2 e_n² + 2 A_2 (A_2 e_n²)² + 24 A_2 e_n A_2 e_n A_2 e_n² + O(‖e_n‖⁵),
φ_3 = 4 A_2 e_n A_2 e_n² − 4 A_2 (A_2 e_n²)² − 24 A_2 e_n A_2 e_n A_2 e_n² + O(‖e_n‖⁵).

Substituting these values in x_{n+1} = x_n − φ_1 − (1/2)(φ_2 + φ_3), we get the error equation

e_{n+1} = 4 A_2 e_n A_2 e_n A_2 e_n² + A_2 (A_2 e_n²)² + O(‖e_n‖⁵),

which shows that the CO of our proposed algorithm is four.
3 Numerical simulations
The implementation of our proposed method requires the discretization of BVPs, and we have adopted the Chebyshev pseudospectral method [7, 8] to get the operational matrices for the differentiation of different orders.
3.1 System of nonlinear equations in 2-D
To verify the correctness of the order of convergence of our proposed algorithm, we solve a set of problems and compute the COC.
The aforementioned five systems of nonlinear equations have quadratic nonlinearity, and the proposed numerical iterative method is used to solve all of them. The convergence analysis for the various geometric configurations commences with a distinct initial guess for each setup. For the line-circle interaction, the initial guess is set at [0.5, 2.0]. The circle-circle system begins with an initial guess of [0.5, 0.1], slightly offset within the plane. The circle-parabola configuration starts from [−0.4, 0.4], near a potential intersection. The circle-hyperbola scenario is approached with an initial guess of [1.0, 0.4]. Lastly, the 4-D nonlinear system starts from the multi-dimensional guess [0.5, 0.5, 0.5, 0.5], evenly distributing the initial probing across its four dimensions. These initial guesses are pivotal for the iterative methods employed, illustrating the tailored approach based on the geometric nature and dimensionality of each system. The computational orders of convergence are reported in Table 1. It is evident that the computational orders of convergence agree with the claimed CO of the proposed numerical iterative method (2).
The numerically computed solutions are as follows. For the line-circle configuration, the solution is x1 = −0.4114 and x2 = 0.9114. In the case of the circle-circle interaction, the solution is x1 = −0.2071 and x2 = 1.2071. For the circle-parabola system, the computed values are x1 = −0.4698 and x2 = 0.8828. The circle-hyperbola configuration results in x1 = 1.2247 and x2 = 0.7071. Finally, the 4-D nonlinear system presents a more complex solution set with x1 = 0.81650, x2 = 0.81650, x3 = 0.81650, and x4 = −0.40825, illustrating the intricate dynamics involved in higher-dimensional systems.
One may check that the second-order Fréchet derivative of a SNLEs with quadratic nonlinearity is constant. For the 4-D system, if h_1 and h_2 are constant vectors with respect to x, the second-order Fréchet derivative F″ h_1 h_2 is a constant vector of dimension four.
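The constancy of F″ for quadratic SNLEs can also be checked numerically: for a quadratic F, the second difference F(x + h_1 + h_2) − F(x + h_1) − F(x + h_2) + F(x) equals F″ h_1 h_2 exactly (no higher-order remainder) and is independent of the evaluation point. A small Python check on the circle-hyperbola system (used here as an assumed example):

```python
def F(x, y):
    # Quadratic test system (circle and hyperbola)
    return [x**2 + y**2 - 2.0, x**2 - y**2 - 1.0]

def second_difference(x, y, h1, h2):
    # For quadratic F this equals F''(h1, h2) exactly, whatever (x, y) is
    a = F(x + h1[0] + h2[0], y + h1[1] + h2[1])
    b = F(x + h1[0], y + h1[1])
    c = F(x + h2[0], y + h2[1])
    d = F(x, y)
    return [a[i] - b[i] - c[i] + d[i] for i in range(2)]

h1, h2 = [0.3, -0.2], [0.1, 0.5]
at_origin = second_difference(0.0, 0.0, h1, h2)
elsewhere = second_difference(2.0, -1.5, h1, h2)
same = all(abs(at_origin[i] - elsewhere[i]) < 1e-12 for i in range(2))
print(same)   # True: F'' does not depend on the evaluation point
```

The tolerance absorbs floating-point rounding; in exact arithmetic the two second differences are identical.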
3.1.1. Basin of attraction of proposed method.
We drew the basins of attraction [14–18] for solutions of SNLEs, assuming these solutions are simple, in order to investigate the dynamics of our proposed numerical iterative technique. For the basins of attraction, we set the parameters of our proposed method as follows: the number of iterations is ten, and the tolerance for the norm of the difference between the current iterate and the known solution is 0.01. The region for the basin of attraction is defined as [−1.5, 1.5] × [−1.5, 1.5]. In Fig 1, a line and circle intersect in a manner that yields two simple solutions. The red and blue regions indicate that initial guesses from these areas converge to the solutions. The green region denotes divergence of the iterative method under the given parameter settings. Similarly, Figs 2, 3, and 4 show the basins of attraction for the circle-circle, circle-parabola, and circle-hyperbola configurations, respectively. The proposed method demonstrates simple dynamics for the given SNLEs. A Matlab implementation is given in the Appendix.
3.2 Nonlinear boundary value problems
Here, we show that our proposed numerical scheme is applicable to the SNLEs associated with nonlinear boundary value problems.
3.2.1 Blasius equation.
The Blasius equation is a nonlinear BVP with quadratic nonlinearity:

y‴(x) + (1/2) y(x) y″(x) = 0,  y(0) = 0,  y′(0) = 0,  y′(∞) = 1.

The Blasius equation is defined over the semi-infinite interval [0, ∞), but we cannot simulate the solution over the whole semi-infinite interval. Assume x_max is a number large enough that the solution becomes asymptotic, and discretize the interval [0, x_max]. If there are n grid points, then the Chebyshev-Gauss-Lobatto points are t_i = cos(π(i − 1)/(n − 1)) for i = 1, 2, ⋯, n, mapped to [a, b] = [0, x_max] via x_i = ((b − a) t_i + a + b)/2. The grid points are not evenly spaced and are denser near the boundaries. The first-order Chebyshev differentiation matrix is D1, and the mth-order differentiation matrix is obtained as the mth power of D1, where m is the order of differentiation. If we denote the jth-order differentiation matrix by Dj, the rank of Dj is n − j. Chebyshev differentiation matrices perform poorly when the order of differentiation is high, but well when it is low. The matrices are full but offer high accuracy in the numerical approximation of derivatives. The discretized form of the Blasius equation is

F(y) = B y + M3 y + (1/2) (y ⊙ (M2 y)) − q = 0,

where y is an n-dimensional vector with y_i = y(x_i) for i = 1, 2, ⋯, n, and the matrices B, M2, and M3 are all n × n in size. The operator ⊙ denotes the element-wise multiplication of vectors. The first, second, and last rows of the matrices M3 and M2 are null, and the matrix B is a null matrix except for its first, second, and last rows. To implement the boundary conditions, we create the matrix B and the right-hand-side vector q as

B(1, 1) = 1,  B(2, :) = M1(1, :),  B(n, :) = M1(n, :),  q = [0, 0, ⋯, 0, 1]^T,

where M1 is the first-order differentiation matrix. In all notations, we adopted the Matlab syntax. The computation of higher-order Fréchet derivatives is a necessary part of implementing numerical algorithms to solve the SNLEs associated with the discretized Blasius equation.
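The first-order Chebyshev differentiation matrix can be formed directly; the following Python sketch builds D1 on n + 1 Chebyshev-Gauss-Lobatto points using the standard (Trefethen-style) formulas, which construct a matrix equivalent to the one assembled by the Maple code in the Appendix, and checks it on a cubic, which spectral differentiation reproduces exactly up to roundoff:

```python
import math

def cheb(n):
    # Chebyshev differentiation matrix on the n+1 points x_i = cos(pi*i/n)
    if n == 0:
        return [[0.0]], [1.0]
    x = [math.cos(math.pi * i / n) for i in range(n + 1)]
    c = [2.0] + [1.0] * (n - 1) + [2.0]          # endpoint weights
    D = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1.0) ** (i + j) / (x[i] - x[j])
    for i in range(n + 1):
        # Diagonal by negative row sums (D annihilates constants)
        D[i][i] = -sum(D[i][j] for j in range(n + 1) if j != i)
    return D, x

# Differentiate u(x) = x^3 (degree <= n, so the result is exact): D u = 3x^2
D, x = cheb(8)
u = [xi**3 for xi in x]
du = [sum(D[i][j] * u[j] for j in range(9)) for i in range(9)]
err = max(abs(du[i] - 3 * x[i]**2) for i in range(9))
print(err < 1e-10)   # True
```

Higher-order matrices follow as matrix powers of D, and an affine map rescales the derivative matrices from [−1, 1] to [0, x_max], as in the Maple listing.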
The implementation of our proposed algorithm is provided in the Appendix. For the numerical approximation of SNLEs associated with the Blasius equation, we take the zero-vector as an initial guess and perform six iterations. Table 2 contains the COC. Our numerical simulations confirm the theoretically claimed CO of our proposed method. The numerical solution of the Blasius equation is plotted along with its derivative in Figs 5 and 6.
3.3 Falkner–Skan equation
The Blasius boundary-layer problem is generalised by the Falkner-Skan equation [13], which views a uniform velocity field W0 as being divided by a wedge of angle πβ/2. The Falkner-Skan equation is

y‴(x) + y(x) y″(x) + β (1 − y′(x)²) = 0,

under the BCs

y(0) = 0,  y′(0) = 0,  y′(∞) = 1,

where −0.090429 ≤ β ≤ 4/3 is the wedge-angle parameter. When β = 0, the Falkner-Skan equation becomes the Blasius equation (up to normalisation). By adopting the notation of the Blasius equation, we can write

F(y) = B y + M3 y + y ⊙ (M2 y) + β (1 − (M1 y) ⊙ (M1 y)) − q = 0,

where B and q are identical to those used in the Blasius equation. The higher-order Fréchet derivatives can be obtained as

F′(y) h = B h + M3 h + y ⊙ (M2 h) + (M2 y) ⊙ h − 2β (M1 y) ⊙ (M1 h),
F″(y) h_1 h_2 = h_1 ⊙ (M2 h_2) + h_2 ⊙ (M2 h_1) − 2β (M1 h_1) ⊙ (M1 h_2).

We can observe that the 2nd-order Fréchet derivative is symmetric, i.e., F″(y) h_1 h_2 = F″(y) h_2 h_1. The convergence of our proposed method (2) for the numerical approximation of the Falkner-Skan equation is depicted in Figs 7 and 8, whereas the COC is shown in Table 3.
In the coming subsections, we show that our proposed numerical method (2) is also applicable to nonlinear problems with quadratic nonlinearity in cosmology and computational fluid dynamics.
3.4 Lane-Emden equation with index = 2
The Lane-Emden equation is a dimensionless form of Poisson's equation for a polytropic fluid that is spherically symmetric and self-gravitating under the Newtonian gravitational potential. The Lane-Emden equation is

y″(x) + (2/x) y′(x) + y(x)^n = 0,

under the BCs

y(0) = 1,  y′(0) = 0,

where n is the index of the Lane-Emden equation. As we are dealing with quadratic nonlinearity, we take n = 2. Using the numerical iterative method (2), we calculate the numerical solution of the Lane-Emden equation, and the results are depicted in Figs 9 and 10. To begin the numerical simulation, the initial guess is y = x, x_i ∈ [0, 6], and y(1) = 1.
3.5 Nano-particles in fluid
A modelling equation for nano-particles in a fluid is as follows
under the BCs
To simulate equation (16), we assume the following values of the related parameters:
k_s = 401, k_f = 0.1613, ρ_s = 8933, ρ_f = 997.1, c_ps = 4179, S = 1, ϕ = 0.15. The quadratic nonlinearity in (16) is f′f″ − f f‴. The initial guess is a zero vector for the numerical simulation. The application of our proposed method to compute the numerical solution is depicted in Figs 11 and 12.
3.6 Natural convection
A natural convection phenomenon can be modelled as
under the BCs
The nonlinear term in the preceding equation is quadratic. The vector of 1's is taken as the initial guess for the numerical simulations. For Pr = 1, the numerical solution of the natural convection problem is plotted in Fig 13, and the derivative of the numerically computed solution is plotted in Fig 14.
3.7 Partial differential equations with quadratic nonlinearity with inclusion of semi-linear terms
To test the validity of our proposed algorithm in dealing with quadratic nonlinearity for boundary value problems in 2-D, we consider the following partial differential equation:

α(x, y) u_xx + β(x, y) u_yy + u² = f(x, y).

We assume the solution u(x, y) = sin(x) sin(y) on [0, π] × [0, π]. One can then compute the right-hand term f(x, y) = −(α(x, y) + β(x, y)) u(x, y) + u(x, y)². To perform the simulations, we choose α(x, y) = sin(x + y) and β(x, y) = cos(x + y). To compare the Newton method and our proposed method, we ran twenty simulations of each method; the results are depicted in Table 4, which shows that our proposed numerical iterative method is 1.7118 times faster than the classical Newton method. The numerically computed solution of the 2-D partial differential equation and the absolute errors are shown in Figs 15 and 16, respectively. Matlab code of our proposed method for the solution of the partial differential equation is provided in the Appendix.
4 Conclusions
In the present research, we focused on BVPs that have quadratic nonlinearity. There is a large class of physics problems that have quadratic nonlinearity by definition. To benefit from the quadratic nonlinearity, we developed a single-point numerical iterative method that uses the information of the 2nd-order Fréchet derivative. Our proposed method assumes that the 2nd-order Fréchet derivative is constant, and hence all the higher-order Fréchet derivatives are zero tensors. The efficiency index of our method is higher than that of the Newton method because the Newton method attains CO 2 using two functional evaluations, whereas our method attains CO 4 with the same number of functional evaluations. The Newton method solves a single system of linear equations per iteration, while our method solves three; to keep the proposed method efficient, we compute the LU factors of the Jacobian once and then solve three lower and upper triangular systems. In all test examples, we have demonstrated the validity of our proposed method: in all cases, the COC is almost four, which is aligned with the theoretically proved CO of our proposed method. Our proposed method is not valid when the nonlinearity is not quadratic.
Appendix
Maple implementation of our proposed method for the Blasius equation.
> restart;
> Digits := 1000;
> with(LinearAlgebra);
> n := 30;
> n := n - 1;
> x := Vector(n + 1, i -> cos(1.0*Pi*(i - 1)/n));
> c1 := Vector(n + 1, 1);
> c1[1] := 2.0;
> c1[n + 1] := 2.0;
> c2 := Vector(n + 1, i -> (-1)^(i - 1));
> c := Vector(n + 1, i -> c1[i]*c2[i]);
> X := Matrix(n + 1, n + 1, (j, i) -> x(j));
> XT := Matrix(n + 1, n + 1, (i, j) -> x(j));
> dX := X - XT;
> A := dX + 1.0*IdentityMatrix(n + 1);
> B := Matrix(n + 1, n + 1, (i, j) -> c(i)/(c(j)*A(i, j)));
> U := DiagonalMatrix(Vector(n + 1, i -> add(B[i, j],
j::integer = 1 .. n + 1)));
> M := U - B;
> x := Vector(n + 1, i -> x[n + 2 - i]);
> a := 0;
> b := 11;
> x := 0.5*`+`((b - a)*x, a + b);
> M1 := 2*M/(b - a);
> M2 := M1 . M1;
> M3 := M2 . M1;
> B := Matrix(n + 1);
> B[1, 1] := 1;
> B[2, 1 .. n + 1] := M1[1, 1 .. n + 1];
> B[n + 1, 1 .. n + 1] := M1[n + 1, 1 .. n + 1];
> M2[[1, 2, n + 1], 1 .. n + 1] := 0;
> M3[[1, 2, n + 1], 1 .. n + 1] := 0;
> Rhs := Vector(n + 1);
> Rhs[n + 1] := 1;
> y := Vector(n + 1);
> iter := 6;
> normf := Vector(iter + 1);
> M2y := M2 . y;
> f := (B . y) + (M3 . y) + 0.5*(y *~ M2y) - Rhs;
Matlab implementation of basin of attraction for our proposed method.
clc; clear all; close all;
% Define the four solutions to the system of equations
s1 = [-1.224745; -0.707107];
s2 = [-1.224745; 0.707107];
s3 = [1.224745; -0.707107];
s4 = [1.224745; 0.707107];
% Number of iterations for the proposed method and tolerance for convergence
iter = 10;
tolerance = 0.01;
% Generate a grid of initial guesses for the proposed method
x_range = linspace(-1.5, 1.5, 400);
y_range = linspace(-1.5, 1.5, 400);
[X, Y] = meshgrid(x_range, y_range);
% Initialize a matrix to store the convergence status of each initial guess
status = zeros(size(X));
% 0 indicates non-convergence,
%1-4 indicate convergence to one of the four solutions
% Loop over all initial guesses
for ix = 1:numel(X)
    % Set the current initial guess
    x = [X(ix); Y(ix)];
    % Perform the proposed iterative method on the circle-hyperbola system
    for k = 1:iter
        F = [x(1)^2 + x(2)^2 - 2; x(1)^2 - x(2)^2 - 1];
        J = [2*x(1), 2*x(2); 2*x(1), -2*x(2)];
        phi1 = J \ F;
        phi2 = J \ [2*(phi1(1)^2 + phi1(2)^2); 2*(phi1(1)^2 - phi1(2)^2)];
        phi3 = J \ [2*(phi1(1)*phi2(1) + phi1(2)*phi2(2)); ...
                    2*(phi1(1)*phi2(1) - phi1(2)*phi2(2))];
        x = x - phi1 - 0.5*(phi2 + phi3);
    end
    % Check for convergence to one of the solutions
    if norm(x - s1) < tolerance
        status(ix) = 1;
    elseif norm(x - s2) < tolerance
        status(ix) = 2;
    elseif norm(x - s3) < tolerance
        status(ix) = 3;
    elseif norm(x - s4) < tolerance
        status(ix) = 4;
    end
end
% Plotting the results
figure; hold on;
% Plot points corresponding to each of the four solutions in different colors
scatter(X(status == 1), Y(status == 1), 1, 'r', 'filled',...
    'DisplayName', 'Solution 1');
scatter(X(status == 2), Y(status == 2), 1, 'b', 'filled',...
    'DisplayName', 'Solution 2');
scatter(X(status == 3), Y(status == 3), 1, 'm', 'filled',...
    'DisplayName', 'Solution 3');
scatter(X(status == 4), Y(status == 4), 1, 'c', 'filled',...
    'DisplayName', 'Solution 4');
% Plot divergent points with a distinct marker
scatter(X(status == 0), Y(status == 0), 25, 'g', 'filled',...
    'DisplayName', 'Divergence');
% Draw the circle and hyperbola defined by the system of equations
fimplicit(@(x,y) x.^2 + y.^2 - 2, [-1.5 1.5 -1.5 1.5],...
    'LineWidth', 2, 'Color', 'k', 'DisplayName', 'Circle: x^2 + y^2 = 2');
fimplicit(@(x,y) x.^2 - y.^2 - 1, [-1.5 1.5 -1.5 1.5],...
    'LineWidth', 2, 'Color', 'k', 'DisplayName', 'Hyperbola: x^2 - y^2 = 1');
% Label axes, show the legend, and set other plot properties
xlabel('x');
ylabel('y');
legend('Location', 'eastoutside');
axis equal;
grid on;
xlim([-1.5, 1.5]);
ylim([-1.5, 1.5]);
hold off;
Matlab implementation of proposed method for solving nonlinear partial differential equation
clc;
clear all;
close all;
format short g
% Define the partial differential equation
% alpha(x,y) * u_xx + beta(x,y) * u_yy + u^2 = f(x,y)
% The exact solution is u(x,y) = sin(x) * sin(y)
% The function f(x,y) is derived from this exact solution.
% Define the exact solution function
u = @(x,y) sin(x).*sin(y);
% Define coefficient functions alpha(x,y) and beta(x,y)
alpha = @(x,y) sin(x + y);
beta = @(x,y) cos(x + y);
% Define the source term f(x,y)
f = @(x,y) -(alpha(x,y) + beta(x,y)) * u(x,y) + u(x,y)^2;
% Define domain boundaries
ax = 0; bx = pi;
ay = 0; by = pi;
% Define number of grid points in x and y directions
nx = 60;
ny = 60;
% Indexing function to convert 2D index (i, j) to 1D index
eta = @(i,j) j + (i - 1) * ny;
% Define grid step sizes in x and y directions
hx = (bx - ax) / (nx - 1);
hy = (by - ay) / (ny - 1);
% Define grid points in x and y
x = (ax:hx:bx)';
y = (ay:hy:by)';
% Initialize matrices and vectors
n = nx * ny;
A = zeros(n); % Matrix for linear system
B = eye(n); % Identity matrix for Newton’s method
fvec = zeros(n, 1); % Source term vector
umat = zeros(nx, ny); % Initial guess matrix
% Loop through grid points to build matrix A and vector fvec
k = 1;
for i = 1:nx
for j = 1:ny
if (i == 1 || i == nx || j == 1 || j == ny)
% Boundary conditions: Dirichlet boundary conditions
A(k,k) = 1;
fvec(k) = u(x(i), y(j));
B(k,k) = 0;
else
% Fill matrix A for interior points using finite difference
A(k, eta(i-1,j)) = alpha(x(i), y(j)) / hx^2;
A(k, eta(i,j-1)) = beta(x(i), y(j)) / hy^2;
A(k, eta(i,j)) = -2 * (alpha(x(i), y(j)) / hx^2 + beta(x(i), y(j)) / hy^2);
A(k, eta(i,j+1)) = beta(x(i), y(j)) / hy^2;
A(k, eta(i+1,j)) = alpha(x(i), y(j)) / hx^2;
% Fill the source term vector fvec
fvec(k) = f(x(i), y(j));
end
k = k + 1;
% Store the exact solution for later comparison
umat(i,j) = u(x(i), y(j));
end
end
% Remove small values (numerical noise) from A and fvec
A(abs(A) < 1.e-14) = 0;
fvec(abs(fvec) < 1.e-14) = 0;
% Newton’s method parameters
iter = 20; % Maximum number of iterations
tol = 1.0e-10; % Convergence tolerance
diagB = diag(B); % Diagonal of matrix B
% Timing setup for performance evaluation
time = 0; % Initialize total time
REPS = 10; % Number of repetitions for averaging time
% Loop for performance evaluation over multiple runs
for j = 1:REPS
tstart = tic; % Start timing for this repetition
U = umat(:); % Initial guess as a vector
% Newton’s method loop
for i = 1:iter
% Compute the residual vector F
F = A * U + diagB .* (U.^2) - fvec;
norm_F = norm(F); % Compute the norm of the residual
% Check for convergence
if (norm_F < tol)
disp('success');
iterations = i;
break;
end
% Compute the Jacobian matrix dF
dF = A + 2 * diag(diagB .* U);
Z = decomposition(dF, 'lu'); % LU decomposition
% Solve for phi1, phi2, and phi3 using the LU factors
phi1 = Z \ F;
phi2 = Z \ (2 * diagB .* (phi1.^2));
phi3 = Z \ (2 * diagB .* (phi1 .* phi2));
% Update the solution vector U
U = U - phi1 - 0.5 * (phi2 + phi3);
end
telapsed = toc(tstart); % Time taken for this repetition
time = time + telapsed; % Accumulate total time
end
% Compute and display the average time over all repetitions
average_time = time / REPS;
disp(['Average time: ', num2str(average_time)]);
% Reshape the solution vector U back to a matrix
W = reshape(U, nx, ny);
% Plot the approximated solution
figure
mesh(x, y, W');
xlabel('x-values')
ylabel('y-values')
zlabel('approximated u(x,y)')
% Plot the absolute error between the exact and approximated solutions
figure
mesh(x, y, abs(umat - W)');
xlabel('x-values')
ylabel('y-values')
zlabel('absolute error')
References
- 1. Traub JF. Iterative methods for the solution of equations. Prentice-Hall Series in Automatic Computation. Englewood Cliffs, NJ: Prentice-Hall, Inc.; 1964.
- 2. Ortega JM, Rheinboldt WC. Iterative solution of nonlinear equations in several variables. New York, London: Academic Press; 1970.
- 3. Ahmad F, Tohidi E, Carrasco JA. A parameterized multi-step Newton method for solving systems of nonlinear equations. Numer Algor. 2015;71(3):631–53.
- 4. Ahmad F, Tohidi E, Ullah MZ, Carrasco JA. Higher order multi-step Jarratt-like method for solving systems of nonlinear equations: application to PDEs and ODEs. Comput Math Appl. 2015;70(4):624–36.
- 5. Qasim U, Ali Z, Ahmad F, Serra-Capizzano S, Zaka Ullah M, Asma M. Constructing Frozen Jacobian iterative methods for solving systems of nonlinear equations, associated with ODEs and PDEs using the homotopy method. Algorithms. 2016;9(1):18.
- 6. Qasim S, Ali Z, Ahmad F, Serra-Capizzano S, Ullah MZ, Mahmood A. Solving systems of nonlinear equations when the nonlinearity is expensive. Comput Math Appl. 2016;71(7):1464–78.
- 7. Shen J, Tang T, Wang L-L. Spectral methods: algorithms, analysis and applications. Springer Series in Computational Mathematics. Heidelberg: Springer; 2011.
- 8. Szegö G. Orthogonal polynomials. American Mathematical Society Colloquium Publications. New York: American Mathematical Society; 1939.
- 9. Soleymani F, Lotfi T, Bakhtiari P. A multi-step class of iterative methods for nonlinear systems. Optim Lett. 2013;8(3):1001–15.
- 10. Ullah MZ, Serra-Capizzano S, Ahmad F. An efficient multi-step iterative method for computing the numerical solution of systems of nonlinear equations associated with ODEs. Appl Math Comput. 2015;250:249–59.
- 11. Ullah MZ, Soleymani F, Al-Fhaid AS. Numerical solution of nonlinear systems by a general class of iterative methods with application to nonlinear PDEs. Numer Algor. 2013;67(1):223–42.
- 12. Montazeri H, Soleymani F, Shateyi S, Motsa SS. On a new method for computing the numerical solution of systems of nonlinear equations. J Appl Math. 2012;2012(1):751975.
- 13. Falkner VM, Skan SW. Aero. Res. Coun. Rep. and Mem. No. 1314; 1930.
- 14. Ardelean G. A comparison between iterative methods by using the basins of attraction. Appl Math Comput. 2011;218(1):88–95.
- 15. Bakhtiari P, Cordero A, Lotfi T, Mahdiani K, Torregrosa JR. Widening basins of attraction of optimal iterative methods. Nonl Dyn. 2016;87(2):913–38.
- 16. Geum YH. Basins of attraction for optimal third order methods for multiple roots. ams. 2016;10:583–90.
- 17. Basto M, Basto LP, Semiao V, Calheiros FL. Contrasts in the basins of attraction of structurally identical iterative root finding methods. Appl Math Comput. 2013;219(15):7997–8008.
- 18. Zotos EE, Suraj MS, Mittal A, Aggarwal R. Comparing the geometry of the basins of attraction, the speed and the efficiency of several numerical methods. Int J Appl Comput Math. 2018;4(4).