
Convergence Analysis
of Iterative Compositions in Nonlinear Modeling:
Exploring Semilocal and Local Convergence Phenomena

Sunil Kumar, Janak Raj Sharma and Ioannis K. Argyros
(Date: February 02, 2024; accepted: May 13, 2025; published online: June 30, 2025.)
Abstract.

In this work, a comprehensive analysis of a multi-step iterative composition for nonlinear equations is performed, providing insights into both local and semilocal convergence properties. At each step of the method three linear systems are solved, but with the same linear operator. The analysis covers a wide range of applications, elucidating the parameters affecting both local and semilocal convergence and offering useful information for optimizing iterative approaches in nonlinear model-solving tasks. Moreover, we establish the uniqueness of the solution by providing suitable criteria in the designated region. Lastly, we apply our theoretical deductions to real-world problems and report the related test results to validate our findings.

Key words and phrases:
Newton-type method, radius of convergence, Banach space, convergence, convergence order.
2005 Mathematics Subject Classification:
65Y20, 65H10, 47H17, 41A58
Department of Mathematics, University Centre for Research and Development, Chandigarh University, Mohali-140413, India, e-mail: sfageria1988@gmail.com.
Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal 148106, Punjab, India, e-mail: jrshira@yahoo.co.in.
Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA, e-mail: iargyros@cameron.edu.

1. Introduction

The challenges inherent in exploring systems of nonlinear equations within the field of applied mathematics exhibit a remarkable diversity. While the specific methods for attaining analytical solutions vary depending on the problem, iterative approaches [13, 3, 18, 11, 12] commonly find utility in approximating solutions across a wide spectrum of problems. Under some standard assumptions, a typical representation for a nonlinear system takes the mathematical form:

(1) F(x)=0,

where F : B_0 ⊂ B → B_1, B and B_1 are Banach spaces, and B_0 is an open convex set.

One of the fundamental one-point methods is Newton’s method, which has quadratic convergence and is stated as

y_n = x_n − F'(x_n)^{-1} F(x_n),  n = 0, 1, 2, …,

where x_0 ∈ B_0 is the starting point and F' : B_0 → 𝔏(B, B_1) is the first Fréchet derivative of F. Here, 𝔏(B, B_1) denotes the set of bounded linear operators from B to B_1. Many improved iterative methods have been presented and their convergence properties tested in Banach spaces (see, e.g., [14, 13, 3, 17, 19, 4, 20, 6, 15, 16, 1, 9, 10] and related references).
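For readers who wish to experiment numerically, the following is a minimal sketch of Newton's method in finite dimensions (NumPy is assumed; the 2×2 test system at the end is only an illustration, not part of the analysis):

import numpy as np

def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - F'(x_n)^{-1} F(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(dF(x), F(x))   # solve the linear system F'(x) s = F(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative test system: x^2 + y^2 = 1, x = y  (solution: x = y = sqrt(2)/2)
F  = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
dF = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton(F, dF, [1.0, 0.5]))   # ~[0.70710678, 0.70710678]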

A method established in [7] that is defined for each n=0,1,2, by

(2) w_n = x_n − F'(x_n)^{-1} F(x_n),
y_n = w_n − F'(x_n)^{-1} F(w_n),
z_n = 2w_n − y_n,
and x_{n+1} = w_n − F'(x_n)^{-1} (−3F(w_n) + 3F(y_n) + 2F(z_n)),

has received significant attention in this paper. Notice that at each step in method (2) three linear systems are solved, but with the same linear operator. A favorable comparison of this method with several competing methods can be found in [7]. Its convergence order has been shown to be five by establishing the error equation

(3) e_{n+1} = (6A_2A_3A_2 − 8A_3A_2^2 + 6A_2^2A_3 + 14A_2^4) e_n^5 + 𝒪(e_n^6),

where e_n = x_n − x* and A_i = (1/i!) F'(x*)^{-1} F^{(i)}(x*), i = 2, 3, …, using the approach of Taylor series expansion. But there are notable restrictions with this approach which limit the applicability of the method. The convergence order five is achieved in [7] for B = B_1 = ℝ^m (m a natural number), and by assuming the existence of derivatives up to order five which are not used in (2). These conditions restrict the utilization of (2) to operators that are many times differentiable. Thus, there are even scalar equations for which the convergence of (2) cannot be assured, although the method may in fact converge. Let us look at an example. Define the function F : [−1.2, 1.2] → ℝ by F(t) = θ_1 t^2 log t + θ_2 t^5 + θ_3 t^4 if t ≠ 0, and F(t) = 0 if t = 0, where θ_1 ≠ 0 and θ_2 + θ_3 = 0. It follows by these definitions that the numbers 0 and 1 belong to the domain of F, and F(1) = 0. But the function F''' is not continuous at t = 0. Hence, the results in [7] cannot assure that lim_{n→∞} x_n = 1. But (2) converges to x* = 1 if, e.g., θ_1 = θ_2 = −1, θ_3 = 1, and x_0 = 1.15. This motivational example indicates that the conditions in [7] can be weakened. Moreover, there exist other limitations associated with the usage of Taylor series expansions.
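To make the obstruction concrete, here is a small computational sketch (an illustration only; it assumes the reconstructed values θ_1 = θ_2 = −1, θ_3 = 1 and evaluates the analytic third derivative F'''(t) = 2θ_1/t + 60θ_2 t^2 + 24θ_3 t for t > 0), showing that F''' blows up as t → 0 while F(1) = 0:

import numpy as np

theta1, theta2, theta3 = -1.0, -1.0, 1.0     # assumed values satisfying theta2 + theta3 = 0

def F(t):
    # F(t) = theta1*t^2*log t + theta2*t^5 + theta3*t^4 for t != 0, and F(0) = 0
    return theta1 * t**2 * np.log(t) + theta2 * t**5 + theta3 * t**4 if t != 0 else 0.0

def d3F(t):
    # analytic third derivative for t > 0; the term 2*theta1/t is unbounded as t -> 0+
    return 2.0 * theta1 / t + 60.0 * theta2 * t**2 + 24.0 * theta3 * t

print(F(1.0))                        # 0.0, so x* = 1 solves F(t) = 0
for t in (1e-1, 1e-3, 1e-6):
    print(t, d3F(t))                 # |F'''(t)| grows without bound as t -> 0+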

In view of the above discussion, the main motivation of this work is to establish convergence under weaker hypotheses rather than relying on the earlier strong conditions. In pursuit of enhanced convergence characteristics, the present study investigates comprehensively the local and semilocal convergence analyses of (2).

Local convergence: Local convergence specifically addresses the behavior of an iterative method in the immediate vicinity of a solution. It explores the convergence properties within a small neighborhood around a solution point, providing a detailed analysis of how rapidly the iterative process refines its approximations when starting from nearby initial guesses. Understanding local convergence is important for assessing the robustness and effectiveness of an iterative algorithm in practical applications, where solutions are often sought in proximity to known or expected values.

Semilocal convergence: Semilocal convergence, on the other hand, refers to the behavior of an iterative method in a region determined by the initial guess. Unlike global convergence, which considers convergence over the entire solution space, semilocal convergence focuses on the behavior of the iterative process within a limited neighborhood of the starting point, using information available there rather than at a (possibly unknown) solution. It provides insights into how quickly the iterative scheme approaches a solution, offering valuable information about the convergence rate and efficiency near a specific point.

The rest of this article is organized as follows: the local convergence analysis is studied in Section 2, and the semilocal convergence analysis is studied in Section 3. Some special cases and applied problems are presented in Section 4 in order to further certify the theoretical deductions. In the end, the concluding remarks are added in Section 5.

2. Convergence 1: Local

We introduce some scalar functions that play an important role in the local analysis of convergence for the method (2). Set A = [0, +∞).

Suppose :

  • (T1)

There exists a function φ_0 : A → A which is continuous as well as nondecreasing (FCND) on the interval A such that the equation φ_0(t) − 1 = 0 admits a smallest positive solution (SPS) denoted by s_0. Set A_0 = [0, s_0).

  • (T2)

There exists a FCND φ : A_0 → A. Moreover, define functions with domain A_0 and nonnegative values in turn by

h_1(t) = ∫₀¹ φ((1−θ)t) dθ / (1 − φ_0(t)),

α(t) = φ((1 + h_1(t))t)  or  α(t) = φ_0(t) + φ_0(h_1(t)t),

h_2(t) = [ ∫₀¹ φ((1−θ)h_1(t)t) dθ / (1 − φ_0(h_1(t)t)) + α(t)(1 + ∫₀¹ φ_0(θ h_1(t)t) dθ) / ((1 − φ_0(t))(1 − φ_0(h_1(t)t))) ] h_1(t),

h_3(t) = [ ∫₀¹ φ((1−θ)h_1(t)t) dθ / (1 − φ_0(h_1(t)t)) + α(t)(1 + ∫₀¹ φ_0(θ h_1(t)t) dθ) / ((1 − φ_0(t))(1 − φ_0(h_1(t)t))) + 2(1 + ∫₀¹ φ_0(θ h_1(t)t) dθ) / (1 − φ_0(t)) ] h_1(t),

β(t) = 5(1 + ∫₀¹ φ_0(θ h_1(t)t) dθ) h_1(t) + 3(1 + ∫₀¹ φ_0(θ h_2(t)t) dθ) h_2(t) + 2(1 + ∫₀¹ φ_0(θ h_3(t)t) dθ) h_3(t),

and

h_4(t) = ∫₀¹ φ((1−θ)h_1(t)t) dθ h_1(t) / (1 − φ_0(h_1(t)t)) + α(t)(1 + ∫₀¹ φ_0(θ h_1(t)t) dθ) h_1(t) / ((1 − φ_0(t))(1 − φ_0(h_1(t)t))) + β(t) / (1 − φ_0(t)).
  • (T3)

The equations h_j(t) − 1 = 0, j = 1, 2, 3, 4, admit smallest positive solutions in the interval A_0, denoted by δ_j, respectively. Define the parameter δ as

(4) δ = min{δ_j}.

This parameter is shown to be a possible radius of convergence for the method (2) in Theorem 2.

The functions φ_0 and φ are related to the operators appearing in the method.

  • (T4)

There exist an invertible operator E and a solution x* ∈ Ω such that

‖E^{-1}(F'(x) − E)‖ ≤ φ_0(‖x − x*‖)  for each x ∈ Ω.

Define the domain D = Ω ∩ M(x*, s_0).

  • (T5)

‖E^{-1}(F'(y) − F'(x))‖ ≤ φ(‖y − x‖) for each x, y ∈ D, and

  • (T6)

M[x*, δ] ⊂ Ω.

The conditions (T1)–(T6) are employed to show the local analysis of convergence for the method (2).
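Although the analysis itself is purely theoretical, the radius δ in (4) is straightforward to evaluate numerically once φ_0 and φ are fixed. The following sketch (assuming SciPy, the affine choices φ_0(t) = φ(t) = L_0 t used in Section 4, and the h-functions as defined in (T2)) locates the smallest positive solutions δ_j and reproduces the radii reported later in Example 9:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

L0 = 0.332352                      # Lipschitz-type constant from Example 9
phi0 = lambda t: L0 * t
phi  = lambda t: L0 * t
J = lambda f: quad(f, 0.0, 1.0)[0]          # integral over theta in [0, 1]

def h1(t):
    return J(lambda th: phi((1 - th) * t)) / (1 - phi0(t))

def alpha(t):
    return phi0(t) + phi0(h1(t) * t)        # one admissible branch of alpha(t)

def h2(t):
    u = h1(t) * t
    a = J(lambda th: phi((1 - th) * u)) / (1 - phi0(u))
    b = alpha(t) * (1 + J(lambda th: phi0(th * u))) / ((1 - phi0(t)) * (1 - phi0(u)))
    return (a + b) * h1(t)

def h3(t):
    u = h1(t) * t
    c = 2 * (1 + J(lambda th: phi0(th * u))) / (1 - phi0(t))
    return h2(t) + c * h1(t)

def h4(t):
    u = h1(t) * t
    beta = (5 * (1 + J(lambda th: phi0(th * u))) * h1(t)
            + 3 * (1 + J(lambda th: phi0(th * h2(t) * t))) * h2(t)
            + 2 * (1 + J(lambda th: phi0(th * h3(t) * t))) * h3(t))
    a = J(lambda th: phi((1 - th) * u)) * h1(t) / (1 - phi0(u))
    b = alpha(t) * (1 + J(lambda th: phi0(th * u))) * h1(t) / ((1 - phi0(t)) * (1 - phi0(u)))
    return a + b + beta / (1 - phi0(t))

def smallest_positive_solution(h, t_max, n=400):
    """First t with h(t) = 1, located by a coarse scan plus root bracketing."""
    grid = np.linspace(1e-6, t_max, n)
    for lo, hi in zip(grid[:-1], grid[1:]):
        if h(lo) - 1.0 < 0.0 <= h(hi) - 1.0:
            return brentq(lambda t: h(t) - 1.0, lo, hi)
    return None

deltas = [smallest_positive_solution(h, 0.96 / L0) for h in (h1, h2, h3, h4)]
print(deltas)          # approx [2.00591, 1.45636, 1.01559, 0.413765]
print(min(deltas))     # delta ~ 0.413765, as reported in Example 9

Replacing L_0 by 0.24999 or by 0.375 reproduces, in the same way, the radii reported in Examples 10 and 11.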

Remark 1.

A usual choice for E is E = I, the identity operator, or E = F'(x̄) for some auxiliary point x̄ ∈ Ω other than x*, or E = F'(x*). In the latter case, according to the condition (T4), the solution x* is simple. However, this is not necessarily the most flexible choice. Our approach proves the convergence of the method (2) to x* even if the solution x* is not simple, provided that E ≠ F'(x*) and the equation has only one solution in Ω.

Next, the local analysis of convergence is established under the conditions (T1)–(T6).

Theorem 2.

Suppose that the conditions (T1)–(T6) hold and pick x_0 ∈ M(x*, δ)∖{x*}. Then, the sequence {x_n} generated by the method (2) is well defined in the ball M(x*, δ), remains in M(x*, δ) for each n = 0, 1, 2, …, and is convergent to x*. Moreover, the following error estimates hold for each n = 0, 1, 2, …:

(5) ‖w_n − x*‖ ≤ h_1(ω_n)ω_n ≤ ω_n < δ,
(6) ‖y_n − x*‖ ≤ h_2(ω_n)ω_n ≤ ω_n,
(7) ‖z_n − x*‖ ≤ h_3(ω_n)ω_n ≤ ω_n,
(8) ‖x_{n+1} − x*‖ ≤ h_4(ω_n)ω_n ≤ ω_n,

where ω_n = ‖x_n − x*‖, the functions h_j are as previously defined and the radius δ is given by the formula (4).

Proof.

Assertions (5)–(8) are shown by induction. Pick v ∈ M(x*, δ)∖{x*}. The application of the condition (T4) and (4) gives in turn

(9) ‖E^{-1}(F'(v) − E)‖ ≤ φ_0(‖v − x*‖) ≤ φ_0(δ) < 1.

It follows by (9) and the standard Banach lemma on invertible linear operators [2] that F'(v)^{-1} ∈ 𝔏(B_1, B) as well as

(10) ‖F'(v)^{-1}E‖ ≤ 1/(1 − φ_0(‖v − x*‖)).

If v = x_0, the iterates w_0, y_0, z_0, and x_1 are well defined by the four substeps of the method (2), respectively. We shall also show that they belong to the ball M(x*, δ), in turn, as follows:

(11) w_0 − x* = x_0 − x* − F'(x_0)^{-1}F(x_0)
 = −∫₀¹ F'(x_0)^{-1}(F'(x* + θ(x_0 − x*)) − F'(x_0)) dθ (x_0 − x*).

Using (4), (11), the estimate (10) (for v = x_0), the condition (T5) and the definition of the function h_1, we have in turn

(12) ‖w_0 − x*‖ ≤ ∫₀¹ φ((1−θ)ω_0) dθ ω_0 / (1 − φ_0(ω_0))
 = h_1(ω_0)ω_0 ≤ ω_0 < δ,

so the iterate w_0 ∈ M(x*, δ), and the assertion (5) holds if n = 0.

We need the estimates

(13) F(w_0) = F(w_0) − F(x*) = ∫₀¹ F'(x* + θ(w_0 − x*)) dθ (w_0 − x*).

Hence, by the condition (T4),

(14) ‖E^{-1}F(w_0)‖ = ‖E^{-1}(∫₀¹ F'(x* + θ(w_0 − x*)) dθ − E + E)(w_0 − x*)‖
 ≤ (1 + ∫₀¹ φ_0(θ‖w_0 − x*‖) dθ)‖w_0 − x*‖,
F'(w_0) − F'(x_0) = (F'(w_0) − F'(x*)) + (F'(x*) − F'(x_0)),

so

(15) ‖E^{-1}(F'(w_0) − F'(x_0))‖ ≤ ‖E^{-1}(F'(w_0) − F'(x*))‖ + ‖E^{-1}(F'(x_0) − F'(x*))‖
 ≤ φ_0(‖w_0 − x*‖) + φ_0(ω_0)
 ≤ φ_0(h_1(ω_0)ω_0) + φ_0(ω_0)
 ≤ α(ω_0) = α_0

or

(16) ‖E^{-1}(F'(w_0) − F'(x_0))‖ ≤ φ(‖w_0 − x_0‖)
 ≤ φ(‖w_0 − x*‖ + ω_0)
 ≤ φ((1 + h_1(ω_0))ω_0) ≤ α(ω_0).

Then, we can write from the second substep of the method (2), in turn, that

(17) y_0 − x* = w_0 − x* − F'(w_0)^{-1}F(w_0) + F'(w_0)^{-1}(F'(x_0) − F'(w_0))F'(x_0)^{-1}F(w_0).

By using (10) (for v = x_0 and v = w_0), (12)–(17), the condition (T5) and (4), we get in turn that

‖y_0 − x*‖ ≤ [ ∫₀¹ φ((1−θ)‖w_0 − x*‖) dθ / (1 − φ_0(‖w_0 − x*‖)) + α_0(1 + ∫₀¹ φ_0(θ‖w_0 − x*‖) dθ) / ((1 − φ_0(ω_0))(1 − φ_0(‖w_0 − x*‖))) ] ‖w_0 − x*‖
 ≤ h_2(ω_0)ω_0 ≤ ω_0.

Thus, the iterate y_0 ∈ M(x*, δ) and the assertion (6) holds if n = 0. Similarly, by the third substep of the method (2),

(18) z_0 − x* = w_0 − x* − F'(w_0)^{-1}F(w_0) + (F'(w_0)^{-1} + F'(x_0)^{-1})F(w_0)
 = w_0 − x* − F'(w_0)^{-1}F(w_0) + F'(w_0)^{-1}(F'(x_0) − F'(w_0))F'(x_0)^{-1}F(w_0)
 + 2F'(x_0)^{-1}F(w_0),

leading to

(19) ‖z_0 − x*‖ ≤ [ ∫₀¹ φ((1−θ)‖w_0 − x*‖) dθ / (1 − φ_0(‖w_0 − x*‖)) + α_0(1 + ∫₀¹ φ_0(θ‖w_0 − x*‖) dθ) / ((1 − φ_0(ω_0))(1 − φ_0(‖w_0 − x*‖)))
 + 2(1 + ∫₀¹ φ_0(θ‖w_0 − x*‖) dθ) / (1 − φ_0(ω_0)) ] ‖w_0 − x*‖
 ≤ h_3(ω_0)ω_0 ≤ ω_0.

Hence, the iterate z_0 ∈ M(x*, δ), and the assertion (7) holds if n = 0. Moreover, by the last substep of the method (2), we have

(20) x_1 − x* = w_0 − x* − F'(w_0)^{-1}F(w_0) + F'(w_0)^{-1}(F'(x_0) − F'(w_0))F'(x_0)^{-1}F(w_0)
 + 5F'(x_0)^{-1}F(w_0) − 3F'(x_0)^{-1}F(y_0) − 2F'(x_0)^{-1}F(z_0),

therefore

(21) ‖x_1 − x*‖ ≤ ∫₀¹ φ((1−θ)‖w_0 − x*‖) dθ ‖w_0 − x*‖ / (1 − φ_0(‖w_0 − x*‖))
 + α_0(1 + ∫₀¹ φ_0(θ‖w_0 − x*‖) dθ) ‖w_0 − x*‖ / ((1 − φ_0(ω_0))(1 − φ_0(‖w_0 − x*‖)))
 + β_0 / (1 − φ_0(ω_0)) ≤ h_4(ω_0)ω_0 ≤ ω_0,

which shows the assertions (5)–(8) for n = 0, and x_0, w_0, y_0, z_0, x_1 ∈ M(x*, δ). But these calculations can be repeated provided we replace x_0, w_0, y_0, z_0, x_1 by x_m, w_m, y_m, z_m, x_{m+1} (m a natural number), respectively. Thus, the induction is completed, and x_m, w_m, y_m, z_m, x_{m+1} ∈ M(x*, δ) for each m = 0, 1, 2, ….

Furthermore, it follows from the estimate

(22) ω_{m+1} ≤ c ω_m < δ,

where c = h_4(ω_0) ∈ [0, 1), that lim_{m→∞} x_m = x* as well as x_{m+1} ∈ M(x*, δ).

In the next result, we determine a domain that contains only x* as a solution.

Proposition 3.

Suppose: the condition (T4) holds on the ball M(x*, s_1) for some s_1 > 0, and there exists s_2 > s_1 such that

(23) ∫₀¹ φ_0(θ s_2) dθ < 1.

Define the domain D_1 = Ω ∩ M[x*, s_2]. Then, the equation F(x) = 0 is uniquely solvable by x* in the domain D_1.

Proof.

Let us assume that there exists x̄ ∈ D_1 solving the equation F(x) = 0. Define the linear operator Q = ∫₀¹ F'(x* + θ(x̄ − x*)) dθ. Then, it follows by the condition (T4) and (23) that

‖E^{-1}(Q − E)‖ ≤ ∫₀¹ φ_0(θ‖x̄ − x*‖) dθ ≤ ∫₀¹ φ_0(θ s_2) dθ < 1.

Hence, the linear operator Q is invertible, i.e., Q^{-1} ∈ 𝔏(B_1, B). Moreover, from the identity

x̄ − x* = Q^{-1}(F(x̄) − F(x*)) = Q^{-1}(0) = 0,

we deduce x̄ = x*.

Remark 4.

Clearly, we can choose s1=δ in Proposition 3.

3. Convergence 2: Semi-local

The role of x* and of the functions φ_0, φ is now played by x_0 and the functions ψ_0, ψ, respectively.

Suppose:

  • (H1)

There exists a FCND ψ_0 : A → A such that the equation ψ_0(t) − 1 = 0 has a SPS denoted by s_1.

Set A_1 = [0, s_1).

  • (H2)

There exists a FCND ψ : A_1 → A. Define the sequence {a_n} for a_0 = 0, some b_0 ≥ 0, and each n = 0, 1, 2, … by

ϱ_n = ∫₀¹ ψ((1−θ)(b_n − a_n)) dθ (b_n − a_n),
c_n = b_n + ϱ_n / (1 − ψ_0(a_n)),
(24) d_n = c_n + 2(c_n − b_n),
p_n = ∫₀¹ ψ((1−θ)(c_n − a_n)) dθ (c_n − a_n) + (1 + ψ_0(a_n))(c_n − b_n),
q_n = ∫₀¹ ψ((1−θ)(d_n − a_n)) dθ (d_n − a_n) + (1 + ψ_0(a_n))(d_n − b_n),
a_{n+1} = d_n + (2ϱ_n + 3p_n + 2q_n) / (1 − ψ_0(a_n)),
μ_{n+1} = ∫₀¹ ψ(θ(a_{n+1} − a_n)) dθ (a_{n+1} − a_n) + (1 + ψ_0(a_n))(a_{n+1} − b_n),
and
b_{n+1} = a_{n+1} + μ_{n+1} / (1 − ψ_0(a_{n+1})).

    The sequence {an} is shown to be majorizing for the method (2) in Theorem 6. But first we need a general convergence condition for it.

  • (H3)

There exists s_0 ∈ [0, s_1) such that, for each n = 0, 1, 2, …,

ψ_0(a_n) < 1  and  a_n ≤ s_0.

It follows by this condition and (24) that

0 ≤ a_n ≤ b_n ≤ c_n ≤ d_n ≤ a_{n+1} < s_0,

and there exists a* ∈ [0, s_0] such that lim_{n→+∞} a_n = a*. Notice that a* is the least upper bound of the sequence {a_n}, which is unique.

As in the local analysis, we connect the functions ψ_0 and ψ to the operators appearing in the method (2).

  • (H4)

There exist an invertible operator E and a point x_0 ∈ Ω such that ‖E^{-1}(F'(x) − E)‖ ≤ ψ_0(‖x − x_0‖) for each x ∈ Ω. Notice that, for x = x_0, the definition of s_1 and this condition imply

‖E^{-1}(F'(x_0) − E)‖ ≤ ψ_0(0) < 1.

So, the linear operator F'(x_0) is invertible and F'(x_0)^{-1} ∈ 𝔏(B_1, B). Hence, we can set b_0 ≥ ‖F'(x_0)^{-1}F(x_0)‖. Define the domain D_2 = Ω ∩ M(x_0, s_1).

  • (H5)

‖E^{-1}(F'(y) − F'(x))‖ ≤ ψ(‖y − x‖) for each x, y ∈ D_2,

    and

  • (H6)

M[x_0, a*] ⊂ Ω.

Remark 5.

Similar remarks as in Remark 1 apply, and E = F'(x_0) is a possible choice.
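Before stating the semilocal result, note that the majorizing sequence (24) is easy to generate numerically once ψ_0, ψ and b_0 are chosen. The following sketch (assuming SciPy and the affine choices ψ_0(t) = ψ(t) = L_0 t with the value b_0 = 0.0122 used later in Example 9) reproduces the sequence {a_n} reported there:

from scipy.integrate import quad

L0, b0 = 0.332352, 0.0122
psi0 = lambda t: L0 * t
psi  = lambda t: L0 * t
J = lambda f: quad(f, 0.0, 1.0)[0]       # integral over theta in [0, 1]

def majorizing_sequence(n_steps=5):
    a, b = 0.0, b0
    seq = []
    for _ in range(n_steps):
        rho = J(lambda th: psi((1 - th) * (b - a))) * (b - a)
        c = b + rho / (1 - psi0(a))
        d = c + 2 * (c - b)
        p = J(lambda th: psi((1 - th) * (c - a))) * (c - a) + (1 + psi0(a)) * (c - b)
        q = J(lambda th: psi((1 - th) * (d - a))) * (d - a) + (1 + psi0(a)) * (d - b)
        a_next = d + (2 * rho + 3 * p + 2 * q) / (1 - psi0(a))
        mu = J(lambda th: psi(th * (a_next - a))) * (a_next - a) + (1 + psi0(a)) * (a_next - b)
        b_next = a_next + mu / (1 - psi0(a_next))
        seq.append(a_next)
        a, b = a_next, b_next
    return seq

print(majorizing_sequence())   # approx [0.0126708, 0.0131713, 0.0131721, ...]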

In the next result, we develop the semilocal analysis of convergence for the method (2) under the conditions (H1)–(H6).

Theorem 6.

Suppose that the conditions (H1)–(H6) hold. Then, the sequence {x_n} generated by (2) is well defined in the ball M(x_0, a*), remains in M(x_0, a*) for each n = 0, 1, 2, …, and is convergent to a solution x* ∈ M[x_0, a*] of the equation F(x) = 0 such that

(25) ‖x_n − x*‖ ≤ a* − a_n.
Proof.

The following assertions are demonstrated using induction:

(26) ‖w_n − x_n‖ ≤ b_n − a_n,
(27) ‖y_n − w_n‖ ≤ c_n − b_n,
(28) ‖z_n − y_n‖ ≤ d_n − c_n,

and

(29) ‖x_{n+1} − z_n‖ ≤ a_{n+1} − d_n.

By the condition (H4), the definition of b_0, and the first substep of the method (2), we have ‖w_0 − x_0‖ = ‖F'(x_0)^{-1}F(x_0)‖ ≤ b_0 = b_0 − a_0 < a*. So, the iterate w_0 ∈ M(x_0, a*), and the assertion (26) holds for n = 0. Let v ∈ M(x_0, a*).

Then, the definition of s_1 and the condition (H4) imply

‖E^{-1}(F'(v) − E)‖ ≤ ψ_0(‖v − x_0‖) < 1,

thus F'(v)^{-1} ∈ 𝔏(B_1, B) and

(30) ‖F'(v)^{-1}E‖ ≤ 1/(1 − ψ_0(‖v − x_0‖)).

Notice that, by the existence of F'(x_0)^{-1}, the iterates w_0, y_0, z_0 and x_1 are well defined by the four substeps of the method (2), respectively. Next, we need in turn the estimates

F(w_m) = F(w_m) − F(x_m) − F'(x_m)(w_m − x_m),

and by the conditions (H5) and (30) (if v = x_m),

‖E^{-1}F(w_m)‖ = ‖∫₀¹ E^{-1}(F'(x_m + θ(w_m − x_m)) − F'(x_m)) dθ (w_m − x_m)‖
 ≤ ∫₀¹ ψ(θ‖w_m − x_m‖) dθ ‖w_m − x_m‖ = ϱ̄_m
 ≤ ∫₀¹ ψ(θ(b_m − a_m)) dθ (b_m − a_m) = ϱ_m,
y_m − w_m = −F'(x_m)^{-1}F(w_m),
‖y_m − w_m‖ ≤ ‖F'(x_m)^{-1}E‖ ‖E^{-1}F(w_m)‖
 ≤ ϱ̄_m / (1 − ψ_0(‖x_m − x_0‖)) ≤ ϱ_m / (1 − ψ_0(a_m)) = c_m − b_m,
‖y_m − x_0‖ ≤ ‖y_m − w_m‖ + ‖w_m − x_0‖
 ≤ c_m − b_m + b_m − a_0 = c_m < a*,
z_m − y_m = 2F'(x_m)^{-1}F(w_m) = −2(y_m − w_m),
‖z_m − y_m‖ ≤ 2‖y_m − w_m‖ ≤ 2(c_m − b_m) = d_m − c_m,
‖z_m − x_0‖ ≤ ‖z_m − y_m‖ + ‖y_m − x_0‖ ≤ d_m − c_m + c_m − a_0 = d_m < a*,
F(y_m) = F(y_m) − F(x_m) − F'(x_m)(w_m − x_m)
 = F(y_m) − F(x_m) − F'(x_m)(y_m − x_m) + F'(x_m)(y_m − w_m),
(31) ‖E^{-1}F(y_m)‖ ≤ ‖∫₀¹ E^{-1}(F'(x_m + θ(y_m − x_m)) − F'(x_m)) dθ (y_m − x_m)‖
 + ‖E^{-1}(F'(x_m) − E + E)(y_m − w_m)‖
 ≤ ∫₀¹ ψ(θ‖y_m − x_m‖) dθ ‖y_m − x_m‖
 + (1 + ψ_0(‖x_m − x_0‖))‖y_m − w_m‖ = p̄_m
 ≤ ∫₀¹ ψ(θ(c_m − a_m)) dθ (c_m − a_m)
 + (1 + ψ_0(a_m))(c_m − b_m) = p_m.

Similarly, by exchanging y_m with z_m in the previous calculation, and using

F(z_m) = F(z_m) − F(x_m) − F'(x_m)(z_m − x_m) + F'(x_m)(z_m − w_m),

we get

‖E^{-1}F(z_m)‖ ≤ q̄_m ≤ q_m,
x_{m+1} − z_m = 2F'(x_m)^{-1}F(w_m) − 3F'(x_m)^{-1}F(y_m) − 2F'(x_m)^{-1}F(z_m),
‖x_{m+1} − z_m‖ ≤ 2‖F'(x_m)^{-1}E‖ ‖E^{-1}F(w_m)‖ + 3‖F'(x_m)^{-1}E‖ ‖E^{-1}F(y_m)‖
 + 2‖F'(x_m)^{-1}E‖ ‖E^{-1}F(z_m)‖
 ≤ (2ϱ̄_m + 3p̄_m + 2q̄_m) / (1 − ψ_0(‖x_m − x_0‖)) ≤ (2ϱ_m + 3p_m + 2q_m) / (1 − ψ_0(a_m)) = a_{m+1} − d_m,
‖x_{m+1} − x_0‖ ≤ ‖x_{m+1} − z_m‖ + ‖z_m − x_0‖
 ≤ a_{m+1} − d_m + d_m − a_0 = a_{m+1} < a*,
F(x_{m+1}) = F(x_{m+1}) − F(x_m) − F'(x_m)(w_m − x_m)
 = F(x_{m+1}) − F(x_m) − F'(x_m)(x_{m+1} − x_m) + F'(x_m)(x_{m+1} − x_m)
 − F'(x_m)(w_m − x_m)
 = F(x_{m+1}) − F(x_m) − F'(x_m)(x_{m+1} − x_m) + F'(x_m)(x_{m+1} − w_m),
(32) ‖E^{-1}F(x_{m+1})‖ ≤ ‖∫₀¹ E^{-1}(F'(x_m + θ(x_{m+1} − x_m)) − F'(x_m)) dθ (x_{m+1} − x_m)‖
 + ‖E^{-1}(F'(x_m) − E + E)(x_{m+1} − w_m)‖
 ≤ ∫₀¹ ψ(θ‖x_{m+1} − x_m‖) dθ ‖x_{m+1} − x_m‖ + (1 + ψ_0(‖x_m − x_0‖))‖x_{m+1} − w_m‖
 = μ̄_{m+1}
 ≤ ∫₀¹ ψ(θ(a_{m+1} − a_m)) dθ (a_{m+1} − a_m)
 + (1 + ψ_0(a_m))(a_{m+1} − b_m) = μ_{m+1},
‖w_{m+1} − x_{m+1}‖ ≤ ‖F'(x_{m+1})^{-1}E‖ ‖E^{-1}F(x_{m+1})‖
 ≤ μ̄_{m+1} / (1 − ψ_0(‖x_{m+1} − x_0‖)) ≤ μ_{m+1} / (1 − ψ_0(a_{m+1})) = b_{m+1} − a_{m+1},
‖w_{m+1} − x_0‖ ≤ ‖w_{m+1} − x_{m+1}‖ + ‖x_{m+1} − x_0‖
 ≤ b_{m+1} − a_{m+1} + a_{m+1} − a_0 = b_{m+1} < a*.

Hence, the assertions (26)–(29) hold for each m = 0, 1, 2, …, and all the iterates x_m, w_m, y_m, z_m belong to M(x_0, a*). Moreover, it follows by the condition (H3) that the sequence {a_m} is convergent and hence Cauchy. Then, by the triangle inequality and (26)–(29),

‖x_{m+1} − x_m‖ ≤ ‖x_{m+1} − z_m‖ + ‖z_m − y_m‖ + ‖y_m − w_m‖ + ‖w_m − x_m‖
 ≤ a_{m+1} − d_m + d_m − c_m + c_m − b_m + b_m − a_m = a_{m+1} − a_m,

i.e.,

(34) ‖x_{m+1} − x_m‖ ≤ a_{m+1} − a_m.

Hence, the sequence {x_m} is also Cauchy in the Banach space B, and consequently, there exists x* ∈ M[x_0, a*] such that lim_{m→+∞} x_m = x*. Furthermore, using the continuity of the operator F, and by letting m → +∞ in the estimate (32), we deduce that F(x*) = 0. Let k = 0, 1, 2, …. Then, by (34), we obtain

(35) ‖x_{m+k} − x_m‖ ≤ a_{m+k} − a_m.

Finally, by letting k → +∞ in (35), we get the assertion (25). ∎

The uniqueness ball is determined in the next result.

Proposition 7.

Suppose: there exists a solution x̄ ∈ M(x_0, s_3) of the equation F(x) = 0 for some s_3 > 0, the condition (H4) holds on the ball M(x_0, s_3), and there exists s_4 ≥ s_3 such that

(36) ∫₀¹ ψ_0((1−θ)s_3 + θ s_4) dθ < 1.

Define the domain D_3 = Ω ∩ M[x_0, s_4]. Then, the only solution of the equation F(x) = 0 in the domain D_3 is x̄.

Proof.

Let u ∈ D_3 be such that F(u) = 0. Define the linear operator Q_1 = ∫₀¹ F'(x̄ + θ(u − x̄)) dθ. It follows by the condition (H4) and (36) that

‖E^{-1}(Q_1 − E)‖ ≤ ∫₀¹ ψ_0((1−θ)‖x̄ − x_0‖ + θ‖u − x_0‖) dθ
 ≤ ∫₀¹ ψ_0((1−θ)s_3 + θ s_4) dθ < 1.

Thus, the linear operator Q_1 is invertible. Then, from the identity

u − x̄ = Q_1^{-1}(F(u) − F(x̄)) = Q_1^{-1}(0) = 0,

we conclude u = x̄.

Remark 8.
  • (i)

In the condition (H6), the limit point a* can be replaced by s_0.

  • (ii)

In Proposition 7, we can set x̄ = x* and s_3 = a*, under all the conditions of Theorem 6.

4. Numerical results

The numerical tests contribute to a deeper understanding of the convergence properties of iterative compositions, enhancing the practical applicability and theoretical foundation of nonlinear modeling techniques. In view of this, here we verify the theoretical results proven in the preceding sections. Let us consider the following problems:

Example 9.

Consider the equation

(37) F(x) = x − β sin(x) − K = 0,

where 0 ≤ β < 1 and 0 ≤ K ≤ π; this is Kepler's equation [5]. In [5], several options are provided for the values of β and K. Specifically, the approximate solution to (37) is x* ≈ 0.13320215 for K = 0.1 and β = 0.25. Let D = S(x*, c), with c a positive constant, and choose the initial approximation x^{(0)} = 3/4 ∈ D. Now, we have

F'(x) = 1 − β cos(x).

Thus, for all x, y ∈ D, we get the estimates

|F'(x*)^{-1}(F'(x) − F'(y))| = |β(cos(x) − cos(y))| / |1 − β cos(x*)|
 = 2|β| |sin((x+y)/2)| |sin((x−y)/2)| / |1 − β cos(x*)|
 ≤ L_0 |x − y|,

and

|F'(x^{(0)})^{-1}(F'(x) − F'(y))| ≤ L_1 |x − y|,

where L_0 = |β| / |1 − β cos(x*)| ≈ 0.332352 and L_1 = |β| / |1 − β cos(x^{(0)})| ≈ 0.305968.

The aforesaid approximations lead to the estimation of the parameters utilised in the conditions of Section 2 and Section 3. The parameters listed in (T1)–(T4) are given as

φ_0(t) = L_0 t,  φ(t) = L_0 t,

and

δ = min{2.00591, 1.45636, 1.01559, 0.413765} = 0.413765.

Moreover, the parameters defined in (H1)–(H5) are chosen as

ψ_0(t) = L_0 t,  ψ(t) = L_0 t,  b_0 = 0.0122,

and consequently, we obtain the sequence {a_n} as

{a_n}_{n≥1} = {0.0126708, 0.0131713, 0.0131721, …},

which converges to a* ≈ 0.0132 < s_1 = 3.00886.
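As a cross-check (an illustrative sketch, not part of the original computations; NumPy is assumed and x^{(0)} = 3/4 as above), the solution x*, the constants L_0, L_1 and the bound s_1 = 1/L_0 can be reproduced as follows:

import numpy as np

beta, K = 0.25, 0.1
f  = lambda x: x - beta * np.sin(x) - K
df = lambda x: 1.0 - beta * np.cos(x)

x = 0.5                              # Newton iteration for the solution x*
for _ in range(20):
    x -= f(x) / df(x)
print(x)                             # ~0.13320215

L0 = beta / abs(1.0 - beta * np.cos(x))      # ~0.332352
L1 = beta / abs(1.0 - beta * np.cos(0.75))   # ~0.305968, with x^(0) = 3/4
print(L0, L1, 1.0 / L0)                      # 1/L0 = s1 ~ 3.00886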

Example 10.

We equip ℝ^m, m ≥ 2, with the norm ‖x‖ = max_{1≤i≤m} |x_i| for every x = (x_1, x_2, …, x_m)^T ∈ ℝ^m and with the matrix norm ‖A‖ = max_{1≤i≤m} Σ_{j=1}^{m} |a_{ij}| for any A = (a_{ij})_{1≤i,j≤m} ∈ 𝔏(ℝ^m).

On a closed interval [0,1], define the boundary value problem as

(38) x''(t) = x(t)^2,  x(0) = x(1) = 0.

We partition [0, 1] into sub-intervals of length h = 1/k, with nodes

t_0 = 0 < t_1 < t_2 < ⋯ < t_{k−1} < t_k = 1,

in order to convert equation (38) into a finite-dimensional problem.

Denoting x_i = x(t_i) for each i, and using the finite-difference approximation

x_i'' ≈ (x_{i+1} − 2x_i + x_{i−1}) / h^2,

i = 1, 2, …, k−1, equation (38) reduces to the nonlinear system F : D ⊂ ℝ^{k−1} → ℝ^{k−1} given by

(39) x_{i+1} − 2x_i + h^2 x_i^2 + x_{i−1} = 0,  i = 1, 2, 3, …, k−1,

where x_0 = 0 = x_k. Now, at x = (x_1, x_2, …, x_{k−1})^T ∈ D, the Fréchet derivative is the tridiagonal matrix

F'(x) =
( −2+2h^2 x_1      1            0          ⋯   0
      1        −2+2h^2 x_2      1          ⋯   0
      0            1        −2+2h^2 x_3    ⋯   0
      ⋮            ⋮            ⋮          ⋱   ⋮
      0            0            0          ⋯   −2+2h^2 x_{k−1} ).

Specifically, we select k = 101 to compute the parameters provided in Section 2 and Section 3, so that (39) becomes a system of 100 equations with the solution x* = (0, 0, …, 0)^T.

Furthermore, we choose the initial estimate x_0 = (0.5, 0.5, …, 0.5)^T ∈ D, treating the domain D = S(x*, c) as an open ball for some positive constant c. Then, we can determine that

‖F'(x*)^{-1}(F'(x) − F'(y))‖ ≤ L_0 ‖x − y‖,

and

‖F'(x_0)^{-1}(F'(x) − F'(y))‖ ≤ L_1 ‖x − y‖,

where L_0 = 0.24999 and L_1 = 0.27896, for any x, y ∈ D.

The parameters listed in Section 2 under (T1)(T4) conditions for the local convergence analysis are selected as follows in view of the aforementioned approximations:

φ0(t)=L0t,φ(t)=L0t,

and consequently, we have that

δ=min{2.666773,1.936172,1.350181,0.550085}=0.550085.

Additionally, for the semilocal convergence analysis, the parameters defined in Section 3 under conditions (H1)(H5) are selected as

ψ0(t)=L0t,ψ(t)=L0t,b0=0.025

and consequently, we have the sequence {an} as

{a_n}_{n≥1} = {0.0264888, 0.0280819, 0.0280883, …},

which converges to a* ≈ 0.0281 < s_1 = 4. These results confirm the conditions of Section 2 and Section 3.
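For completeness, a minimal sketch of the discretization (39) with k = 101 is given below (NumPy is assumed; the Newton iteration at the end is only meant to confirm that the zero solution x* is recovered from the stated initial estimate):

import numpy as np

k = 101
h = 1.0 / k
m = k - 1                       # number of unknowns x_1, ..., x_{k-1}

def residual(x):
    xp = np.concatenate(([0.0], x, [0.0]))          # boundary values x_0 = x_k = 0
    return xp[2:] - 2 * xp[1:-1] + h**2 * xp[1:-1]**2 + xp[:-2]

def jacobian(x):
    Jm = np.zeros((m, m))
    np.fill_diagonal(Jm, -2.0 + 2.0 * h**2 * x)     # diagonal entries -2 + 2h^2 x_i
    idx = np.arange(m - 1)
    Jm[idx, idx + 1] = 1.0                          # super-diagonal
    Jm[idx + 1, idx] = 1.0                          # sub-diagonal
    return Jm

x = 0.5 * np.ones(m)
for _ in range(10):
    x -= np.linalg.solve(jacobian(x), residual(x))
print(np.linalg.norm(x))        # converges to the zero solution x* = (0, ..., 0)^T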

Example 11.

Let C[0,1] stand for the space of continuous functions defined on the closed unit interval [0,1], equipped with the norm ‖x‖ = sup_{0≤t≤1} |x(t)| for each x ∈ C[0,1]. Let D = {x ∈ C[0,1] : ‖x‖ < 1} and define the nonlinear mapping (see [8]) F : D → C[0,1] as

(40) F(x)(t) = x(t) − μ ∫₀¹ k(s,t) x(s)^3 ds,  t ∈ [0,1],  x ∈ D,

where μ ∈ ℝ and the kernel k(s,t) is given as

k(s,t) = (1−s)t if t ≤ s,  and  k(s,t) = s(1−t) if s ≤ t,

which satisfies

∫₀¹ k(s,t) ds ≤ 1/8.

Moreover, the Fréchet derivative of (40) is given by

F'(x)κ(t) = κ(t) − 3μ ∫₀¹ k(s,t) x(s)^2 κ(s) ds,  κ ∈ D.

Note that the solution of (40) is x* = 0, which also satisfies F'(x*) = I. Then, for x, y ∈ D, we have

‖F'(x*)^{-1}(F'(x) − F'(y))‖ ≤ 3|μ| ‖∫₀¹ k(s,t)(x(s)^2 − y(s)^2) κ(s) ds‖
 ≤ L_0 ‖x − y‖,

where L_0 = 3|μ|/4.

Furthermore, for x^{(0)} ∈ D given by x^{(0)}(t) = 1/2, t ∈ [0,1], the estimate

‖I − F'(x^{(0)})‖ ≤ 3|μ| ‖∫₀¹ k(s,t) x^{(0)}(s)^2 κ(s) ds‖ ≤ 3|μ|/32

yields ‖F'(x^{(0)})^{-1}‖ ≤ 32/(32 − 3|μ|), provided |μ| < 32/3. Therefore, for all x, y ∈ D, we get

‖F'(x^{(0)})^{-1}(F'(x) − F'(y))‖ ≤ L_1 ‖x − y‖,

and

‖F'(x^{(0)})^{-1}F(x^{(0)})‖ ≤ L_2,

where L_1 = 24|μ|/(32 − 3|μ|) and L_2 = (1 + |μ|/32)·16/(32 − 3|μ|).

We particularly fix μ = 1/2 in the above approximations for the parameters listed in Section 2 and Section 3. The parameters used in the conditions (T1)–(T4) are defined as

φ0(t)=L0t,φ(t)=L0t,

and so

δ=min{1.77778,1.29073,0.900085,0.366709}=0.366709.

Furthermore, the parameters defined in (H1)(H5) are chosen as

ψ0(t)=L0t,ψ(t)=L0t,b0=0.0156

and consequently, we obtain the sequence {an} as

{a_n}_{n≥1} = {0.0164694, 0.0173984, 0.0174017, …},

which converges to a* ≈ 0.0174 < s_1 = 2.66667. These results confirm the conditions of Section 2 and Section 3.
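Finally, the kernel bound and the constants of this example are easily verified numerically; the following sketch (assuming SciPy and μ = 1/2) checks that ∫₀¹ k(s,t) ds attains its maximum value 1/8 and recomputes L_0, L_1, L_2 and s_1 = 1/L_0:

import numpy as np
from scipy.integrate import quad

kernel = lambda s, t: (1 - s) * t if t <= s else s * (1 - t)

ts = np.linspace(0.0, 1.0, 101)
vals = [quad(lambda s: kernel(s, t), 0.0, 1.0)[0] for t in ts]
print(max(vals))                 # t(1-t)/2 is maximized at t = 1/2, giving 1/8

mu = 0.5
L0 = 3 * abs(mu) / 4             # = 0.375
L1 = 24 * abs(mu) / (32 - 3 * abs(mu))
L2 = (1 + abs(mu) / 32) * 16 / (32 - 3 * abs(mu))
print(L0, 1 / L0)                # s1 = 1/L0 ~ 2.66667, as reported
print(L1, L2)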

5. Conclusion

Comprehensive analysis is conducted on a fifth-order iterative technique to assess its local and semilocal convergence in Banach Spaces. In contrast to the conventional reliance on Taylor series expansions, this study establishes generalized convergence results based solely on assumptions about first-order derivatives. The presented analysis introduces a fresh perspective for examining the convergence of the iterative method, focusing exclusively on the operators inherent in the given iterative processes. Unlike earlier studies, which incorporated higher-order derivatives not present in the methods under consideration, this approach acknowledges the potential non-existence of such derivatives. Consequently, previous results do not provide a definitive guarantee of convergence, even though it may occur. This innovative approach effectively broadens the applicability of the given method to a more extensive range of problems. Rigorous testing on applied problems lends support to the validity of the developed results. A noteworthy observation is that the analytical technique employed in this study has broader applicability and could be extended to enhance the effectiveness of other methods in a similar manner.

References