<!DOCTYPE html>
<html lang="en">
<head>
<script>
  MathJax = { 
    tex: {
		    inlineMath: [['\\(','\\)']]
	} }
</script>
<script type="text/javascript" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
</script>
<meta name="generator" content="plasTeX" />
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Convergence of Halley’s method under centered Lipschitz condition on the second Fréchet derivative</title>
<link rel="stylesheet" href="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/styles/theme-white.css" />
</head>

<body>

<div class="wrapper">

<div class="content">
<div class="content-wrapper">


<div class="main-text">




<div class="titlepage">
<h1>Convergence of Halley’s method under centered Lipschitz condition on the second Fréchet derivative</h1>
<p class="authors">
<span class="author">Ioannis K. Argyros\(^\ast \), Hongmin Ren\(^\S \)</span>
</p>
<p class="date">March 26, 2012.</p>
</div>
<p>\(^\ast \)Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA, e-mail: <span class="tt">iargyros@cameron.edu</span>. </p>
<p>\(^\S \)College of Information and Engineering, Hangzhou Polytechnic, Hangzhou 311402, Zhejiang, PR China, e-mail: <span class="tt">rhm65@126.com</span>. </p>

<div class="abstract"><p> We present a semi-local as well as a local convergence analysis of Halley’s method for approximating a locally unique solution of a nonlinear equation in a Banach space setting. We assume that the second Fréchet-derivative satisfies a centered Lipschitz condition. Numerical examples are used to show that the new convergence criteria are satisfied but earlier ones are not satisfied. </p>
<p><b class="bf">MSC.</b> 65G99, 65J15, 65H10, 47H17, 49M15 </p>
<p><b class="bf">Keywords.</b> Halley’s method, Banach space, semi-local convergence, Fréchet-derivative, centered Lipschitz condition. </p>
</div>
<h1 id="a0000000002">1 Introduction</h1>
<p> In this study we are concerned with the problem of approximating a locally unique solution \(x^\star \) of the nonlinear equation </p>
<div class="equation" id="eq1.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.1} F(x)=0, \end{equation}
  </div>
  <span class="equation_label">1.1</span>
</p>
</div>
<p> where \(F\) is a twice Fréchet-differentiable operator defined on a nonempty open and convex subset \(D\) of a Banach space \(X\) with values in a Banach space \(Y\). </p>
<p>Many problems from computational sciences and other disciplines can be brought into a form similar to equation (1.1) using mathematical modelling <span class="cite">
	[
	<a href="#Arg1" >1</a>
	, 
	<a href="#Arg2" >2</a>
	, 
	<a href="#Deu" >6</a>
	]
</span>. The solutions of these equations can rarely be found in closed form. That is why most solution methods for these equations are iterative. The study of the convergence of iterative procedures usually proceeds along two lines: semi-local and local convergence analysis. The semi-local convergence analysis uses information around an initial point to give conditions ensuring the convergence of the iterative procedure, while the local analysis uses information around a solution to find estimates of the radii of convergence balls. </p>
<p>In the present study we provide a convergence analysis for Halley’s method defined by <span class="cite">
	[
	<a href="#Arg3" >3</a>
	, 
	<a href="#Arg3.5" >4</a>
	, 
	<a href="#Arg4" >5</a>
	, 
	<a href="#Xu" >8</a>
	]
</span> </p>
<div class="equation" id="eq1.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.2} x_{n+1}=x_n-\Gamma _F(x_n)F^\prime (x_n)^{-1}F(x_n),\  \text{for each}\; n=0,1,2,\ldots , \end{equation}
  </div>
  <span class="equation_label">1.2</span>
</p>
</div>
<p> where \(\Gamma _F(x)=(I-L_F(x))^{-1}\) and \(L_F(x)=\tfrac {1}{2}F^\prime (x)^{-1}F^{\prime \prime }(x)F^\prime (x)^{-1}F(x)\). The convergence of Halley’s method has a long history and has been studied by many authors (cf. [1–5, 7, 8] and the references therein). The most popular conditions for the semi-local convergence of Halley’s method are given by<br />\((C_1)\) There exists \(x_0\in D\) such that \(F^\prime (x_0)^{-1}\in L(Y,X)\), the space of bounded linear operators from \(Y\) into \(X\);<br />\((C_2)\quad \| F^\prime (x_0)^{-1}F(x_0)\| \le \eta \);<br />\((C_3)\quad \| F^\prime (x_0)^{-1}F^{\prime \prime }(x)\| \le M\)&#8195;for each \(x\) in \(D\);<br />\((C_4)\quad \| F^\prime (x_0)^{-1}[F^{\prime \prime }(x)-F^{\prime \prime }(y)]\| \le K\| x-y\| \) &#8195;for each \(x\) and \(y\) in \(D\).<br />The corresponding sufficient convergence condition is given by </p>
<div class="equation" id="eq1.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.3} \eta \le \tfrac {4K+M^2-M\sqrt{M^2+2K}}{3K(M+\sqrt{M^2+2K})}. \end{equation}
  </div>
  <span class="equation_label">1.3</span>
</p>
</div>
<p> There are simple examples showing that \((C_4)\) is not satisfied. As an example, let \(X=Y=\mathbb R\), \(D=[0,+\infty )\) and define \(F(x)\) on \(D\) by<br /></p>
<div class="displaymath" id="a0000000003">
  \[  F(x)=\tfrac {4}{15}x^{\tfrac {5}{2}}+x^2+x+1.  \]
</div>
<p> Then, we have that </p>
<div class="displaymath" id="a0000000004">
  \[  |F^{\prime \prime }(x)-F^{\prime \prime }(y)|=|\sqrt{x}-\sqrt{y}|=\tfrac {|x-y|}{\sqrt{x}+\sqrt{y}}.  \]
</div>
<p> Therefore, there is no constant \(K\) satisfying \((C_4)\). Other examples where \((C_4)\) is not satisfied can be found in [2]. We shall instead use the following conditions, which are weaker than \((C_3)\) and \((C_4)\):<br />\((C_3)^\prime \quad \| F^\prime (x_0)^{-1}F^{\prime \prime }(x_0)\| \le \beta \);<br />\((C_4)^\prime \quad \| F^\prime (x_0)^{-1}[F^{\prime \prime }(x)-F^{\prime \prime }(x_0)]\| \le L\| x-x_0\| \) &#8195;for each \(x\) in \(D\).<br />Note that in this case for \(x_0{\gt}0\) </p>
<div class="displaymath" id="a0000000005">
  \[  |F^{\prime \prime }(x)-F^{\prime \prime }(x_0)|\le \tfrac {|x-x_0|}{\sqrt{x_0}}\quad \text{for each}\  x\  \text{in}\  D.  \]
</div>
<p> Hence, we can choose \(L=|F^\prime (x_0)^{-1}|\tfrac {1}{\sqrt{x_0}}\). A semi-local convergence analysis under conditions \((C_1)\), \((C_2)\), \((C_3)^\prime \) and \((C_4)^\prime \) has been given by Xu in [8] using recurrent relations. However, that semi-local analysis is false under the stated hypotheses. In fact, the following semi-local convergence theorem was established in Ref. <span class="cite">
	[
	<a href="#Xu" >8</a>
	]
</span>. </p>
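<p>The failure of \((C_4)\) and the validity of \((C_4)^\prime \) for the scalar example above can also be checked numerically. The following Python sketch is ours (not part of the original analysis); it works directly with \(F^{\prime \prime }(x)=\sqrt{x}+2\) and the choice \(x_0=1\):</p>

```python
import math

# F(x) = (4/15)*x**(5/2) + x**2 + x + 1 on D = [0, inf), so F''(x) = sqrt(x) + 2.
def d2F(x):
    return math.sqrt(x) + 2.0

# (C4) fails: |F''(x) - F''(y)| / |x - y| = 1/(sqrt(x) + sqrt(y)) is unbounded
# as x, y -> 0, so no global Lipschitz constant K can exist on D.
ratios = [abs(d2F(x) - d2F(y)) / abs(x - y)
          for (x, y) in [(1e-2, 2e-2), (1e-4, 2e-4), (1e-6, 2e-6)]]
assert ratios[0] < ratios[1] < ratios[2]   # ratio grows without bound near 0

# (C4)' holds at any x0 > 0: |F''(x) - F''(x0)| <= |x - x0| / sqrt(x0).
x0 = 1.0
for x in [0.1, 0.5, 2.0, 10.0, 100.0]:
    assert abs(d2F(x) - d2F(x0)) <= abs(x - x0) / math.sqrt(x0)
```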
<p><div class="theorem_thmwrapper " id="a0000000006">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">1</span>
  </div>
  <div class="theorem_thmcontent">
  <p>Let \(F:D\subset X\rightarrow Y\) be continuously twice Fréchet differentiable, \(D\) open and convex. Assume that there exists a starting point \(x_0\in D\) such that \(F^\prime (x_0)^{-1}\) exists, and the following conditions hold:<br />\((C_2)\) \(\| F^\prime (x_0)^{-1}F(x_0)\| \le \eta \);<br />\((C_3)^\prime \) \(\| F^\prime (x_0)^{-1}F^{\prime \prime }(x_0)\| \le \beta \);<br />condition \((C_4)^\prime \) is true;<br />\(\tfrac {1}{2}\beta \eta {\lt}\tau \), where </p>
<div class="equation" id="eq1.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.4} \tau =\tfrac {3s^\star +1-\sqrt{7s^\star +1}}{9s^\star -1}=0.134065\ldots , \end{equation}
  </div>
  <span class="equation_label">1.4</span>
</p>
</div>
<p> \(s^\star =0.800576\ldots \) such that \(q(s^\star )=1\), and </p>
<div class="equation" id="eq1.5">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.5} q(s)=\tfrac {(6s+2)-2\sqrt{7s+1}}{(6s-2)+\sqrt{7s+1}}(1+\tfrac {s}{1-s^2}); \end{equation}
  </div>
  <span class="equation_label">1.5</span>
</p>
</div>
<p> \(\overline{U}(x_0,R)\subset D\), where \(R\) is the positive solution of </p>
<div class="equation" id="eq1.6">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.6} Lt^2+\beta t-1=0. \end{equation}
  </div>
  <span class="equation_label">1.6</span>
</p>
</div>
<p> Then, the Halley sequence \(\{ x_k\} \) generated by <span class="rm">(1.2)</span> remains in the open ball \(U(x_0,R)\), and converges to the unique solution \(x^\star \in \overline{U}(x_0,R)\) of Eq. <span class="rm">(1.1)</span>. Moreover, the following error estimate holds </p>
<div class="equation" id="eq1.7">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.7} \| x^\star -x_k\| \le \tfrac {a}{c(1-\tau )\gamma }\sum _{i=k+1}^{\infty }\gamma ^{2^i}, \end{equation}
  </div>
  <span class="equation_label">1.7</span>
</p>
</div>
<p> where \(a=\beta \eta \), \(c=\tfrac {1}{R}\) and \(\gamma =\tfrac {a(a+4)}{(2-3a)^2}\). </p>

  </div>
</div> </p>
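<p>The numerical constants in (1.4) and (1.5) can be verified directly. The following short Python sketch (ours, added for illustration) checks that \(q(s^\star )\approx 1\) and recomputes \(\tau \):</p>

```python
import math

# q(s) from (1.5)
def q(s):
    r = math.sqrt(7*s + 1)
    return ((6*s + 2) - 2*r) / ((6*s - 2) + r) * (1 + s / (1 - s**2))

s_star = 0.800576                 # the root of q(s) = 1 quoted in the theorem
tau = (3*s_star + 1 - math.sqrt(7*s_star + 1)) / (9*s_star - 1)   # (1.4)

assert abs(q(s_star) - 1) < 1e-3  # q(s_star) = 1 up to the quoted precision
assert abs(tau - 0.134065) < 1e-4 # tau = 0.134065...
```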
<p>We provide an example to show that the results of the above theorem do not hold under the stated hypotheses. </p>
<p><div class="example_thmwrapper " id="a0000000007">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">2</span>
  </div>
  <div class="example_thmcontent">
  <p>Let us define a scalar function \(F(x)=20x^3-54x^2+60x-23\) on \(D=(0,3)\) with initial point \(x_0=1\). Then, we have that </p>
<div class="equation" id="eq1.8">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.8} F^\prime (x)=12(5x^2-9x+5),\quad F^{\prime \prime }(x)=12(10x-9). \end{equation}
  </div>
  <span class="equation_label">1.8</span>
</p>
</div>
<p> Hence, we obtain \(F(x_0)=3\), \(F^\prime (x_0)=12\), \(F^{\prime \prime }(x_0)=12\). We can choose \(\eta =\tfrac {1}{4}\) and \(\beta =1\) in Theorem 1. Moreover, we have for any \(x\in D\) that </p>
<div class="equation" id="eq1.9">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.9} |F^\prime (x_0)^{-1}[F^{\prime \prime }(x)-F^{\prime \prime }(x_0)]|=10|x-x_0|. \end{equation}
  </div>
  <span class="equation_label">1.9</span>
</p>
</div>
<p> That is, the centered Lipschitz condition \((C_4)^\prime \) is true for the constant \(L=10\). We can also verify that condition \(\tfrac {1}{2}\beta \eta =\tfrac {1}{8}{\lt}\tau =0.134065\ldots \) is true. By (1.6), we get </p>
<div class="equation" id="eq1.10">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.10} R=\tfrac {\sqrt{\beta ^2+4L}-\beta }{2L}=\tfrac {\sqrt{41}-1}{20}=0.270156\ldots . \end{equation}
  </div>
  <span class="equation_label">1.10</span>
</p>
</div>
<p> Then, condition \(\overline{U}(x_0,R)=[x_0-R,x_0+R]\approx [0.729844,1.270156]\subset D\) is also true. Hence, all conditions in Theorem 1 are satisfied. However, we can verify that the point \(x_1\) generated by Halley’s method (1.2) does not remain in the open ball \(U(x_0,R)\). In fact, we have that </p>
<div class="equation" id="eq1.11">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.11} |x_1-x_0|=\tfrac {|F^\prime (x_0)^{-1}F(x_0)|}{|1-\tfrac {1}{2}F^\prime (x_0)^{-1}F^{\prime \prime }(x_0)F^\prime (x_0)^{-1}F(x_0)|}=\tfrac {2}{7}=0.285714\ldots >R. \end{equation}
  </div>
  <span class="equation_label">1.11</span>
</p>
</div>
<p> Clearly, the rest of the conclusions of Theorem 1 cannot be reached. <span class="qed">□</span></p>

  </div>
</div> </p>
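<p>The computation in Example 2 can be reproduced in a few lines of Python (a sketch of ours, not part of the paper):</p>

```python
import math

def F(x):   return 20*x**3 - 54*x**2 + 60*x - 23
def dF(x):  return 12*(5*x**2 - 9*x + 5)
def d2F(x): return 12*(10*x - 9)

x0 = 1.0
beta, L = 1.0, 10.0
R = (math.sqrt(beta**2 + 4*L) - beta) / (2*L)     # (1.10): R = 0.270156...

# One Halley step (1.2): x1 = x0 - (1 - L_F(x0))^(-1) * F'(x0)^(-1) * F(x0)
u  = F(x0) / dF(x0)                               # F'(x0)^{-1} F(x0) = 1/4
LF = 0.5 * d2F(x0) / dF(x0) * u                   # L_F(x0) = 1/8
x1 = x0 - u / (1 - LF)

assert abs(abs(x1 - x0) - 2/7) < 1e-12            # |x1 - x0| = 2/7 = 0.285714...
assert abs(x1 - x0) > R                           # x1 lies outside U(x0, R)
```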
<p>We use a different approach than recurrent relations in our semi-local convergence analysis. The paper is organized as follows: Section 2 contains the semi-local convergence of Halley’s method, in Section 3 the local convergence is given, whereas the numerical examples are presented in the concluding Section&#160;4. </p>
<h1 id="a0000000008">2 Semi-local convergence analysis</h1>
<p> We present the semi-local convergence analysis of Halley’s method in a different way than in [1]. Let \(\eta {\gt}0\), \(\beta \ge 0\) and \(L{\gt}0\). Set \(R=\tfrac {2}{\beta +\sqrt{\beta ^2+4L}}\). Then, we have that </p>
<div class="displaymath" id="a0000000009">
  \[  LR^2+\beta R=1  \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000010">
  \[  Lt^2+\beta t{\lt}1,\  \text{for any}\  t\in (0,R).  \]
</div>
<p> Suppose that </p>
<div class="equation" id="eq2.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.1} \eta <\tfrac {R}{1+\tfrac {\beta R}{2}}=\tfrac {2}{2\beta +\sqrt{\beta ^2+4L}}, \end{equation}
  </div>
  <span class="equation_label">2.1</span>
</p>
</div>
<p> which is equivalent to </p>
<div class="displaymath" id="a0000000011">
  \[  \eta _0{\lt}R,  \]
</div>
<p> where </p>
<div class="displaymath" id="a0000000012">
  \[  \eta _0=\tfrac {\eta }{1-a},\quad a=\tfrac {1}{2}\beta \eta {\lt}1.  \]
</div>
<p> Define function \(\phi (t)\) on \([0,R]\) by </p>
<div class="displaymath" id="a0000000013">
  \[  \begin{array}{lll} \phi (t)& =& 2t^2[1-(Lt+\beta )t]^2-2t^2[1-(Lt+\beta )t](Lt+\beta )\eta _0\\ & & -2t[1-(Lt+\beta )t]^2\eta _0-(Lt+\beta )t\eta _0^2+(Lt+\beta )\eta _0^3\\ & =& 2t^2[1-(Lt\! +\! \beta )t]^2-2t[1-(Lt\! +\! \beta )t]\eta _0-(Lt\! +\! \beta )t\eta _0^2\! +\! (Lt\! +\! \beta )\eta _0^3. \end{array}  \]
</div>
<p> Suppose function \(\phi \) has zeros on \((\eta _0,R)\), and let \(R_0\) be the smallest such zero. Define </p>
<div class="displaymath" id="a0000000014">
  \[  \alpha =(LR_0+\beta )R_0.  \]
</div>
<p> Then, we have that </p>
<div class="displaymath" id="a0000000015">
  \[  \alpha \in (0,1).  \]
</div>
<p> Assume further that </p>
<div class="equation" id="eq2.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.2} \begin{array}{lll} (LR_0+\beta )\eta _0^2\le 4R_0^2\beta (1-\alpha )^2 \end{array} \end{equation}
  </div>
  <span class="equation_label">2.2</span>
</p>
</div>
<p> and </p>
<div class="equation" id="eq2.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.3} \begin{array}{lll} (LR_0+\beta )\eta _0^2< 2R_0(1-\alpha )^2. \end{array} \end{equation}
  </div>
  <span class="equation_label">2.3</span>
</p>
</div>
<p> By the definition of \(R_0\), we have that </p>
<div class="equation" id="eq2.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.4} b=\tfrac {2R_0(1-\alpha )(LR_0+\beta )\eta _0}{2R_0(1-\alpha )^2-(LR_0+\beta )\eta _0^2}=1-\tfrac {\eta _0}{R_0} \in (0,1). \end{equation}
  </div>
  <span class="equation_label">2.4</span>
</p>
</div>
<p> We shall refer to \((C_1)\), \((C_2)\), \((C_3)^\prime \), \((C_4)^\prime \), (2.1), (2.2), (2.3) and the existence of \(R_0\) on \((\eta _0,R)\) as the \((C)\) conditions. Let \(U(x,R)\), \(\overline{U}(x,R)\) stand, respectively, for the open and closed balls in \(X\) with center \(x\) and radius \(R{\gt}0\). Then, we can show the following semi-local convergence result for Halley’s method (1.2). </p>
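<p>The \((C)\) conditions lend themselves to a direct numerical check. The following Python sketch is ours, with the hypothetical data \(\eta =0.1\), \(\beta =1\), \(L=1\) chosen only for illustration (these values are not taken from the paper). It computes \(R\) and \(\eta _0\), locates \(R_0\) as the smallest zero of \(\phi \) on \((\eta _0,R)\) by a sign-change scan followed by bisection, and verifies (2.1)–(2.4):</p>

```python
import math

eta, beta, L = 0.1, 1.0, 1.0        # hypothetical data, for illustration only
R = 2.0 / (beta + math.sqrt(beta**2 + 4*L))              # so L*R**2 + beta*R = 1
assert eta < 2.0 / (2*beta + math.sqrt(beta**2 + 4*L))   # condition (2.1)

a = 0.5 * beta * eta
eta0 = eta / (1 - a)

def phi(t):                          # simplified form of phi from Section 2
    u = (L*t + beta) * t
    return (2*t**2*(1 - u)**2 - 2*t*(1 - u)*eta0
            - (L*t + beta)*t*eta0**2 + (L*t + beta)*eta0**3)

# locate the smallest zero R0 of phi on (eta0, R): scan for the first sign
# change (phi(eta0) < 0 for this data), then refine by bisection
step = (R - eta0) / 1000
t = eta0
while phi(t + step) < 0:
    t += step
lo, hi = t, t + step                 # phi(lo) < 0 <= phi(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if phi(mid) < 0 else (lo, mid)
R0 = 0.5 * (lo + hi)

alpha = (L*R0 + beta) * R0
assert 0 < alpha < 1
assert (L*R0 + beta)*eta0**2 <= 4*R0**2*beta*(1 - alpha)**2   # (2.2)
assert (L*R0 + beta)*eta0**2 < 2*R0*(1 - alpha)**2            # (2.3)

# (2.4): both expressions for b agree because phi(R0) = 0
b = 2*R0*(1 - alpha)*(L*R0 + beta)*eta0 / (2*R0*(1 - alpha)**2
                                           - (L*R0 + beta)*eta0**2)
assert abs(b - (1 - eta0/R0)) < 1e-9
assert 0 < b < 1
```

For this data all the \((C)\) conditions are satisfied, so Theorem 3 applies with the computed \(R_0\), \(\alpha \) and \(b\).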
<p><div class="theorem_thmwrapper " id="a0000000016">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">3</span>
  </div>
  <div class="theorem_thmcontent">
  <p>Let \(F:D\subset X\rightarrow Y\) be continuously twice Fréchet differentiable, where \(X\), \(Y\) are Banach spaces and \(D\) is open and convex. Suppose that the <span class="rm">(C)</span> conditions hold and \(\overline{U}(x_0,R)\subset D\). Then, the Halley sequence \(\{ x_n\} \) generated by <span class="rm">(1.2)</span> is well defined, remains in \(U(x_0,R_0)\) for all \(n\ge 0\) and converges to a solution \(x^\star \in \overline{U}(x_0,R_0)\) of equation \(F(x)=0\). Furthermore, \(x^\star \) is the only solution of equation \(F(x)=0\) in \(\overline{U}(x_0,R)\). Moreover, the following error estimate holds for any \(n\ge 1\) </p>
<div class="equation" id="eq2.5">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.5} \| x_{n+2}-x_{n+1}\| \le \tfrac {(LR_0+\beta )\| x_{n+1}-x_n\| ^2}{(1-\alpha )\Big[1-\tfrac {(LR_0+\beta )\| x_{n+1}-x_n\| ^2}{2R_0(1-\alpha )^2}\Big]}\le b\| x_{n+1}-x_n\| . \end{equation}
  </div>
  <span class="equation_label">2.5</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000017">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>We shall show using induction that (2.5) and the following hold for \(n\ge 0\): </p>
<div class="equation" id="eq2.6">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.6} \begin{array}{lll} \| (I-L_F(x_{n+1}))^{-1}\| \le \tfrac {1}{1-\| L_F(x_{n+1})\| }, \end{array} \end{equation}
  </div>
  <span class="equation_label">2.6</span>
</p>
</div>
<div class="equation" id="eq2.7">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.7} \begin{array}{lll} x_{n+2}\in U(x_0,R_0), \end{array} \end{equation}
  </div>
  <span class="equation_label">2.7</span>
</p>
</div>
<div class="equation" id="eq2.8">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.8} \begin{array}{lll} \| F^\prime (x_{n+1})^{-1}F^\prime (x_0)\| \le \tfrac {1}{1-(L\| x_{n+1}-x_0\| +\beta )\| x_{n+1}-x_0\| }<\tfrac {1}{1-\alpha }, \end{array} \end{equation}
  </div>
  <span class="equation_label">2.8</span>
</p>
</div>
<div class="equation" id="eq2.9">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.9} \begin{array}{lll} \| F^\prime (x_0)^{-1}F^{\prime \prime }(x_{n+1})\| \le L\| x_{n+1}-x_0\| +\beta < LR_0+\beta <\tfrac {1}{R_0}, \end{array} \end{equation}
  </div>
  <span class="equation_label">2.9</span>
</p>
</div>
<div class="displaymath" id="eq2.10">
  \begin{align} \label{eq2.10} \| L_F(x_{n+1})\| & \le \tfrac {(LR_0+\beta )\| x_{n+1}-x_n\| ^2}{2R_0[1-(L\| x_{n+1}-x_0\| +\beta )\| x_{n+1}-x_0\| ]^2}\\ & \le \tfrac {LR_0+\beta }{2R_0(1-\alpha )^2}\| x_{n+1}-x_n\| ^2{\lt}1\nonumber , \end{align}
</div>
<div class="equation" id="eq2.11">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.11} \begin{array}{lll} \tfrac {\| L_F(x_{n+1})\| }{2R_0}\le \beta . \end{array} \end{equation}
  </div>
  <span class="equation_label">2.11</span>
</p>
</div>
<p> We have </p>
<div class="displaymath" id="a0000000018">
  \begin{align}  \| I-(I-L_F(x_0))\| & =\| L_F(x_0)\| =\tfrac {1}{2}\| F^\prime (x_0)^{-1}F^{\prime \prime }(x_0)F^\prime (x_0)^{-1}F(x_0)\| \nonumber \\ & \le \tfrac {1}{2}\| F^\prime (x_0)^{-1}F^{\prime \prime }(x_0)\| \| F^\prime (x_0)^{-1}F(x_0)\| \! \le \! \tfrac {1}{2}\beta \eta \! =\! a\! {\lt}\! 1\label{eq2.12}. \end{align}
</div>
<p> It follows from (2.12) and the Banach lemma on invertible operators [2], [6] that \((I-L_F(x_0))^{-1}\) exists, so that </p>
<div class="displaymath" id="a0000000019">
  \[  \| (I-L_F(x_0))^{-1}\| \le \tfrac {1}{1-\| L_F(x_0)\| }\le \tfrac {1}{1-a}  \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000020">
  \begin{align*}  \| x_1-x_0\| & =\| (I-L_F(x_0))^{-1}F^\prime (x_0)^{-1}F(x_0)\| \\ & \le \| (I-L_F(x_0))^{-1}\| \| F^\prime (x_0)^{-1}F(x_0)\| \\ & \le \tfrac {\eta }{1-a}=\eta _0{\lt}R_0. \end{align*}
</div>
<p> We need an estimate on </p>
<div class="displaymath" id="a0000000021">
  \begin{align*} & \| I-F^\prime (x_0)^{-1}F^\prime (x_1)\| =\\ & =\Big\| F^\prime (x_0)^{-1}\int ^1_0 F^{\prime \prime }(x_0+\theta (x_1-x_0))(x_1-x_0){\rm d}\theta \Big\| \\ & =\Big\| F^\prime (x_0)^{-1}\int ^1_0 [F^{\prime \prime }(x_0+\theta (x_1-x_0))-F^{\prime \prime }(x_0)](x_1-x_0){\rm d}\theta \\ & \quad +F^\prime (x_0)^{-1}F^{\prime \prime }(x_0)(x_1-x_0)\Big\| \\ & \le \int ^1_0\| F^\prime (x_0)^{-1} [F^{\prime \prime }(x_0+\theta (x_1-x_0))-F^{\prime \prime }(x_0)](x_1-x_0)\| {\rm d}\theta \\ & \quad +\| F^\prime (x_0)^{-1}F^{\prime \prime }(x_0)(x_1-x_0)\| \\ & \le \int ^1_0 L\theta \| x_1-x_0\| ^2{\rm d}\theta +\beta \| x_1-x_0\| =(\tfrac {L}{2}\| x_1-x_0\| +\beta )\| x_1-x_0\| \\ & {\lt}(LR_0+\beta )R_0=\alpha {\lt}1. \end{align*}
</div>
<p>Hence, \(F^\prime (x_1)^{-1}\) exists and </p>
<div class="displaymath" id="a0000000022">
  \[  \begin{array}{lll} \| F^\prime (x_1)^{-1}F^\prime (x_0)\| \le \tfrac {1}{1-\big(\tfrac {L}{2}\| x_1-x_0\| +\beta \big)\| x_1-x_0\| }\le \tfrac {1}{1-(L\| x_1-x_0\| +\beta )\| x_1-x_0\| }{\lt} \tfrac {1}{1-\alpha }. \end{array}  \]
</div>
<p> In view of Halley’s iteration we can write </p>
<div class="displaymath" id="a0000000023">
  \[  [I-L_F(x_0)](x_1-x_0)+F^\prime (x_0)^{-1}F(x_0)=0  \]
</div>
<p> or </p>
<div class="displaymath" id="a0000000024">
  \[  F(x_0)+F^\prime (x_0)(x_1-x_0)-\tfrac {1}{2}F^{\prime \prime }(x_0)F^\prime (x_0)^{-1}F(x_0)(x_1-x_0)=0.  \]
</div>
<p> It then follows from the mean value theorem in integral form that </p>
<div class="displaymath" id="a0000000025">
  \[  \| F^\prime (x_0)^{-1}[F(x_1)-F(x_0)-F^\prime (x_0)(x_1-x_0)-\tfrac {1}{2}F^{\prime \prime }(x_0)(x_1-x_0)^2]\| \le \tfrac {L}{6}\| x_1-x_0\| ^3  \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000026">
  \[  \| \tfrac {1}{2}F^\prime (x_0)^{-1}F^{\prime \prime }(x_0)[F^\prime (x_0)^{-1}F(x_0)\! +\! (x_1\! -\! x_0)](x_1\! -\! x_0)\| \le \tfrac {\beta }{2}\| L_F(x_0)\| \| x_1\! -\! x_0\| ^2\! .\!   \]
</div>
<p> Hence, we get that </p>
<div class="displaymath" id="a0000000027">
  \begin{align*} & \| F^\prime (x_0)^{-1}F(x_1)\| =\\ & =\| F^\prime (x_0)^{-1}[F(x_1)-F(x_0)-F^\prime (x_0)(x_1-x_0)-\tfrac {1}{2}F^{\prime \prime }(x_0)(x_1-x_0)^2]\\ & \quad +\tfrac {1}{2}F^\prime (x_0)^{-1}F^{\prime \prime }(x_0)[F^\prime (x_0)^{-1}F(x_0)+(x_1-x_0)](x_1-x_0)\| \\ & \le (\tfrac {L}{6}\| x_1-x_0\| +\tfrac {\beta }{2}\| L_F(x_0)\| )\| x_1-x_0\| ^2\le (LR_0+\beta )\| x_1-x_0\| ^2\\ & \le (LR_0+\beta )R_0\| x_1-x_0\| \le \alpha \eta _0 \end{align*}
</div>
<p> and </p>
<div class="displaymath" id="a0000000028">
  \[  \begin{array}{lll} \| F^\prime (x_0)^{-1}F^{\prime \prime }(x_1)\| & \le & \| F^\prime (x_0)^{-1}(F^{\prime \prime }(x_1)-F^{\prime \prime }(x_0))\| +\| F^\prime (x_0)^{-1}F^{\prime \prime }(x_0)\| \\ & \le & L\| x_1-x_0\| +\beta {\lt} LR_0+\beta {\lt}\tfrac {1}{R_0}. \end{array}  \]
</div>
<p> Hence, we get that </p>
<div class="displaymath" id="a0000000029">
  \begin{align*} & \| L_F(x_1)\| =\\ & =\tfrac {1}{2}\| F^\prime (x_1)^{-1}F^\prime (x_0)F^\prime (x_0)^{-1}F^{\prime \prime }(x_1)F^\prime (x_1)^{-1}F^\prime (x_0)F^\prime (x_0)^{-1}F(x_1)\| \\ & \le \tfrac {1}{2}\| F^\prime (x_1)^{-1}F^\prime (x_0)\| ^2\| F^\prime (x_0)^{-1}F^{\prime \prime }(x_1)\| \| F^\prime (x_0)^{-1}F(x_1)\| \\ & \le \tfrac {(LR_0+\beta )\| x_1-x_0\| ^2}{2R_0[1-\| x_1-x_0\| (L\| x_1-x_0\| +\beta )]^2} \le \tfrac {(LR_0+\beta )\| x_1-x_0\| ^2}{2R_0(1-\alpha )^2}\le \tfrac {(LR_0+\beta )\eta _0^2}{2R_0(1-\alpha )^2}{\lt}1 \end{align*}
</div>
<p> and </p>
<div class="displaymath" id="a0000000030">
  \[  \tfrac {1}{2R_0}\| L_F(x_1)\| \le \beta  \]
</div>
<p> by (2.2) and (2.3). Then, \((I-L_F(x_1))^{-1}\) exists, </p>
<div class="displaymath" id="a0000000031">
  \[  \| (I-L_F(x_1))^{-1}\| \le \tfrac {1}{1-\| L_F(x_1)\| }.  \]
</div>
<p> So, \(x_2\) is well defined, and using (1.2) and (2.4) we get </p>
<div class="displaymath" id="a0000000032">
  \begin{align*}  \| x_2-x_1\| & \le \frac{\| F^\prime (x_1)^{-1}F^\prime (x_0)\| \| F^\prime (x_0)^{-1}F(x_1)\| }{1-\| L_F(x_1)\| }\\ & \le \frac{(LR_0+\beta )\| x_1-x_0\| ^2}{(1-\alpha )(1-\frac{(LR_0+\beta )\| x_1-x_0\| ^2}{2R_0(1-\alpha )^2})}\le b\| x_1-x_0\| . \end{align*}
</div>
<p> Therefore, we have that </p>
<div class="displaymath" id="a0000000033">
  \begin{align*}  \| x_2-x_0\| & \le \| x_2-x_1\| +\| x_1-x_0\| \\ & \le b\| x_1-x_0\| +\| x_1-x_0\| =(1+b)\| x_1-x_0\| \\ & =\tfrac {1-b^2}{1-b}\| x_1-x_0\| \le \tfrac {\| x_1-x_0\| }{1-b}\le \tfrac {\eta _0}{1-b}=R_0{\lt}R. \end{align*}
</div>
<p> Hence, we have \(x_2\in U(x_0,R_0)\). The rest will be shown by induction. Assume (2.5)-(2.11) are true for all integers \(0\le n\le k\), where \(k\ge 0\) is a fixed integer. Then we have that </p>
<div class="displaymath" id="a0000000034">
  \begin{align*} & \| I-F^\prime (x_0)^{-1}F^\prime (x_{k+2})\| =\\ & =\Big\| F^\prime (x_0)^{-1}\int ^1_0F^{\prime \prime }(x_0+\theta (x_{k+2}-x_0))(x_{k+2}-x_0){\rm d}\theta \Big\| \\ & =\Big\| F^\prime (x_0)^{-1}\int ^1_0[F^{\prime \prime }(x_0+\theta (x_{k+2}-x_0))-F^{\prime \prime }(x_0)](x_{k+2}-x_0){\rm d}\theta \\ & \quad +F^\prime (x_0)^{-1}F^{\prime \prime }(x_0)(x_{k+2}-x_0)\Big\| \\ & \le \int ^1_0\| F^\prime (x_0)^{-1}[F^{\prime \prime }(x_0+\theta (x_{k+2}-x_0))-F^{\prime \prime }(x_0)](x_{k+2}-x_0)\| {\rm d}\theta \\ & \quad +\| F^\prime (x_0)^{-1}F^{\prime \prime }(x_0)\| \| x_{k+2}-x_0\| \\ & \le \int ^1_0 L\theta \| x_{k+2}-x_0\| ^2{\rm d}\theta +\beta \| x_{k+2}-x_0\| \\ & \le (L\| x_{k+2}-x_0\| +\beta )\| x_{k+2}-x_0\| {\lt}(LR_0+\beta )R_0=\alpha {\lt}1. \end{align*}
</div>
<p> Hence, \(F^\prime (x_{k+2})^{-1}\) exists and </p>
<div class="displaymath" id="a0000000035">
  \[  \| F^\prime (x_{k+2})^{-1}F^\prime (x_0)\| \le \tfrac {1}{1-(L\| x_{k+2}-x_0\| +\beta )\| x_{k+2}-x_0\| }{\lt}\tfrac {1}{1-\alpha }.  \]
</div>
<p> Next, we shall estimate \(\| F^\prime (x_0)^{-1}F(x_{k+2})\| \). We have that </p>
<div class="displaymath" id="a0000000036">
  \begin{align*} & F(x_{k+2})=\\ & =F(x_{k+2})-F(x_{k+1})-F^\prime (x_{k+1})(x_{k+2}-x_{k+1})\\ & \quad +\tfrac {1}{2}F^{\prime \prime }(x_{k+1})F^\prime (x_{k+1})^{-1}F(x_{k+1})(x_{k+2}-x_{k+1})\\ & =F(x_{k+2})-F(x_{k+1})-F^\prime (x_{k+1})(x_{k+2}-x_{k+1})-\tfrac {1}{2}F^{\prime \prime }(x_{k+1})(x_{k+2}-x_{k+1})^2\\ & \quad +\tfrac {1}{2}F^{\prime \prime }(x_{k+1})[F^\prime (x_{k+1})^{-1}F(x_{k+1})+(x_{k+2}-x_{k+1})](x_{k+2}-x_{k+1}). \end{align*}
</div>
<p> Hence, we get that </p>
<div class="displaymath" id="a0000000037">
  \begin{align*} & \| F^\prime (x_0)^{-1}F(x_{k+2})\| \le A_1+A_2,\\ & A_1=\| F^\prime (x_0)^{-1}[F(x_{k+2})-F(x_{k+1})-F^\prime (x_{k+1})(x_{k+2}-x_{k+1})\\ & \quad -\tfrac {1}{2}F^{\prime \prime }(x_{k+1})(x_{k+2}-x_{k+1})^2]\| ,\\ & A_2=\tfrac {1}{2}\| F^\prime (x_0)^{-1}F^{\prime \prime }(x_{k+1})[F^\prime (x_{k+1})^{-1}F(x_{k+1})\\ & \quad +(x_{k+2}-x_{k+1})](x_{k+2}-x_{k+1})\| . \end{align*}
</div>
<p> We have in turn that </p>
<div class="displaymath" id="a0000000038">
  \begin{align*} A_1& =\Big\| F^\prime (x_0)^{-1}\int ^1_0\int ^1_0[F^{\prime \prime }(x_{k+1}+s\theta (x_{k+2}-x_{k+1}))-F^{\prime \prime }(x_{k+1})](x_{k+2}-x_{k+1})^2\theta {\rm d}s{\rm d}\theta \Big\| \\ & =\Big\| F^\prime (x_0)^{-1}\int ^1_0\int ^1_0[F^{\prime \prime }(x_{k+1}+s\theta (x_{k+2}-x_{k+1}))-F^{\prime \prime }(x_0)](x_{k+2}-x_{k+1})^2\theta {\rm d}s{\rm d}\theta \\ & \quad +F^\prime (x_0)^{-1}\int ^1_0\int ^1_0[F^{\prime \prime }(x_0)-F^{\prime \prime }(x_{k+1})](x_{k+2}-x_{k+1})^2\theta {\rm d}s{\rm d}\theta \Big\| \\ & \le \int ^1_0\int ^1_0\| F^\prime (x_0)^{-1}[F^{\prime \prime }(x_{k+1}+s\theta (x_{k+2}-x_{k+1}))-F^{\prime \prime }(x_0)]\| \| x_{k+2}-x_{k+1}\| ^2\theta {\rm d}s{\rm d}\theta \\ & \quad +\int ^1_0\int ^1_0\| F^\prime (x_0)^{-1}[F^{\prime \prime }(x_0)-F^{\prime \prime }(x_{k+1})]\| \| x_{k+2}-x_{k+1}\| ^2\theta {\rm d}s{\rm d}\theta \\ & \le \int ^1_0\int ^1_0 L\| x_{k+1}+s\theta (x_{k+2}-x_{k+1})-x_0\| \| x_{k+2}-x_{k+1}\| ^2\theta {\rm d}s{\rm d}\theta \\ & \quad +\int ^1_0\int ^1_0 L\| x_{k+1}-x_0\| \| x_{k+2}-x_{k+1}\| ^2\theta {\rm d}s{\rm d}\theta \\ & \le \Big[\int ^1_0\int ^1_0 L(s\theta \| x_{k+2}-x_0\| +(1-s\theta )\| x_{k+1}-x_0\| +\| x_{k+1}-x_0\| )\theta {\rm d}s{\rm d}\theta \Big]\| x_{k+2}-x_{k+1}\| ^2= \end{align*}
</div>
<div class="displaymath" id="a0000000039">
  \begin{align*} & =(\tfrac {L}{6}\| x_{k+2}-x_0\| +\tfrac {L}{3}\| x_{k+1}-x_0\| +\tfrac {L}{2}\| x_{k+1}-x_0\| )\| x_{k+2}-x_{k+1}\| ^2\\ & \le LR_0\| x_{k+2}-x_{k+1}\| ^2 \end{align*}
</div>
<p> and </p>
<div class="displaymath" id="a0000000040">
  \[  \begin{array}{lll} A_2& =\tfrac {1}{2}\| F^\prime (x_0)^{-1}F^{\prime \prime }(x_{k+1})\big(-[I-L_F(x_{k+1})](x_{k+2}-x_{k+1})\\ & \quad +(x_{k+2}-x_{k+1})\big)(x_{k+2}-x_{k+1})\| \\ & =\tfrac {1}{2}\| F^\prime (x_0)^{-1}F^{\prime \prime }(x_{k+1})L_F(x_{k+1})(x_{k+2}-x_{k+1})^2\| \\ & \le \tfrac {1}{2R_0}\| L_F(x_{k+1})\| \| x_{k+2}-x_{k+1}\| ^2. \end{array}  \]
</div>
<p> Hence, summing up we get that </p>
<div class="equation" id="eq2.13">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.13} \begin{array}{lll} \| F^\prime (x_0)^{-1}F(x_{k+2})\| & \le &  (LR_0+\tfrac {1}{2R_0}\| L_F(x_{k+1})\| )\| x_{k+2}-x_{k+1}\| ^2\\ & \le & (LR_0+\beta )\| x_{k+2}-x_{k+1}\| ^2 \end{array} \end{equation}
  </div>
  <span class="equation_label">2.13</span>
</p>
</div>
<p> and </p>
<div class="displaymath" id="a0000000041">
  \[  \begin{array}{lll} \| L_F(x_{k+2})\| & \le & \tfrac {1}{2R_0}\| F^\prime (x_{k+2})^{-1}F^\prime (x_0)\| ^2\| F^\prime (x_0)^{-1}F(x_{k+2})\| \\ & \le & \tfrac {(LR_0+\beta )\| x_{k+2}-x_{k+1}\| ^2}{2R_0(1-\alpha )^2}\le \tfrac {(LR_0+\beta )\eta _0^2}{2R_0(1-\alpha )^2}{\lt}1. \end{array}  \]
</div>
<p> Hence, \((I-L_F(x_{k+2}))^{-1}\) exists and </p>
<div class="displaymath" id="a0000000042">
  \[  \| (I-L_F(x_{k+2}))^{-1}\| \le \tfrac {1}{1-\| L_F(x_{k+2})\| }.  \]
</div>
<p> Therefore, \(x_{k+3}\) is well defined. Moreover, we obtain that </p>
<div class="displaymath" id="a0000000043">
  \begin{align*} & \| x_{k+3}-x_{k+2}\| \le \\ & \le \| (I-L_F(x_{k+2}))^{-1}\| \| F^\prime (x_{k+2})^{-1}F^\prime (x_0)\| \| F^\prime (x_0)^{-1}F(x_{k+2})\| \\ & \le \tfrac {(LR_0+\beta )\| x_{k+2}-x_{k+1}\| ^2}{\Big[1-\tfrac {(LR_0+\beta )\| x_{k+2}-x_{k+1}\| ^2}{2R_0(1-\alpha )^2}\Big][1-\| x_{k+2}-x_0\| (L\| x_{k+2}-x_0\| +\beta )]}\\ & \le \tfrac {(LR_0+\beta )\| x_{k+2}-x_{k+1}\| ^2}{\Big[1-\tfrac {(LR_0+\beta )\eta _0^2}{2R_0(1-\alpha )^2}\Big][1-R_0(LR_0+\beta )]}\\ & \le \tfrac {(LR_0+\beta )\eta _0}{\Big[1-\tfrac {(LR_0+\beta )\eta _0^2}{2R_0(1-\alpha )^2}\Big](1-\alpha )}\| x_{k+2}-x_{k+1}\| \le b\| x_{k+2}-x_{k+1}\| . \end{align*}
</div>
<p>Furthermore, we have that </p>
<div class="displaymath" id="a0000000044">
  \begin{align*} & \| x_{k+3}-x_0\| \le \\ & \le \| x_{k+3}-x_{k+2}\| +\| x_{k+2}-x_{k+1}\| +\cdots +\| x_1-x_0\| \\ & \le (b^{k+2}+b^{k+1}+\cdots +1)\| x_1-x_0\| \\ & =\tfrac {1-b^{k+3}}{1-b}\| x_1-x_0\| {\lt}\tfrac {\eta _0}{1-b}=R_0. \end{align*}
</div>
<p> Hence, we deduce that \(x_{k+3}\in U(x_0,R_0)\). </p>
<p>Let \(m\) be a positive integer. Then, we have that </p>
<div class="displaymath" id="a0000000045">
  \begin{align*} & \| x_{k+m}-x_k\| \le \\ & \le \| x_{k+m}-x_{k+m-1}\| +\| x_{k+m-1}-x_{k+m-2}\| +\cdots +\| x_{k+1}-x_k\| \\ & \le (b^{m-1}+\cdots +b+1)\| x_{k+1}-x_k\| \\ & \le \tfrac {1-b^m}{1-b}b^k\| x_1-x_0\| . \end{align*}
</div>
<p> It follows that \(\{ x_k\} \) is a Cauchy sequence in the Banach space \(X\) and as such it converges to some \(x^\star \in \overline{U}(x_0,R_0)\) (since \(\overline{U}(x_0,R_0)\) is a closed set). By letting \(k\rightarrow \infty \) in (2.13) we obtain \(F(x^\star )=0\). We also have </p>
<div class="displaymath" id="a0000000046">
  \[  \| x^\star -x_k\| \le \tfrac {b^k}{1-b}\| x_1-x_0\| .  \]
</div>
<p> To show the uniqueness part, let \(y^\star \) be a solution of equation \(F(x)=0\) in \(\overline{U}(x_0,R_0)\). Let \(T=\int ^1_0F^\prime (x_0)^{-1}F^\prime (x^\star +\theta (y^\star -x^\star )){\rm d}\theta \). We have in turn that </p>
<div class="displaymath" id="a0000000047">
  \begin{align*} & \| I-T\| =\\ & =\Big\| \int ^1_0F^\prime (x_0)^{-1}[F^\prime (x^\star +\theta (y^\star -x^\star ))-F^\prime (x_0)]{\rm d}\theta \Big\| \\ & =\Big\| \int ^1_0\int ^1_0F^\prime (x_0)^{-1}F^{\prime \prime }(x_0\! +\! s(x^\star +\theta (y^\star \! -\! x^\star )\! -\! x_0))(x^\star \! +\! \theta (y^\star \! -\! x^\star )\! -\! x_0){\rm d}s{\rm d}\theta \Big\| \\ & \le \int ^1_0\int ^1_0\| F^\prime (x_0)^{-1}F^{\prime \prime }(x_0+s(x^\star +\theta (y^\star -x^\star )-x_0))\| {\rm d}s\\ & \quad \cdot ((1-\theta )\| x^\star -x_0\| +\theta \| y^\star -x_0\| ){\rm d}\theta \\ & {\lt}R_0\int ^1_0\int ^1_0\| F^\prime (x_0)^{-1}F^{\prime \prime }(x_0+s(x^\star +\theta (y^\star -x^\star )-x_0))\| {\rm d}s{\rm d}\theta \\ & \le R_0\int ^1_0\int ^1_0(L\| s((x^\star +\theta (y^\star -x^\star ))-x_0)\| +\beta ){\rm d}s{\rm d}\theta \\ & =R_0\int ^1_0(\tfrac {1}{2}L\| (1-\theta )(x^\star -x_0)+\theta (y^\star -x_0)\| +\beta ){\rm d}\theta \\ & {\lt}R_0(LR_0+\beta )=\alpha {\lt}1. \end{align*}
</div>
<p> It follows that \(T^{-1}\) exists. Using the identity </p>
<div class="displaymath" id="a0000000048">
  \[  0=F^\prime (x_0)^{-1}(F(y^\star )-F(x^\star ))=F^\prime (x_0)^{-1}T(y^\star -x^\star )  \]
</div>
<p> we deduce \(y^\star =x^\star \). The proof of the theorem is complete. </p>
<p><div class="remark_thmwrapper " id="a0000000049">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">4</span>
  </div>
  <div class="remark_thmcontent">
  <p>The conclusion of Theorem 2.1 holds in another setting, where the conditions can be weaker. Indeed, let us introduce the center-Lipschitz condition </p>
<div class="displaymath" id="a0000000050">
  \[  \| F^\prime (x_0)^{-1}(F^\prime (x)-F^\prime (x_0))\| \le L_0\| x-x_0\| \  \text{for all}\  x\in D.  \]
</div>
<p> Then, it follows from the proof of Theorem 2.1 that \(\alpha ,R,b\) can be replaced by \(\alpha _1,R_1,b_1\), where </p>
<div class="displaymath" id="a0000000051">
  \[  \alpha _1=L_0R_0,\quad R_1=\tfrac {1}{L_0},\quad 0{\lt}b_1{\lt}1-\tfrac {\eta }{R_1}.  \]
</div>
<p> It is possible that </p>
<div class="equation" id="eq2.14">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.14} L_0<LR_0+\beta \  \text{and} \  R_1>R. \end{equation}
  </div>
  <span class="equation_label">2.25</span>
</p>
</div>
<p> The proof of Theorem 2.1 goes through with \(\alpha _1\) replacing \(\alpha \) and the results are finer in this case, since </p>
<div class="displaymath" id="a0000000052">
  \[  \tfrac {1}{1-\alpha _1}{\lt}\tfrac {1}{1-\alpha }.  \]
</div>
<p> As an example, let us define the polynomial \(f\) on \(D=\overline{U}(1,1-p)\) by </p>
<div class="displaymath" id="a0000000053">
  \[  f(x)=x^3-p,  \]
</div>
<p> where \(p\in [2-\sqrt{3},1)\). Then, we have \(\beta =L=2\), \(\eta =\tfrac {1-p}{3}\) and \(L_0=3-p\). Estimate (2.14) holds provided that \(b\) is chosen so that </p>
<div class="displaymath" id="a0000000054">
  \[  \tfrac {p}{2+p}{\lt}b{\lt}1-\tfrac {1-p}{2+p}(1+\sqrt{3}),  \]
</div>
<p> where </p>
<div class="displaymath" id="a0000000055">
  \[  \tfrac {p}{2+p}{\lt}1-\tfrac {1-p}{2+p}(1+\sqrt{3})  \]
</div>
<p> by the choice of \(p\). Note also that \(R_1{\gt}R\) and \(1-\tfrac {\eta _0}{R_1}{\gt}1-\tfrac {\eta _0}{R}\). The uniqueness of the solution can be shown in larger ball \(U(x_0,R_1)\), since </p>
<div class="displaymath" id="a0000000056">
  \[  \begin{array}{lll} \| F^\prime (x_0)^{-1}(T-F^\prime (x_0))\| & \le &  L_0\int ^1_0\| x^\star +\theta (y^\star -x^\star )-x_0\| {\rm d}\theta \\ & \le & \tfrac {L_0}{2}(\| x^\star -x_0\| +\| y^\star -x_0\| )\\ & {\lt}& \tfrac {L_0}{2}(R_0+R_0){\lt}L_0R{\lt}L_0R_1=1. \end{array}  \]
</div>
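The constants claimed above for \(f(x)=x^3-p\) can be checked numerically. The sketch below is illustrative only: it assumes the center \(x_0=1\) (consistent with \(D=\overline{U}(1,1-p)=[p,2-p]\)) and the representative sample value \(p=0.5\in [2-\sqrt{3},1)\), neither of which is fixed by the remark itself.

```python
# Check the constants for f(x) = x^3 - p on D = [p, 2 - p], with x0 = 1;
# p = 0.5 is a representative sample from [2 - sqrt(3), 1).
p = 0.5
x0 = 1.0
f = lambda x: x**3 - p
df = lambda x: 3 * x**2
d2f = lambda x: 6 * x

eta = abs(f(x0) / df(x0))        # |f'(x0)^{-1} f(x0)|   -> (1 - p)/3
beta = abs(d2f(x0) / df(x0))     # |f'(x0)^{-1} f''(x0)| -> 2

# Smallest Lipschitz constants observed on a grid over D:
xs = [p + k * (2 - 2 * p) / 1000 for k in range(1, 1001)]
L = max(abs((d2f(x) - d2f(x0)) / df(x0)) / abs(x - x0) for x in xs if x != x0)
L0 = max(abs((df(x) - df(x0)) / df(x0)) / abs(x - x0) for x in xs if x != x0)
print(eta, beta, L, L0)  # approximately (1-p)/3, 2, 2, 3-p
```

The grid maximum for \(L_0\) is attained at the endpoint \(x=2-p\), where \(|x+1|=3-p\), matching the value used in the remark.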
<p><span class="qed">□</span></p>

  </div>
</div> </p>
<h1 id="a0000000057">3 Local convergence of Halley’s method</h1>
<p> In this section we present the local convergence analysis of Halley’s method (1.2). Let \(c\ge 0\), \(d\ge 0\) and \(l{\gt}0\). It is convenient to define the polynomial \(p_0\) on the interval \([0,+\infty )\) by </p>
<div class="equation" id="eq3.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.1} p_0(t)=(c+dt)(1+\tfrac {l}{2}t)t-2(1-lt)^2. \end{equation}
  </div>
  <span class="equation_label">3.26</span>
</p>
</div>
<p> We have \(p_0(0)=-2{\lt}0\) and \(p_0(\tfrac {1}{l})=(c+\tfrac {d}{l})(1+\tfrac {1}{2})\tfrac {1}{l}{\gt}0\). It follows from the intermediate value theorem that there exists a root of polynomial \(p_0\) in \((0,\tfrac {1}{l})\). Denote by \(r_0\) the smallest such root. Moreover, define functions \(g\) and \(h\) on \([0,r_0)\) by </p>
<div class="equation" id="eq3.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.2} g(t)=\tfrac {(c+dt)(1+\tfrac {l}{2}t)t}{2(1-lt)^2} \end{equation}
  </div>
  <span class="equation_label">3.27</span>
</p>
</div>
<p> and </p>
<div class="equation" id="eq3.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.3} h(t)=(1-g(t))^{-1}. \end{equation}
  </div>
  <span class="equation_label">3.28</span>
</p>
</div>
<p> Note that functions \(g\) and \(h\) are well defined on \([0,r_0)\) and </p>
<div class="equation" id="eq3.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.4} g(t)\in [0,1) \  \text{for each}\  t\in [0,r_0). \end{equation}
  </div>
  <span class="equation_label">3.29</span>
</p>
</div>
<p> Define polynomial \(p_1\) on \([0,+\infty )\) by </p>
<div class="equation" id="eq3.5">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.5} p_1(t)=[10d(1-lt)+(2dt+3c)(c+dt)]t^2-6[2(1-lt)^2-(c+dt)(1+\tfrac {l}{2}t)t]. \end{equation}
  </div>
  <span class="equation_label">3.30</span>
</p>
</div>
<p> We get \(p_1(0)=-12\) and \(p_1(\tfrac {1}{l})=(\tfrac {2d}{l}+3c)(c+\tfrac {d}{l})\tfrac {1}{l^2}+6(c+\tfrac {d}{l})(1+\tfrac {1}{2})\tfrac {1}{l}{\gt}0\). Hence, there exists \(r_1\in (0,\tfrac {1}{l})\) such that \(p_1(r_1)=0\). Set </p>
<div class="equation" id="eq3.6">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.6} r=\min \{ r_0,r_1\} . \end{equation}
  </div>
  <span class="equation_label">3.31</span>
</p>
</div>
<p> Then, function \(q\) given by </p>
<div class="equation" id="eq3.7">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.7} q(t)=\tfrac {1}{12}\tfrac {h(t)}{1-lt}(10d+\tfrac {(2dt+3c)(c+dt)}{1-lt})t^2 \end{equation}
  </div>
  <span class="equation_label">3.32</span>
</p>
</div>
<p> is well defined on \([0,r)\) and </p>
<div class="equation" id="eq3.8">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.8} q(t)\in [0,1)\  \text{for each}\  t\in [0,r). \end{equation}
  </div>
  <span class="equation_label">3.33</span>
</p>
</div>
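To see how \(g\) and \(q\) behave below the radius \(r\), consider the following sketch. The constants \(c=d=l=1\) are hypothetical sample values (the text keeps \(c\), \(d\), \(l\) generic); for them \(r=\min \{ r_0,r_1\} \approx 0.359\) was found numerically, so (3.4) and (3.8) predict that both functions stay below \(1\) on \([0,0.35]\).

```python
# Sketch of g, h, q from (3.2), (3.3), (3.7); c = d = l = 1 are hypothetical
# sample constants (the text keeps c, d, l generic).
c, d, l = 1.0, 1.0, 1.0

def g(t):
    return (c + d * t) * (1 + l / 2 * t) * t / (2 * (1 - l * t) ** 2)

def h(t):
    return 1 / (1 - g(t))

def q(t):
    return (h(t) / (1 - l * t)
            * (10 * d + (2 * d * t + 3 * c) * (c + d * t) / (1 - l * t))
            * t**2 / 12)

# For these constants r = min{r0, r1} is about 0.359 (found numerically),
# so g(t) < 1 and q(t) < 1 should hold on [0, 0.35]:
samples = [k * 0.35 / 100 for k in range(101)]
assert all(0 <= g(t) < 1 for t in samples)
assert all(0 <= q(t) < 1 for t in samples)
print(g(0.35), q(0.35))
```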
<p>We shall show the local convergence of Halley’s method using the conditions (H) given by <br />\((H_1)\) there exists \(x^\star \in D\) such that \(F^\prime (x^\star )\in L(Y,X)\) and \(F(x^\star )=0\);<br />\((H_2)\) \(\| F^\prime (x^\star )^{-1}(F^\prime (x)-F^\prime (x^\star ))\| \le l\| x-x^\star \| \) for each \(x\in D\);<br />\((H_3)\)  \(\| F^\prime (x^\star )^{-1}F^{\prime \prime }(x^\star )\| \le c\);<br />\((H_4)\)   \(\| F^\prime (x^\star )^{-1}(F^{\prime \prime }(x)-F^{\prime \prime }(x^\star ))\| \le d\| x-x^\star \| \) for each \(x\in D\) <br />and<br />\((H_5)\)  \(U(x^\star ,r)\subseteq D\). <br />Then, we can show: <div class="theorem_thmwrapper " id="a0000000058">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">5</span>
  </div>
  <div class="theorem_thmcontent">
  <p>Suppose that the \((H)\) conditions hold. Then, sequence \(\{ x_n\} \) generated by Halley’s method starting from \(x_0\in U(x^\star ,r)\) is well defined, remains in \(U(x^\star ,r)\) for all \(n\ge 0\) and converges to \(x^\star \). Moreover, the following estimates hold </p>
<div class="equation" id="eq3.9">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.9} \| x_{n+1}-x^\star \| \le e_n\| x_n-x^\star \| ^3\  \text{for each}\  n=0,1,2,\ldots , \end{equation}
  </div>
  <span class="equation_label">3.34</span>
</p>
</div>
<p> where </p>
<div class="equation" id="eq3.10">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.10} e_n=\tfrac {1}{12}\tfrac {h(\| x_n-x^\star \| )}{1-l\| x_n-x^\star \| }(10d+\tfrac {(2d\| x_n-x^\star \| +3c)(c+d\| x_n-x^\star \| )}{1-l\| x_n-x^\star \| }). \end{equation}
  </div>
  <span class="equation_label">3.35</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000059">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>Using the choice of \(r\) and \((H_2)\), we have for each \(x\in U(x^\star ,r)\) that </p>
<div class="equation" id="eq3.11">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.11} \| F^\prime (x^\star )^{-1}(F^\prime (x)-F^\prime (x^\star ))\| \le l\| x-x^\star \| < lr <1. \end{equation}
  </div>
  <span class="equation_label">3.36</span>
</p>
</div>
<p> It follows from (3.11) and the Banach lemma on invertible operators that \(F^\prime (x)^{-1}\in L(Y,X)\) and </p>
<div class="equation" id="eq3.12">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.12} \| F^\prime (x)^{-1}F^\prime (x^\star )\| \le \tfrac {1}{1-l\| x-x^\star \| }. \end{equation}
  </div>
  <span class="equation_label">3.37</span>
</p>
</div>
<p> Using the definition of operator \(L_F\), function \(g\), radius \(r\), (3.4), (3.12), hypotheses \((H_3)\) and \((H_4)\) we have in turn that </p>
<div class="displaymath" id="a0000000060">
  \begin{align}  \| L_F(x)\| & \le \tfrac {1}{2}\| F^\prime (x)^{-1}F^\prime (x^\star )\| ^2[\| F^\prime (x^\star )^{-1}(F^{\prime \prime }(x)\! -\! F^{\prime \prime }(x^\star )) \! +\! F^\prime (x^\star )^{-1}F^{\prime \prime }(x^\star )\| ]\nonumber \\ & \quad \cdot \Big\| \Big\{ \int ^1_0F^\prime (x^\star )^{-1}[F^\prime (x^\star +\theta (x-x^\star ))-F^\prime (x^\star )]{\rm d}\theta +I \Big\} (x-x^\star )\Big\| \label{eq3.13}\\ & \le \tfrac {1}{2}(\tfrac {1}{1-l\| x-x^\star \| })^2(c+d\| x-x^\star \| )(1+\tfrac {l}{2}\| x-x^\star \| )\| x-x^\star \| \nonumber \\ & =g(\| x-x^\star \| )\le g(r){\lt}1. \nonumber \end{align}
</div>
<p>Hence, \(\Gamma _F(x)\) exists and </p>
<div class="equation" id="eq3.14">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.14} \| \Gamma _F(x)\| \le h(\| x-x^\star \| ). \end{equation}
  </div>
  <span class="equation_label">3.39</span>
</p>
</div>
<p> In view of (1.2) and \(F(x^\star )=0\) we obtain the identity (cf. [4]) </p>
<div class="displaymath" id="a0000000061">
  \begin{align}  x_{n+1}-x^\star & =\Gamma _F(x_n)F^\prime (x_n)^{-1}F^\prime (x^\star )F^\prime (x^\star )^{-1}\nonumber \\ & \quad \cdot \int ^1_0(1-\theta )[(F^{\prime \prime }(x_n+\theta (x^\star -x_n))-F^{\prime \prime }(x^\star ))\label{eq3.15}\\ & \quad +(F^{\prime \prime }(x^\star )-F^{\prime \prime }(x_n))](x^\star -x_n)^2{\rm d}\theta \nonumber \\ & \quad -\tfrac {1}{2}\Gamma _F(x_n)F^\prime (x_n)^{-1}F^\prime (x^\star )F^\prime (x^\star )^{-1} (F^{\prime \prime }(x_n)-F^{\prime \prime }(x^\star )+F^{\prime \prime }(x^\star ))\nonumber \\ & \quad \cdot \Big[F^\prime (x_n)^{-1}\! \! F^\prime (x^\star )\! F^\prime (x^\star )^{-1}\! \! \! \int ^1_0(1\! -\! \theta ) ((F^{\prime \prime }(x_n\! +\! \theta (x^\star \! -\! x_n))\! -\! F^{\prime \prime }(x^\star ))\nonumber \\ & \quad +F^{\prime \prime }(x^\star ))(x^\star -x_n)^2{\rm d}\theta \Big](x^\star -x_n).\nonumber \end{align}
</div>
<p>Using (3.12), (3.13), (3.14) for \(x=x_n\), (3.15), \((H_3)\), \((H_4)\) and the definitions of \(r\) and \(q\), we get that </p>
<div class="displaymath" id="eq3.16">
  \begin{align} & \| x_{n+1}-x^\star \| \le \label{eq3.16}\\ & \le \tfrac {5}{6}\tfrac {dh(\| x_n-x^\star \| )}{1-l\| x_n-x^\star \| }\| x_n-x^\star \| ^3\nonumber \\ & \quad +\tfrac {2d\| x_n-x^\star \| +3c}{12}\tfrac {h(\| x_n-x^\star \| )}{(1-l\| x_n-x^\star \| )^2}(c+d\| x_n-x^\star \| )\| x_n-x^\star \| ^3\nonumber \\ & =e_n\| x_n-x^\star \| ^3=q(\| x_n-x^\star \| )\| x_n-x^\star \| {\lt}\| x_n-x^\star \| .\nonumber \end{align}
</div>
<p> That is, \(x_{n+1}\in U(x^\star ,r)\) and \(\lim _{n\rightarrow \infty }x_n=x^\star \). The proof of the theorem is complete. </p>
<p><div class="remark_thmwrapper " id="eq3.17">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">6</span>
  </div>
  <div class="remark_thmcontent">
  <p>It follows from the estimate  </p>
<div class="displaymath" id="a0000000062">
  \begin{align} & \| F^\prime (x^\star )^{-1}(F^\prime (x)-F^\prime (x^\star ))\| =\\ & =\Big\| \int ^1_0F^\prime (x^\star )^{-1}[(F^{\prime \prime }(x^\star +\theta (x-x^\star ))-F^{\prime \prime }(x^\star ))\nonumber \\ & \quad +F^{\prime \prime }(x^\star )](x-x^\star ){\rm d}\theta \Big\| \nonumber \\ & \le (\tfrac {d}{2}\| x-x^\star \| +c)\| x-x^\star \| \nonumber \end{align}
</div>
<p> that condition \((H_2)\) can be dropped from the computation leading to (3.12), which can be replaced by </p>
<div class="displaymath" id="a0000000063">
  \[  \| F^\prime (x)^{-1}F^\prime (x^\star )\| \le \tfrac {1}{1-(\tfrac {d}{2}\| x-x^\star \| +c)\| x-x^\star \| }.  \]
</div>
<p> The rest stays the same. In this case, to obtain the result corresponding to Theorem 3.1, simply replace \(l\) by \(m(t)=\tfrac {d}{2}t+c\) and \(\tfrac {1}{l}\) by the unique positive root of the polynomial </p>
<div class="equation" id="eq3.18">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.18} p_2(t)=m(t)t-1. \end{equation}
  </div>
  <span class="equation_label">3.43</span>
</p>
</div>
<p> This can improve the choice of \(r\) if </p>
<div class="equation" id="eq3.19">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.19} \tfrac {d}{2}t+c<l \quad \text{for}\  t\in [0,\tfrac {1}{l}). \end{equation}
  </div>
  <span class="equation_label">3.44</span>
</p>
</div>
<p><span class="qed">□</span></p>

  </div>
</div> </p>
<h1 id="a0000000064">4 Numerical examples</h1>
<p> In this section, we give some examples illustrating the application of our theorems. <div class="example_thmwrapper " id="a0000000065">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">7</span>
  </div>
  <div class="example_thmcontent">
  <p>Let us define a scalar function \(F(x)=x^3-2.25x^2+3x-1.585\) on \(D=(0,3)\) with initial point \(x_0=1\). Then, we have that </p>
<div class="equation" id="eq4.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.1} F^\prime (x)=3x^2-4.5x+3,\quad F^{\prime \prime }(x)=6x-4.5. \end{equation}
  </div>
  <span class="equation_label">4.45</span>
</p>
</div>
<p> So, \(F(x_0)=0.165\), \(F^\prime (x_0)=1.5\), \(F^{\prime \prime }(x_0)=1.5\). We can choose \(\eta =0.11\) and \(\beta =1\) in Theorem 2.1. Moreover, we have for any \(x\in D\) that </p>
<div class="equation" id="eq4.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.2} |F^\prime (x_0)^{-1}[F^{\prime \prime }(x)-F^{\prime \prime }(x_0)]|=4|x-x_0|. \end{equation}
  </div>
  <span class="equation_label">4.46</span>
</p>
</div>
<p> Hence, the weak Lipschitz condition (1.3) holds with constant \(L=4\). By (1.6), we get </p>
<div class="equation" id="eq4.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.3} R=\tfrac {\sqrt{\beta ^2+4L}-\beta }{2L}=\tfrac {\sqrt{17}-1}{8}=0.390388 \ldots . \end{equation}
  </div>
  <span class="equation_label">4.47</span>
</p>
</div>
<p> Then, the condition \(\overline{U}(x_0,R)=[x_0-R,x_0+R]\approx [0.609612, 1.390388]\subset D\) holds. We can also verify that function \(\phi \) has the smallest zero \(R_0=0.169896107\) on \((\eta _0,R)\), and the conditions </p>
<div class="displaymath" id="a0000000066">
  \[  \eta =0.11{\lt}\tfrac {R}{1+\tfrac {\beta R}{2}}=0.326631635,  \]
</div>
<div class="displaymath" id="a0000000067">
  \[  (LR_0+\beta )\eta _0^2=0.02275745\le 4R_0^2\beta (1-\alpha )^2=0.058966824,  \]
</div>
<div class="displaymath" id="a0000000068">
  \[  (LR_0+\beta )\eta _0^2=0.02275745{\lt}2R_0(1-\alpha )^2=0.029483412  \]
</div>
<p> are satisfied. Hence, all conditions in Theorem 2.1 are satisfied, and our theorem applies.<span class="qed">□</span></p>
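The constants of this example can be reproduced by a short verification sketch (illustrative only; \(L=4\) is entered via the identity \(|F^\prime (x_0)^{-1}[F^{\prime \prime }(x)-F^{\prime \prime }(x_0)]|=\tfrac {6}{F^\prime (x_0)}|x-x_0|\) from (4.2)):

```python
# Verify the constants of Example 7: F(x) = x^3 - 2.25x^2 + 3x - 1.585
# on D = (0, 3) with x0 = 1.
import math

F = lambda x: x**3 - 2.25 * x**2 + 3 * x - 1.585
dF = lambda x: 3 * x**2 - 4.5 * x + 3
d2F = lambda x: 6 * x - 4.5

x0 = 1.0
eta = abs(F(x0) / dF(x0))     # 0.165 / 1.5 = 0.11
beta = abs(d2F(x0) / dF(x0))  # 1.5 / 1.5  = 1
L = 6 / dF(x0)                # from |F''(x) - F''(x0)| = 6|x - x0|, gives 4
R = (math.sqrt(beta**2 + 4 * L) - beta) / (2 * L)  # formula (1.6)
print(eta, beta, L, R)  # approximately 0.11, 1, 4, 0.390388
```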

  </div>
</div> <div class="example_thmwrapper " id="a0000000069">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">8</span>
  </div>
  <div class="example_thmcontent">
  <p>In this example we provide an application of our results to a special nonlinear Hammerstein integral equation of the second kind. Consider the integral equation </p>
<div class="equation" id="eq4.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.4} u(s)=f(s)+\lambda \int ^{b^\prime }_{a^\prime } k(s,t) u(t)^{2+\tfrac {1}{n}}{\rm d}t,\quad \lambda \in \mathbb {R},n\in \mathbb {N}, \end{equation}
  </div>
  <span class="equation_label">4.48</span>
</p>
</div>
<p> where \(f\) is a given continuous function satisfying \(f(s){\gt}0\) for \(s\in [a^\prime ,b^\prime ]\) and the kernel is continuous and positive in \([a^\prime ,b^\prime ]\times [a^\prime ,b^\prime ]\). </p>
<p>Let \(X=Y=C[a^\prime ,b^\prime ]\) and \(D=\{ u\in C[a^\prime ,b^\prime ]:u(s)\ge 0,s\in [a^\prime ,b^\prime ]\} \). Define \(F:D\rightarrow Y\) by </p>
<div class="equation" id="eq4.5">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.5} F(u)(s)=u(s)-f(s)-\lambda \int ^{b^\prime }_{a^\prime } k(s,t) u(t)^{2+\tfrac {1}{n}}{\rm d}t,\quad s\in [a^\prime ,b^\prime ]. \end{equation}
  </div>
  <span class="equation_label">4.49</span>
</p>
</div>
<p> We use the max-norm. The first and second Fréchet derivatives of \(F\) are given by </p>
<div class="equation" id="eq4.6">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.6} F^\prime (u)v(s)=v(s)-\lambda (2+\tfrac {1}{n})\int ^{b^\prime }_{a^\prime } k(s,t) u(t)^{1+\tfrac {1}{n}}v(t){\rm d}t,\quad v\in D, s\in [a^\prime ,b^\prime ], \end{equation}
  </div>
  <span class="equation_label">4.50</span>
</p>
</div>
<p> and </p>
<div class="equation" id="eq4.7">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.7} F^{\prime \prime }(u)(vw)(s)=-\lambda (1+\tfrac {1}{n})(2+\tfrac {1}{n})\int ^{b^\prime }_{a^\prime } k(s,t) u(t)^{\tfrac {1}{n}}(vw)(t){\rm d}t, \end{equation}
  </div>
  <span class="equation_label">4.51</span>
</p>
</div>
<p> where \(v,w\in D\) and \(s\in [a^\prime ,b^\prime ]\). </p>
<p>Let \(x_0(t)=f(t)\), \(\gamma =\min _{s\in [a^\prime ,b^\prime ]} f(s)\), \(\delta =\max _{s\in [a^\prime ,b^\prime ]} f(s)\) and \(M=\max _{s\in [a^\prime ,b^\prime ]}\int ^{b^\prime }_{a^\prime } |k(s,t)|{\rm d}t\). Then, for any \(v,w\in D\), </p>
<div class="displaymath" id="eq4.8">
  \begin{align} & \| [F^{\prime \prime }(x)-F^{\prime \prime }(x_0)](vw)\| \le \label{eq4.8}\\ & \le |\lambda |(1+\tfrac {1}{n})(2+\tfrac {1}{n})\max _{s\in [a^\prime ,b^\prime ]}\int ^{b^\prime }_{a^\prime } |k(s,t)|\cdot \big|x(t)^{\tfrac {1}{n}}-f(t)^{\tfrac {1}{n}}\big|{\rm d}t\| vw\| \nonumber \\ & \! =\! |\lambda |(1\! +\! \tfrac {1}{n})(2\! +\! \tfrac {1}{n})\max _{s\in [a^\prime ,b^\prime ]}\int ^{b^\prime }_{a^\prime } |k(s,t)|\tfrac {|x(t)\! -\! f(t)|}{x(t)^{\tfrac {n-1}{n}}+x(t)^{\tfrac {n-2}{n}}f(t)^{\tfrac {1}{n}}+\cdots +f(t)^{\tfrac {n-1}{n}}} {\rm d}t\| vw\| \nonumber \\ & \le |\lambda |(1+\tfrac {1}{n})(2+\tfrac {1}{n})\max _{s\in [a^\prime ,b^\prime ]}\int ^{b^\prime }_{a^\prime } |k(s,t)|\tfrac {|x(t)-f(t)|}{f(t)^{\tfrac {n-1}{n}}} {\rm d}t\| vw\| \nonumber \\ & \le \tfrac {|\lambda |(1+\tfrac {1}{n})(2+\tfrac {1}{n})}{\gamma ^{\tfrac {n-1}{n}}} \max _{s\in [a^\prime ,b^\prime ]}\int ^{b^\prime }_{a^\prime } |k(s,t)|\cdot |x(t)-f(t)| {\rm d}t\| vw\| \nonumber \\ & \le \tfrac {|\lambda |(1+\tfrac {1}{n})(2+\tfrac {1}{n})M}{\gamma ^{\tfrac {n-1}{n}}}\| x-x_0\| \| vw\| ,\nonumber \end{align}
</div>
<p> which means </p>
<div class="equation" id="eq4.9">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.9} \begin{array}{lll} \| F^{\prime \prime }(x)-F^{\prime \prime }(x_0)\|  \le \tfrac {|\lambda |(1+\tfrac {1}{n})(2+\tfrac {1}{n})M}{\gamma ^{\tfrac {n-1}{n}}}\| x-x_0\| . \end{array} \end{equation}
  </div>
  <span class="equation_label">4.53</span>
</p>
</div>
<p> Next, we give a bound for \(\| F^\prime (x_0)^{-1}\| \). Using (4.6), we have that </p>
<div class="equation" id="eq4.10">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.10} \| I-F^\prime (x_0)\| \le |\lambda |(2+\tfrac {1}{n}) \delta ^{1+\tfrac {1}{n}}M. \end{equation}
  </div>
  <span class="equation_label">4.54</span>
</p>
</div>
<p> It follows from the Banach lemma on invertible operators that \(F^\prime (x_0)^{-1}\) exists if \(|\lambda |(2+\tfrac {1}{n})\delta ^{1+\tfrac {1}{n}}M{\lt}1\), and </p>
<div class="equation" id="eq4.11">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.11} \| F^\prime (x_0)^{-1}\| \le \tfrac {1}{1-|\lambda |(2+\tfrac {1}{n}) \delta ^{1+\tfrac {1}{n}}M}. \end{equation}
  </div>
  <span class="equation_label">4.55</span>
</p>
</div>
<p> On the other hand, we have from (4.5) and (4.7) that \(\| F(x_0)\| \le |\lambda |\delta ^{2+\tfrac {1}{n}}M\) and \(\| F^{\prime \prime }(x_0)\| \le |\lambda |(1+\tfrac {1}{n})(2+\tfrac {1}{n})\delta ^{\tfrac {1}{n}}M\). Hence, if \(|\lambda |(2+\tfrac {1}{n})\delta ^{1+\tfrac {1}{n}}M{\lt}1\), the weak Lipschitz condition (1.3) is true for </p>
<div class="equation" id="eq4.12">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.12} L=\tfrac {|\lambda |(1+\tfrac {1}{n})(2+\tfrac {1}{n})M}{\gamma ^{\tfrac {n-1}{n}}[1-|\lambda |(2+\tfrac {1}{n}) \delta ^{1+\tfrac {1}{n}}M]} \end{equation}
  </div>
  <span class="equation_label">4.56</span>
</p>
</div>
<p> and constants \(\eta \) and \(\beta \) in Theorem 2.1 can be given by </p>
<div class="equation" id="eq4.13">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.13} \eta =\tfrac {|\lambda |\delta ^{2+\tfrac {1}{n}}M}{1-|\lambda |(2+\tfrac {1}{n}) \delta ^{1+\tfrac {1}{n}}M},\quad \beta =\tfrac {|\lambda |(1+\tfrac {1}{n})(2+\tfrac {1}{n})\delta ^{\tfrac {1}{n}}M}{1-|\lambda |(2+\tfrac {1}{n}) \delta ^{1+\tfrac {1}{n}}M}. \end{equation}
  </div>
  <span class="equation_label">4.57</span>
</p>
</div>
<p> Next we let \([a^\prime ,b^\prime ]=[0,1]\), \(n=2\), \(f(s)=1\), \(\lambda =0.8\), and let \(k(s,t)\) be the Green kernel on \([0,1]\times [0,1]\) defined by </p>
<div class="equation" id="eq4.14">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.14} G(s,t)=\left\{  \begin{array}{ll} t(1-s), &  \hbox{$t\le s$;} \\ s(1-t), &  \hbox{$s\le t$.} \end{array} \right. \end{equation}
  </div>
  <span class="equation_label">4.58</span>
</p>
</div>
<p> Consider the following particular case of (4.4): </p>
<div class="equation" id="eq4.15">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.15} u(s)=f(s)+0.8\int ^1_0 G(s,t) u(t)^{\tfrac {5}{2}}{\rm d}t,\quad s\in [0,1]. \end{equation}
  </div>
  <span class="equation_label">4.59</span>
</p>
</div>
<p> Then, \(\gamma =\delta =1\) and \(M=\tfrac {1}{8}\). Moreover, we have that </p>
<div class="equation" id="eq4.16">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.16} \eta =\tfrac {2}{15},\quad \beta =\tfrac {1}{2},\quad L=\tfrac {1}{2}. \end{equation}
  </div>
  <span class="equation_label">4.60</span>
</p>
</div>
<p> By (1.6), we get </p>
<div class="equation" id="eq4.17">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.17} R=\tfrac {\sqrt{\beta ^2+4L}-\beta }{2L}=1. \end{equation}
  </div>
  <span class="equation_label">4.61</span>
</p>
</div>
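The constants (4.16) and the radius (4.17) follow from (4.12), (4.13) and (1.6); the following verification sketch (not part of the analysis) evaluates them with \(\gamma =\delta =1\) and \(M=\tfrac {1}{8}\):

```python
# Check the constants of Example 8: [a', b'] = [0, 1], n = 2, f(s) = 1,
# lambda = 0.8, Green kernel with M = max_s \int_0^1 |G(s,t)| dt = 1/8.
import math

lam, n, gamma, delta, M = 0.8, 2, 1.0, 1.0, 1.0 / 8

# Banach-lemma condition of (4.11):
denom = 1 - lam * (2 + 1 / n) * delta ** (1 + 1 / n) * M
assert denom > 0

eta = lam * delta ** (2 + 1 / n) * M / denom                  # (4.13)
beta = lam * (1 + 1 / n) * (2 + 1 / n) * delta ** (1 / n) * M / denom
L = lam * (1 + 1 / n) * (2 + 1 / n) * M / (gamma ** ((n - 1) / n) * denom)  # (4.12)
R = (math.sqrt(beta**2 + 4 * L) - beta) / (2 * L)             # (1.6)
print(eta, beta, L, R)  # approximately 2/15, 1/2, 1/2, 1
```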
<p> Hence, \(\overline{U}(x_0,R)\subset D\). We can also verify that function \(\phi \) has the smallest zero \(R_0=0.15173576\) on \((\eta _0,R)\), and the conditions </p>
<div class="displaymath" id="a0000000070">
  \[  \eta =\tfrac {2}{15}\approx 0.133333{\lt}\tfrac {R}{1+\tfrac {\beta R}{2}}=0.8,  \]
</div>
<div class="displaymath" id="a0000000071">
  \[  (LR_0+\beta )\eta _0^2=0.010955869\le 4R_0^2\beta (1-\alpha )^2=0.076703659,  \]
</div>
<div class="displaymath" id="a0000000072">
  \[  (LR_0+\beta )\eta _0^2=0.010955869{\lt}2R_0(1-\alpha )^2=0.25275406  \]
</div>
<p> are satisfied. Hence, all conditions in Theorem 2.1 are satisfied. Consequently, the sequence \(\{ x_n\} \) generated by Halley’s method (1.2) with initial point \(x_0\) converges to the unique solution \(x^\star \) of Eq. (4.15) on \(\overline{U}(x_0,1)\).<span class="qed">□</span></p>

  </div>
</div> </p>
<p><div class="example_thmwrapper " id="a0000000073">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">9</span>
  </div>
  <div class="example_thmcontent">
  <p>Let \(X=Y=\mathbb R\), \(D=(-1,1)\) and define \(F\) on \(D\) by </p>
<div class="equation" id="eq4.18">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq4.18} F(x)=e^x-1. \end{equation}
  </div>
  <span class="equation_label">4.62</span>
</p>
</div>
<p> Then, \(x^\star =0\) is a solution of Eq. (1.1), and \(F^\prime (x^\star )=1\). Note that for any \(x\in D\), we have </p>
<div class="displaymath" id="eq4.19">
  \begin{align}  \label{eq4.19}|F^\prime (x^\star )^{-1}(F^\prime (x)-F^\prime (x^\star ))|& =|F^\prime (x^\star )^{-1}(F^{\prime \prime }(x)-F^{\prime \prime }(x^\star ))|\\ & =|{\rm e}^x-1|=|x(1+\tfrac {x}{2!}+\tfrac {x^2}{3!}+\cdots )|\nonumber \\ & \le |x(1+\tfrac {1}{2!}+\tfrac {1}{3!}+\cdots )|=({\rm e}-1)|x-x^\star |.\nonumber \end{align}
</div>
<p>Then, we can choose \(d=l={\rm e}-1\) in Theorem 3.1. It is easy to get \(c=1\), \(r_0=0.2837798914\), \(r_1=0.2575402082\) and \(r=r_1\). Then, all conditions of Theorem 3.1 are satisfied. Let us choose \(x_0=0.25\) and suppose the sequence \(\{ x_n\} \) is generated by Halley’s method (1.2). Table 1 compares the two sides of error estimate (3.9) for Example 4.3, showing that the estimate holds. </p>
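The radii \(r_0\) and \(r_1\) quoted above can be recomputed by bisection on the polynomials \(p_0\) and \(p_1\) of (3.1) and (3.5); a verification sketch (bisection assumes a single sign change on \((0,\tfrac {1}{l})\), which holds here):

```python
# Recompute the radii of Example 9: c = 1, d = l = e - 1, r = min{r0, r1}.
import math

c = 1.0
d = l = math.e - 1

def p0(t):  # polynomial (3.1)
    return (c + d * t) * (1 + l / 2 * t) * t - 2 * (1 - l * t) ** 2

def p1(t):  # polynomial (3.5)
    return ((10 * d * (1 - l * t) + (2 * d * t + 3 * c) * (c + d * t)) * t**2
            - 6 * (2 * (1 - l * t) ** 2 - (c + d * t) * (1 + l / 2 * t) * t))

def bisect(p, lo, hi, tol=1e-12):
    # assumes p(lo) < 0 < p(hi) with one sign change in between
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

r0 = bisect(p0, 0.0, 1 / l)
r1 = bisect(p1, 0.0, 1 / l)
r = min(r0, r1)
print(r0, r1, r)  # approximately 0.28378, 0.25754, 0.25754
```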
<div class="centered"> <small class="scriptsize"><small class="small"><div class="table"  id="tab:1">
   <figcaption>
  <span class="caption_title">Table</span> 
  <span class="caption_ref">1</span> 
  <span class="caption_text"><small class="footnotesize">The comparison results of error estimates for Example 4.3</small></span> 
</figcaption>  <table class="tabular">
  <tr>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:left" 
        rowspan=""
        colspan="">
      <p> &#8195;\(n\)&#8195;&#8195;&#8195;&#8195;</p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> &#8195;&#8195;left-hand side of (3.9)&#8195;&#8195;&#8195;&#8195;&#8195;</p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> &#8195;&#8195;right-hand side of (3.9)&#8195;&#8195;&#8195;&#8195;</p>

    </td>
  </tr>
  <tr>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:left" 
        rowspan=""
        colspan="">
      <p>&#8195;0 </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> 1.29<span class="rm">e</span>-03 </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> 1.84<span class="rm">e</span>-01</p>

    </td>
  </tr>
  <tr>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:left" 
        rowspan=""
        colspan="">
      <p>&#8195;1 </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> 1.81<span class="rm">e</span>-10 </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> 3.66<span class="rm">e</span>-09</p>

    </td>
  </tr>
  <tr>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:left" 
        rowspan=""
        colspan="">
      <p>&#8195;2 </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> 4.91<span class="rm">e</span>-31 </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> 9.90<span class="rm">e</span>-30</p>

    </td>
  </tr>
  <tr>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:left" 
        rowspan=""
        colspan="">
      <p>&#8195;3 </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> 9.84<span class="rm">e</span>-93 </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> 1.99<span class="rm">e</span>-91</p>

    </td>
  </tr>
  <tr>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:left" 
        rowspan=""
        colspan="">
      <p>&#8195;4 </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> 7.93<span class="rm">e</span>-278 </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> 1.60<span class="rm">e</span>-276</p>

    </td>
  </tr>
  <tr>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:left; border-bottom-style:solid; border-bottom-color:black; border-bottom-width:1px" 
        rowspan=""
        colspan="">
      <p>&#8195;5 </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center; border-bottom-style:solid; border-bottom-color:black; border-bottom-width:1px" 
        rowspan=""
        colspan="">
      <p> 4.16<span class="rm">e</span>-833 </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center; border-bottom-style:solid; border-bottom-color:black; border-bottom-width:1px" 
        rowspan=""
        colspan="">
      <p> 8.39<span class="rm">e</span>-832</p>

    </td>
  </tr>
</table> 
</div></small></small> </div>
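The first rows of Table 1 can be reproduced in standard double precision (the later rows require multiple-precision arithmetic); here is a sketch, assuming Halley's iteration in the form \(x_{n+1}=x_n-(1-L_F(x_n))^{-1}F^\prime (x_n)^{-1}F(x_n)\), consistent with the operators used in the proofs above:

```python
# Halley's method for F(x) = e^x - 1 with x* = 0 and x0 = 0.25, written as
# x_{n+1} = x_n - (1 - L_F(x_n))^{-1} F(x_n)/F'(x_n), L_F = F F'' / (2 F'^2).
# Double precision reproduces only rows n = 0, 1 of Table 1; the remaining
# rows need multiple-precision arithmetic.
import math

def halley_step(x):
    F, dF, d2F = math.expm1(x), math.exp(x), math.exp(x)  # expm1 is accurate near 0
    L_F = F * d2F / (2 * dF**2)
    return x - (F / dF) / (1 - L_F)

x = 0.25
errors = []
for _ in range(2):
    x = halley_step(x)
    errors.append(abs(x))  # |x_{n+1} - x*| since x* = 0
print(errors)  # approximately [1.29e-03, 1.81e-10], matching rows 0 and 1
```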
<p><span class="qed">□</span></p>

  </div>
</div> </p>
<p><div class="acknowledgement_thmwrapper " id="a0000000074">
  <div class="acknowledgement_thmheading">
    <span class="acknowledgement_thmcaption">
    Acknowledgements
    </span>
  </div>
  <div class="acknowledgement_thmcontent">
  <p>This work was supported by the National Natural Science Foundation of China (Grant No. 10871178). </p>

  </div>
</div> </p>
<div class="bibliography">
<h1>Bibliography</h1>
<dl class="bibliography">
  <dt><a name="Arg1">1</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i>, <i class="it">The convergence of Halley-Chebyshev type method under Newton-Kantorovich hypotheses</i>, Appl. Math. Lett., <b class="bf">6</b> (1993), pp. 71–74. </p>
</dd>
  <dt><a name="Arg2">2</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i>, <i class="it">Computational theory of iterative methods</i>, Series: Studies in Computational Mathematics 15, Editors: C.K. Chui and L. Wuytack, Elsevier Publ. Co., New York, USA, 2007. </p>
</dd>
  <dt><a name="Arg3">3</a></dt>
  <dd><p><i class="sc">I.K. Argyros, Y.J. Cho</i> and <i class="sc">S. Hilout</i>, <i class="it">On the semilocal convergence of the Halley method using recurrent functions</i>, J. Appl. Math. Computing, <b class="bf">37</b> (2011), pp. 221–246. </p>
</dd>
  <dt><a name="Arg3.5">4</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i> and <i class="sc">H.M. Ren</i>, <i class="it">Ball convergence theorems for Halley’s method in Banach spaces</i>, J. Appl. Math. Computing, <b class="bf">38</b> (2012), pp. 453–465. </p>
</dd>
  <dt><a name="Arg4">5</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i> and <i class="sc">H.M. Ren</i>, <i class="it">On the Halley method in Banach space</i>, Applicationes Mathematicae, to appear 2012. </p>
</dd>
  <dt><a name="Deu">6</a></dt>
  <dd><p><i class="sc">P. Deuflhard</i>, <i class="it">Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms</i>, Springer-Verlag, Berlin, Heidelberg, 2004. </p>
</dd>
  <dt><a name="Gut">7</a></dt>
  <dd><p><i class="sc">J.M. Gutiérrez</i> and <i class="sc">M.A. Hernández</i>, <i class="it">Newton’s method under weak Kantorovich conditions</i>, IMA J. Numer. Anal., <b class="bf">20</b> (2000), pp. 521–532. </p>
</dd>
  <dt><a name="Xu">8</a></dt>
  <dd><p><i class="sc">X.B. Xu</i> and <i class="sc">Y.H. Ling</i>, <i class="it">Semilocal convergence for Halley’s method under weak Lipschitz condition</i>, Appl. Math. Comput., <b class="bf">215</b> (2009), pp. 3057–3067. </p>
</dd>
</dl>


</div>
</div> <!--main-text -->
</div> <!-- content-wrapper -->
</div> <!-- content -->
</div> <!-- wrapper -->

<nav class="prev_up_next">
</nav>

<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/jquery.min.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/plastex.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/svgxuse.js"></script>
</body>
</html>