
Efficient global optimization of multivariate functions with unknown Hölder constants via α-dense curves

Nor Sahnoune, Djaouida Guettal and Mohamed Rahal
(Date: August 02, 2025; accepted: December 04, 2025; published online: December 15, 2025.)
Abstract.

In this paper, we consider the global optimization problem of non-smooth functions satisfying the Hölder condition over an n-dimensional box. We focus our study on the case when the Hölder constant is not a priori known. We develop and analyze two algorithms. The first is an extended version of Piyavskii's method that adaptively constructs linear secants of the Hölder support functions, avoiding the need for a known Hölder constant. The second algorithm employs a reducing transformation approach, which consists of generating, in the feasible box, an α-dense curve, effectively converting the initial multivariate problem into a problem of a single variable. We prove the convergence of both algorithms. Their practical efficiency is evaluated through numerical experiments on test functions and comparison with existing techniques.

Key words and phrases:
Global optimization, Hölder condition, covering methods, Piyavskii’s method, reducing transformation, α-dense curves.
2005 Mathematics Subject Classification:
90C26, 65K05
Laboratory of Fundamental and Numerical Mathematics (LMFN), Department of Mathematics, University of Setif 1 Ferhat Abbas, Setif 19000, Algeria, e-mail: nor.sahnoune@univ-setif.dz, mrahal@univ-setif.dz
Laboratory of Fundamental and Numerical Mathematics (LMFN), Department of Basic Education in Technology, University of Setif 1 Ferhat Abbas, Setif 19000, Algeria, e-mail: djaouida.guettal@univ-setif.dz
This work has been supported by La Direction Générale de la Recherche Scientifique et du Développement Technologique (DGRSDT-MESRS), under the PRFU project number C00L03UN190120190004.

1. Introduction

Let us consider the following box-constrained global optimization problem of finding at least one point \(x^* \in D\) and the corresponding optimal value \(g^*\) such that:

(1) \(g^* = g(x^*) = \min_{x \in D} g(x),\)

where the objective function \(g : D \to \mathbb{R}\) is non-smooth and non-convex. It is assumed to satisfy the Hölder condition on the \(n\)-dimensional compact box \(D\):

\(D = [a_1, b_1] \times \cdots \times [a_n, b_n] \subset \mathbb{R}^n.\)

Specifically, there exist a Hölder constant \(h_g > 0\) and a Hölder exponent \(1/\beta\) (\(\beta > 1\)) such that:

(2) \(|g(x) - g(y)| \le h_g \|x - y\|^{1/\beta}, \quad \text{for all } x, y \in D.\)

Here, \(\|\cdot\|\) denotes the Euclidean norm. A primary focus of this work is the case when the Hölder constant \(h_g\) is not known a priori.

Many real-world optimization problems are complex and involve minimizing continuous functions of \(n\) variables possessing multiple extrema over the feasible domain. Frequently, derivative information is either unavailable or expensive to compute, rendering standard non-linear programming methods ineffective for finding global optima, as they typically converge only to local minima. Global optimization has thus become an active field of research [11]. The challenges are often amplified by local irregularities (non-smoothness) and the potential existence of numerous local minima within the feasible set. Deterministic global optimization strategies, particularly covering methods, offer a rigorous approach to tackle such problems [11, 19]. The first work on covering methods for univariate Lipschitz functions (the case \(\beta = 1\) in (2)) was performed by Piyavskii [13], Evtushenko [6], and Shubert [17]. The widely discussed Piyavskii-Shubert algorithm guarantees convergence to the global minimum by constructing a sequence of improving piecewise lower-bounding functions (sub-estimators) based on the known Lipschitz constant [11]. More recently, research has addressed the optimization of less regular functions satisfying the Hölder condition with \(\beta > 1\) [7, 12]. Gourdin et al. were among the first to study the global optimization of both univariate and multivariate Hölder functions using such deterministic approaches [7, 10].

The aim of this paper is twofold. First, we develop a novel technique for univariate Hölder optimization (\(n = 1\)), extending the ideas of Piyavskii. This method utilizes the secants linked to the Hölder support functions between evaluated points. We address both the case where the Hölder constant \(h_f\) is known and, significantly, the case where it is unknown. For the latter, we propose an adaptive estimation scheme for \(h_f\), which is crucial for practical application and performance. Second, we address the multivariate case (\(n > 1\)). Direct generalization of Piyavskii's algorithm to higher dimensions is computationally challenging, primarily because finding the minimum of the multivariate sub-estimator involves determining intersections of complex hypersurfaces [7, 20]. To overcome this, we propose an approach based on dimension reduction. This method transforms the original \(n\)-dimensional problem (1) into a one-dimensional Hölder optimization problem by exploring the box \(D\) along an α-dense curve [22, 23, 24]. The univariate algorithm developed in the first part can then be effectively applied to this simpler problem.

The paper is organized as follows. Section 2 presents the univariate algorithm based on Hölder support functions and secant information. Section 3 describes the reducing transformation method for the multivariate case, detailing the use of a specific α-dense curve. Section 4 provides numerical results from experiments on test functions and compares the performance with existing methods. Section 5 concludes the work.

2. Univariate Hölder global optimization

Let us begin by considering problem (1), (2) for a function \(f\) of a single variable, i.e.,

(3) \(\min_{x \in [a,b]} f(x), \quad a, b \in \mathbb{R},\)

where the objective function \(f\) satisfies the following Hölder condition:

(4) \(|f(x) - f(y)| \le h_f |x - y|^{1/\beta}, \quad \text{for all } x, y \in [a,b],\)

with a constant \(h_f > 0\) and \(\beta > 1\). Let \(\varepsilon > 0\) be the desired accuracy with which the global minimum is to be searched.

2.1. Sub-estimator function

When minimizing a non-convex function f over a feasible set, the general principle behind most deterministic global optimization methods is to relax the original non-convex problem in order to make the relaxed problem convex by utilizing a sub-estimator of the objective function f. The Hölder condition (4) allows one to construct such a lower bound for f over an interval [a,b]. From (4) we have

\(f(y) - h_f |x - y|^{1/\beta} \le f(x), \quad \forall x, y \in [a,b].\)

Let \(x_{i-1}\) and \(x_i\) be two distinct points in \([a,b]\), with \(x_{i-1} < x_i\). We define the following functions:

\(U_{i-1}(x) = f(x_{i-1}) - h_f (x - x_{i-1})^{1/\beta}, \quad \text{for } x \ge x_{i-1},\)
\(U_i(x) = f(x_i) - h_f (x_i - x)^{1/\beta}, \quad \text{for } x \le x_i.\)

By construction (setting \(y = x_{i-1}\) or \(y = x_i\) in the Hölder inequality), these functions are lower bounds for \(f(x)\) on their respective domains of definition within \([a,b]\):

\(U_{i-1}(x) \le f(x), \quad \forall x \in [a,b] \text{ such that } x \ge x_{i-1},\)
\(U_i(x) \le f(x), \quad \forall x \in [a,b] \text{ such that } x \le x_i.\)

The functions \(U_{i-1}\) and \(U_i\) are thus sub-estimators of \(f\). Over any sub-interval \([x_{i-1}, x_i] \subset [a,b]\), both functions are well-defined lower bounds. We can then define a tighter sub-estimator \(\psi_i(x)\) on this sub-interval:

\(\psi_i(x) = \max\{U_{i-1}(x), U_i(x)\}.\)

This function \(\psi_i(x)\) is convex (as the maximum of two convex functions: since \(1/\beta \le 1\), the terms \((x - x_{i-1})^{1/\beta}\) and \((x_i - x)^{1/\beta}\) are concave, so \(U_{i-1}\) and \(U_i\) are convex) and non-differentiable (typically at the point where \(U_{i-1}(x) = U_i(x)\)). It also satisfies:

(5) \(\psi_i(x) \le f(x), \quad \forall x \in [x_{i-1}, x_i].\)

The global minimum of \(\psi_i(x)\) over \([x_{i-1}, x_i]\) occurs at the point \(\bar{x}\) where \(U_{i-1}\) and \(U_i\) intersect (assuming such an intersection exists within the interval). This point \(\bar{x}\) is given by:

\(\bar{x} = \arg\min_{[x_{i-1}, x_i]} \psi_i(x).\)

Thus, in each iteration of Piyavskii's algorithm, we must solve the following non-linear equation to find this intersection point:

(6) \(U_{i-1}(x) - U_i(x) = 0.\)

Determining the unique solution \(\bar{x}\) of equation (6) within \([x_{i-1}, x_i]\) is generally straightforward only for specific values of \(\beta\). For instance, Gourdin et al. [7] provide analytical expressions for this solution when \(\beta \in \{2, 3, 4\}\), assuming \(h_f\) is known a priori. However, when \(\beta\) is large or not an integer, solving equation (6) can be as complex as a general non-linear local optimization problem. To overcome this difficulty, we propose a new procedure below.
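For orientation, when no closed form is available, equation (6) can always be solved numerically: \(U_{i-1} - U_i\) is continuous and strictly decreasing on \([x_{i-1}, x_i]\) and changes sign there by the Hölder inequality, so bisection applies. The following minimal Python sketch (illustrative names of our own, not code from the paper) shows this baseline, whose per-iteration cost motivates the cheaper secant procedure below:

```python
def intersection_by_bisection(x_prev, x_cur, f_prev, f_cur, h, beta, tol=1e-12):
    """Solve U_{i-1}(x) = U_i(x) on [x_prev, x_cur] by bisection.
    Valid because U_{i-1} - U_i is continuous and strictly decreasing
    there; a baseline sketch only."""
    p = 1.0 / beta

    def diff(x):
        # U_{i-1}(x) - U_i(x)
        return (f_prev - h * (x - x_prev) ** p) - (f_cur - h * (x_cur - x) ** p)

    lo, hi = x_prev, x_cur
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if diff(mid) > 0:
            lo = mid        # root lies to the right of mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```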

2.2. Procedure approach to the intersection point

Let \(\theta_i\) be the common midpoint of the intervals \([U_i(x_{i-1}), U_{i-1}(x_{i-1})]\) and \([U_{i-1}(x_i), U_i(x_i)]\), so that

(7) \(\theta_i = \dfrac{f(x_{i-1}) + f(x_i) - h_f (x_i - x_{i-1})^{1/\beta}}{2}.\)

As \(U_{i-1}(x)\) and \(U_i(x)\) are strictly monotone continuous functions on the interval \([x_{i-1}, x_i]\), they are bijective. Let \(U_{i-1}^{-1}\) and \(U_i^{-1}\) be the inverse functions of \(U_{i-1}\) and \(U_i\), respectively. We denote \(\mu_i = U_{i-1}^{-1}(\theta_i)\) and \(\vartheta_i = U_i^{-1}(\theta_i)\). According to the definition of \(U_{i-1}\) and \(U_i\) we have:

(8) \(\mu_i = x_{i-1} + \left(\dfrac{f(x_{i-1}) - \theta_i}{h_f}\right)^{\beta}, \qquad \vartheta_i = x_i - \left(\dfrac{f(x_i) - \theta_i}{h_f}\right)^{\beta}.\)

Let \(S_-(x)\) and \(S_+(x)\) be the two straight secants linked to the arcs of \(U_{i-1}\) and \(U_i\) on the interval \([\mu_i, \vartheta_i] \subset [x_{i-1}, x_i]\), joining respectively the points \((\mu_i, \theta_i)\), \((\vartheta_i, U_{i-1}(\vartheta_i))\) and \((\mu_i, U_i(\mu_i))\), \((\vartheta_i, \theta_i)\). We get

\(S_-(x) = f(x_{i-1}) - h_f (\vartheta_i - x_{i-1})^{1/\beta} + h_f \dfrac{(\mu_i - x_{i-1})^{1/\beta} - (\vartheta_i - x_{i-1})^{1/\beta}}{\vartheta_i - \mu_i}\,(x - \vartheta_i),\)
\(S_+(x) = f(x_i) - h_f (x_i - \vartheta_i)^{1/\beta} + h_f \dfrac{(x_i - \mu_i)^{1/\beta} - (x_i - \vartheta_i)^{1/\beta}}{\vartheta_i - \mu_i}\,(x - \vartheta_i).\)

The intersection point of the two secants \(S_-\) and \(S_+\) is the point \(z_i\), which approximates the intersection point \(\bar{x}\) of the two arcs \(U_{i-1}\) and \(U_i\) (see Fig. 1).

Figure 1. Illustration of the secant-based approximation for the sub-estimator’s intersection point.

Then

(9) \(z_i = \dfrac{\left[f(x_{i-1}) - f(x_i) - h_f (\vartheta_i - x_{i-1})^{1/\beta} + h_f (x_i - \vartheta_i)^{1/\beta}\right](\vartheta_i - \mu_i)}{h_f \left[(x_i - \mu_i)^{1/\beta} - (\mu_i - x_{i-1})^{1/\beta} - (x_i - \vartheta_i)^{1/\beta} + (\vartheta_i - x_{i-1})^{1/\beta}\right]} + \vartheta_i.\)
Remark 1.

If \(f(x_{i-1}) = f(x_i)\), the approximate point \(\bar{x}\) is immediately the midpoint of the interval \([x_{i-1}, x_i]\). In this case, one can choose:

(10) \(z_i = \bar{x} = \dfrac{x_{i-1} + x_i}{2}.\)
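The computation of \(z_i\) via (7)-(10) is direct. A minimal Python sketch (the function and variable names are ours, for illustration only):

```python
def secant_intersection(x_prev, x_cur, f_prev, f_cur, h, beta):
    """Approximate the intersection z_i of the support functions
    U_{i-1}, U_i on [x_prev, x_cur], following (7)-(10)."""
    if f_prev == f_cur:                      # Remark 1: symmetric case (10)
        return 0.5 * (x_prev + x_cur)
    p = 1.0 / beta
    # (7): common midpoint level theta_i
    theta = 0.5 * (f_prev + f_cur - h * (x_cur - x_prev) ** p)
    # (8): preimages mu_i, vartheta_i of theta_i under U_{i-1} and U_i
    mu = x_prev + ((f_prev - theta) / h) ** beta
    vt = x_cur - ((f_cur - theta) / h) ** beta
    # (9): intersection of the secants S_- and S_+
    num = (f_prev - f_cur - h * (vt - x_prev) ** p
           + h * (x_cur - vt) ** p) * (vt - mu)
    den = h * ((x_cur - mu) ** p - (mu - x_prev) ** p
               - (x_cur - vt) ** p + (vt - x_prev) ** p)
    return num / den + vt
```

For instance, with \(x_{i-1} = 0\), \(x_i = 1\), \(f(x_{i-1}) = 0\), \(f(x_i) = 0.5\), \(h_f = 1\) and \(\beta = 2\), this sketch returns \(z_i \approx 0.192\), close to the exact intersection \(\bar{x} \approx 0.170\) of the two support functions.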
Proposition 2.

Let \(f\) be a real univariate Hölder function on the interval \([a,b]\), with constant \(h_f > 0\) and \(\beta > 1\). Let \(\mathbf{M}_i = \min\{U_{i-1}(z_i), U_i(z_i)\}\) (a constant lower bound of \(f\) on \([x_{i-1}, x_i] \subset [a,b]\)). Then we have:

(11) \(\mathbf{M}_i < f(x), \quad \forall x \in [x_{i-1}, x_i].\)
Proof.

The value \(\mathbf{M}_i\) is obtained by replacing the variable \(x\) in the two functions \(U_{i-1}(x)\) and \(U_i(x)\) by the expression of \(z_i\). We have defined the function

\(\psi_i(x) = \max\{U_{i-1}(x), U_i(x)\}\)

as a sub-estimator of \(f(x)\) over the interval \([x_{i-1}, x_i]\). Then,

\(\psi_i(x) \le f(x), \quad \forall x \in [x_{i-1}, x_i].\)

The function \(U_{i-1}(x)\) is strictly decreasing and \(U_i(x)\) is strictly increasing over the interval \([x_{i-1}, x_i]\). Thus, it follows that

\(\min\{U_{i-1}(x), U_i(x)\} \le \min_{[x_{i-1}, x_i]} \psi_i(x), \quad \forall x \in [x_{i-1}, x_i].\)

In particular, for \(x = z_i\) it then follows that:

\(\mathbf{M}_i = \min\{U_{i-1}(z_i), U_i(z_i)\} < f(x), \quad \forall x \in\, ]x_{i-1}, x_i[.\) ∎

Algorithm 1 MSPA with known value of \(h_f\)
1: Step 1: Initialization
2: \([a,b]\) a given search interval, Hölder parameters \(h_f\), \(\beta > 1\),
3: \(\varepsilon\) accuracy of the global minimization
4: \(k \leftarrow 2\), \(x_1 \leftarrow a\), \(x_2 \leftarrow b\)
5: Step k: The sampling points \(x_1, x_2, \ldots, x_k\) are ordered such that
\(a = x_1 < x_2 < \cdots < x_k = b.\)
6: for \(i = 2\) to \(k\) do
7:   if \(f(x_{i-1}) \ne f(x_i)\) then
8:    \(\theta_i\) as defined in (7), \(\mu_i, \vartheta_i\) as defined in (8), \(z_i\) as defined in (9)
9:   else
10:    \(z_i = \dfrac{x_{i-1} + x_i}{2}\)
11:   end if
12:   \(\mathbf{M}_i = \min\{f(x_{i-1}) - h_f (z_i - x_{i-1})^{1/\beta},\ f(x_i) - h_f (x_i - z_i)^{1/\beta}\}\)
13: end for
14:
(12) \(\mathbf{M}_\rho = \min\{\mathbf{M}_i : 2 \le i \le k\}\)
15: \(z_\rho = \arg\min\{\mathbf{M}_i : 2 \le i \le k\}\)
16: \(x_\rho = z_\rho\)
17: if \(|x_\rho - x_{\rho-1}| > \varepsilon\) then
18:
(13) \(x_{k+1} = z_\rho\)
19:   \(k \leftarrow k + 1\)
20:   go to Step k
21: else
22:   \(f_{\mathrm{opt}} = \min\{f(x_i) : 1 \le i \le k\}\)
23:   Stop
24: end if
25: return \(f_{\mathrm{opt}}\).
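The following Python sketch assembles the pieces into the MSPA loop, reusing the secant_intersection helper sketched after Remark 1. The names and the interval-width reading of the stopping test in lines 16-17 are our interpretation of the pseudocode, not the authors' MATLAB implementation:

```python
def mspa(f, a, b, h, beta, eps, max_iter=100000):
    """Sketch of Algorithm 1 (MSPA) with known Hölder constant h."""
    p = 1.0 / beta
    xs, fs = [a, b], [f(a), f(b)]                # sorted samples and their values
    for _ in range(max_iter):
        # compute z_i and the lower bound M_i on every subinterval
        best_M, z_rho, i_rho = float("inf"), None, None
        for i in range(1, len(xs)):
            z = secant_intersection(xs[i - 1], xs[i], fs[i - 1], fs[i], h, beta)
            M = min(fs[i - 1] - h * (z - xs[i - 1]) ** p,
                    fs[i] - h * (xs[i] - z) ** p)
            if M < best_M:                       # selection rule (12)
                best_M, z_rho, i_rho = M, z, i
        if xs[i_rho] - xs[i_rho - 1] <= eps:     # stopping test on the selected interval
            break
        xs.insert(i_rho, z_rho)                  # new sampling point x_{k+1} = z_rho, rule (13)
        fs.insert(i_rho, f(z_rho))
    return min(fs)
```

As a usage check, `mspa(lambda x: abs(x) ** 0.5, -1.0, 2.0, h=1.0, beta=2, eps=1e-4)` should return a value near 0, the global minimum of \(\sqrt{|x|}\), which is Hölder with \(h_f = 1\) and \(\beta = 2\).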

2.3. Convergence Result of MSPA

Theorem 3.

Let \(f(x)\) be a real function defined on a closed interval \([a,b]\), satisfying (4) with \(h_f > 0\) and \(\beta > 1\). Let \(x^*\) be a global minimizer of \(f(x)\) over \([a,b]\). Then the sequence \((x_k)_{k \ge 1}\) generated by the MSPA algorithm converges to \(x^*\), i.e.,

\(\lim_{k \to +\infty} f(x_k) = f(x^*) = \min_{x \in [a,b]} f(x).\)
Proof.

The proof is based on the construction of a sequence of points \((x_k)_{k \ge 1}\) generated by Algorithm 1, which converges to a limit point that is the global minimizer of \(f\) on \([a,b]\). Let \(x_1, x_2, x_3, \ldots\) be the sampling sequence satisfying (9), (12), (13). Assuming that \(x_m \ne x_{m'}\) for all \(m \ne m'\), the set of elements of the sequence \((x_k)_{k \ge 1}\) is infinite and therefore has at least one limit point in \([a,b]\). Let \(\mathbf{z}\) be any limit point of \((x_k)_{k \ge 1}\) such that \(\mathbf{z} \ne a\), \(\mathbf{z} \ne b\); then the convergence to \(\mathbf{z}\) is bilateral (see [12]). Consider the interval \([x_{\mu-1}, x_\mu]\) determined by (12) at the \((k+1)\)-th iteration. According to (9) and (13), the new point \(x_{k+1}\) divides the interval \([x_{\mu-1}, x_\mu]\) into the subintervals \([x_{\mu-1}, x_{k+1}]\) and \([x_{k+1}, x_\mu]\), so we can deduce

(14) \(\max\{x_{k+1} - x_{\mu-1},\ x_\mu - x_{k+1}\} < x_\mu - x_{\mu-1}.\)

Consider now an interval \([x_{\rho(k)-1}, x_{\rho(k)}]\) which contains \(\mathbf{z}\); using (9), (12), (13) and (14), we obtain:

(15) \(\lim_{k \to +\infty} (x_{\rho(k)} - x_{\rho(k)-1}) = 0.\)

In addition, the value \(\mathbf{M}_{\rho(k)}\) that corresponds to \([x_{\rho(k)-1}, x_{\rho(k)}]\) is given by

(16) \(\mathbf{M}_{\rho(k)} = \min\{f(x_{\rho(k)-1}) - h_f (z_\rho - x_{\rho(k)-1})^{1/\beta},\ f(x_{\rho(k)}) - h_f (x_{\rho(k)} - z_\rho)^{1/\beta}\},\)

where \(z_\rho\) is obtained by replacing \(i\) by \(\rho\) in (9). As \(\mathbf{z} \in [x_{\rho(k)-1}, x_{\rho(k)}]\), from (15) we then have

(17) \(\lim_{k \to +\infty} \mathbf{M}_{\rho(k)} = f(\mathbf{z}).\)

On the other hand, according to (11),

(18) \(\mathbf{M}_{j(k)} \le f(x), \quad \forall x \in [x_{j(k)-1}, x_{j(k)}].\)

From (12), \(\mathbf{M}_{\rho(k)} = \min\{\mathbf{M}_j,\ j = 2, \ldots, k\}\); then

\(\mathbf{M}_{\rho(k)} \le \mathbf{M}_{j(k)}, \quad \forall x \in [x_{j(k)-1}, x_{j(k)}],\)

and since \([a,b] = \bigcup_{j=2}^{k} [x_{j(k)-1}, x_{j(k)}]\), hence

(19) \(\lim_{k \to +\infty} \mathbf{M}_{\rho(k)} \le \mathbf{M}_{j(k)}, \quad \forall x \in [a,b],\)

and from (18), (19) we get

\(\lim_{k \to +\infty} \mathbf{M}_{\rho(k)} \le f(x), \quad \forall x \in [a,b].\)

Since \(x^*\) is the global minimizer of \(f\) over \([a,b]\),

\(\lim_{k \to +\infty} \mathbf{M}_{\rho(k)} \le f(x^*) \le f(\mathbf{z});\)

thus, we have

\(0 \le f(\mathbf{z}) - f(x^*) \le f(\mathbf{z}) - \lim_{k \to +\infty} \mathbf{M}_{\rho(k)} = 0,\)

then

\(f(\mathbf{z}) = f(x^*).\)

By the condition (4), the function \(f\) must be continuous on \([a,b]\), so that

\(f(\mathbf{z}) = f(\lim_{k \to +\infty} x_k) = \lim_{k \to +\infty} f(x_k) = f(x^*).\) ∎

2.4. Estimating the unknown Hölder constant

The MSPA algorithm presented in Section 2 applies to a class of Hölder continuous functions defined on a closed and bounded interval \([a,b]\) of \(\mathbb{R}\), under the assumption of a priori knowledge of the Hölder constant \(h_f > 0\). However, the constant \(h_f\) may be, in most situations, unknown, and no procedure is available to obtain a guaranteed overestimate of it. There is then a need for an approximate evaluation of a minimal value of \(h_f\). The algorithm we now extend does not require a priori knowledge of \(h_f\). To overcome this situation, a typical procedure is to look for an approximation of \(h_f\) during the course of the search ([11], [12], [21]). We consider a global estimate of \(h_f\) at each iteration. Let \(a = x_1 < x_2 < \cdots < x_k = b\) be the sampling points, with the values \(f(x_1), f(x_2), \ldots, f(x_k)\) already calculated. Let

\(h_f^i = \dfrac{|f(x_i) - f(x_{i-1})|}{(x_i - x_{i-1})^{1/\beta}}, \quad \text{for } i = 2, \ldots, k,\)

and

\(h_f^k = \max\{h_f^i,\ i = 2, \ldots, k\}.\)

Let \(\lambda > 1\) be a multiplicative parameter given as an input of the algorithm, and let \(\nu\) be a small positive number. The global estimate of \(h_f\) is then given by:

\(\tilde{h}_f = \begin{cases} \lambda\, h_f^k, & \text{if } f(x_i) \ne f(x_{i-1}) \text{ for some } i \ge 2, \\ \lambda\, \nu, & \text{otherwise}. \end{cases}\)

The condition \(f(x_{i-1}) \ne f(x_i)\) for some \(i\) indicates that the objective function \(f\) is not constant over the sampled points of \([a,b]\). Now, from the definition of the global estimate \(\tilde{h}_f\), we replace the constant \(h_f\) by \(\tilde{h}_f\) in the formulas for \(\theta_i\), \(\mu_i\) and \(\vartheta_i\); likewise, the point \((z_i, \mathbf{M}_i(z_i))\) is replaced in the structure of the MSPA algorithm by the point \((\tilde{z}_i, \mathbf{M}_i(\tilde{z}_i))\), where:

(20) \(\tilde{z}_i = \dfrac{\left[f(x_{i-1}) - f(x_i) - \tilde{h}_f (\vartheta_i - x_{i-1})^{1/\beta} + \tilde{h}_f (x_i - \vartheta_i)^{1/\beta}\right](\vartheta_i - \mu_i)}{\tilde{h}_f \left[(x_i - \mu_i)^{1/\beta} - (\mu_i - x_{i-1})^{1/\beta} - (x_i - \vartheta_i)^{1/\beta} + (\vartheta_i - x_{i-1})^{1/\beta}\right]} + \vartheta_i,\)
(21) \(\mathbf{M}_i(\tilde{z}_i) = \min\{f(x_{i-1}) - \tilde{h}_f (\tilde{z}_i - x_{i-1})^{1/\beta},\ f(x_i) - \tilde{h}_f (x_i - \tilde{z}_i)^{1/\beta}\}.\)

Finally, from (20) and (21) we obtain an algorithm, denoted MSPAes, which is based on the use of the estimated constant \(\tilde{h}_f\) during the course of the algorithm.
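A minimal Python sketch of this estimation step (the function name, and the policy of recomputing the estimate from the full current sample at every iteration, are our illustrative reading):

```python
def estimate_holder_constant(xs, fs, beta, lam=1.5, nu=1e-8):
    """Global estimate of h_f from the current sample (Section 2.4):
    take the largest observed Hölder quotient and inflate it by lam;
    fall back to lam * nu when all sampled values are equal."""
    quotients = [abs(fs[i] - fs[i - 1]) / (xs[i] - xs[i - 1]) ** (1.0 / beta)
                 for i in range(1, len(xs))]
    h_k = max(quotients)
    return lam * h_k if h_k > 0 else lam * nu
```

In an MSPAes-style loop, this estimate would be refreshed after each new function evaluation and substituted for \(h_f\) in (7)-(9), yielding (20)-(21).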

3. Multivariate Hölder global optimization

Let us consider now the multivariate case, i.e., the problem (1), (2) with \(x \in \mathbb{R}^n\) and \(n \ge 2\).

3.1. Reducing transformation procedure

Directly applying Piyavskii's method to multivariate Hölder optimization is often impractical due to the computational cost of repeatedly minimizing the multivariate sub-estimator function [7, 20]. This sub-estimator is typically the upper envelope of many individual support functions \(g(x_i) - h_g \|x - x_i\|^{1/\beta}\), and finding its minimum involves complex geometric operations (finding intersections of hypersurfaces).

To circumvent this difficulty, we employ a dimension reduction strategy. Such approaches, often utilizing space-filling or α-dense curves to map the n-dimensional domain to a one-dimensional interval, have a history in global optimization; see, e.g., Butz [4], Strongin [18] and Ziadi et al. [9, 16, 22, 23].

Our proposed multivariate algorithm combines a specific reducing transformation with the specialized univariate Hölder optimization algorithm MSPA developed in Section 2. The key idea is to define a parametric curve \(C_\alpha : [0,T] \to D\), where \(D \subset \mathbb{R}^n\) is the original search box, such that the curve \(C_\alpha(t) = (c_1(t), \ldots, c_n(t))\) is α-dense in \(D\). This property guarantees that for any point \(x \in D\), there exists \(t \in [0,T]\) such that \(\|C_\alpha(t) - x\| \le \alpha\).

Definition 4.

Let \(\mathbb{J}\) be an interval of \(\mathbb{R}\). We say that a parametrized curve of \(\mathbb{R}^n\) defined by \(C_\alpha : \mathbb{J} \to D = [a_1,b_1] \times \cdots \times [a_n,b_n]\) is α-dense in \(D\) if, for all \(x \in D\), there exists \(t \in \mathbb{J}\) such that

\(d(x, C_\alpha(t)) \le \alpha,\)

where \(d\) stands for the Euclidean distance in \(\mathbb{R}^n\).

By restricting the objective function \(g\) to this curve, we obtain a univariate function \(f : [0,T] \to \mathbb{R}\), defined by:

\(f(t) = g(C_\alpha(t)).\)

Crucially, if g is Hölder continuous and Cα(t) is sufficiently regular (Lipschitz), the resulting function f(t) is also Hölder continuous on the interval [0,T]. The original n-dimensional problem (1) is thus effectively reduced to the one-dimensional problem:

(P) \(\min_{t \in [0,T]} f(t).\)

This univariate problem can be efficiently solved using the algorithm from Section 2. The quality of the approximation to the original problem depends on the density parameter α of the curve.

Theorem 5.

Let \(C_\alpha(t) = (c_1(t), \ldots, c_n(t))\) be a function defined from \([0,T]\) into \(D\). Let \(\alpha > 0\) and let \(\mu\) denote the Lebesgue measure, such that:

  1. \((c_i)_{1 \le i \le n}\) are continuous and surjective.

  2. \((c_i)_{2 \le i \le n}\) are periodic with respective periods \((p_i)_{2 \le i \le n}\).

  3. For any interval \(I \subset [0,T]\) and for any \(i \in \{2, \ldots, n\}\), we have:

     \(\mu(I) \le p_i \implies \mu(c_{i-1}(I)) < \alpha.\)

Then, for \(t \in [0,T]\), the function \(C_\alpha(t)\) is a parametrized \(\sqrt{n-1}\,\alpha\)-dense curve in \(D\). (The proof can be found in [24].)

Corollary 6.

Let \(C_\alpha(t) = (c_1(t), \ldots, c_n(t)) : [0, \pi/\alpha_1] \to D\) be a function defined by:

\(c_i(t) = \dfrac{a_i - b_i}{2} \cos(\alpha_i t) + \dfrac{a_i + b_i}{2}, \quad i = 1, 2, \ldots, n,\)

where \(\alpha_1, \ldots, \alpha_n\) are given strictly positive constants satisfying the relationships

\(\alpha_i \ge \dfrac{\pi}{\alpha} (b_{i-1} - a_{i-1})\, \alpha_{i-1}, \quad i = 2, \ldots, n.\)

Then the curve defined by the parametric function \(C_\alpha(t)\) is \(\sqrt{n-1}\,\alpha\)-dense in \(D\) [24].

Remark 7.

According to Corollary 6, the parametrized curve \(C_\alpha(t)\) is α-dense in the box \(D\). Moreover, the function \(C_\alpha\) is Lipschitz on \([0, \pi/\alpha_1]\) with constant:

\(L_\alpha = \dfrac{1}{2} \left( \sum_{i=1}^{n} (b_i - a_i)^2 \alpha_i^2 \right)^{1/2}.\)
Figure 2. The densification of the square \([-1,1]^2\) and the cube \([-1,1]^3\) by the support of α-dense curves with different values of α (panels: α = 0.1, α = 0.3, α = 0.1).
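The cosine curve of Corollary 6 is straightforward to generate. Here is a minimal Python sketch (a helper of our own, with \(\alpha_1 = 1\) and the recurrence of Algorithm 2 taken with equality), returning the curve, its parameter range, and the Lipschitz constant of Remark 7:

```python
import numpy as np

def alpha_dense_curve(bounds, alpha):
    """Cosine alpha-dense curve of Corollary 6 on the box given by
    bounds = [(a_1, b_1), ..., (a_n, b_n)], with alpha_1 = 1 and
    alpha_i = (pi/alpha) * (b_{i-1} - a_{i-1}) * alpha_{i-1}."""
    a = np.array([lo for lo, hi in bounds], dtype=float)
    b = np.array([hi for lo, hi in bounds], dtype=float)
    freqs = np.ones(len(bounds))
    for i in range(1, len(bounds)):
        freqs[i] = (np.pi / alpha) * (b[i - 1] - a[i - 1]) * freqs[i - 1]

    def C(t):
        # c_i(t) = (a_i - b_i)/2 * cos(alpha_i t) + (a_i + b_i)/2
        return (a - b) / 2.0 * np.cos(freqs * t) + (a + b) / 2.0

    L = 0.5 * np.sqrt(np.sum((b - a) ** 2 * freqs ** 2))   # Remark 7
    return C, np.pi / freqs[0], L
```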
Theorem 8.

The function \(f(t) = g(C_\alpha(t))\) for \(t \in [0, \pi/\alpha_1]\) is a Hölder function with constant \(H_f = h_g L_\alpha^{1/\beta}\) and exponent \(1/\beta\) (\(\beta > 1\)).

Proof.

For \(t_1\) and \(t_2\) in \([0, \pi/\alpha_1]\), we have

\(|f(t_1) - f(t_2)| = |g(C_\alpha(t_1)) - g(C_\alpha(t_2))| \le h_g \|C_\alpha(t_1) - C_\alpha(t_2)\|^{1/\beta}.\)

As the function of the parametric curve \(C_\alpha\) is Lipschitz on \([0, \pi/\alpha_1]\) with constant \(L_\alpha\), we have

\(\|C_\alpha(t_1) - C_\alpha(t_2)\| \le L_\alpha |t_1 - t_2|,\)

then

\(|f(t_1) - f(t_2)| \le h_g (L_\alpha |t_1 - t_2|)^{1/\beta},\)

hence

\(|f(t_1) - f(t_2)| \le h_g L_\alpha^{1/\beta} |t_1 - t_2|^{1/\beta}.\) ∎

3.2. The Mixed RT-MSPA Method

To determine the global minimum of \(g\) on \(D\), the mixed RT-MSPA method consists of two steps: the reducing transformation step and the application of the MSPA algorithm to the function \(f(t)\) on \([0, \pi/\alpha_1]\).

Algorithm 2 RT-MSPA with known value of \(h_g\)
\(D = [a_1, b_1] \times \cdots \times [a_n, b_n]\) the search box.
The multivariate objective function \(g\) with known Hölder parameters \(h_g > 0\)
and \(\beta > 1\); \(\varepsilon > 0\) a small accuracy of the global minimization.
First part: \(C_\alpha(t)\) the parametric α-dense curve in \(D\);
\(f(t)\) the univariate \(H_f\)-Hölder function.
Second part: \(f_{\mathrm{opt}}\) the best global minimum of \(f\).
First part:
Define \(C_\alpha(t)\), \(t \in [0, \pi/\alpha_1]\):
\(\alpha = \left(\dfrac{\varepsilon}{2 H_f}\right)^{\beta}\), \(\alpha_1 = 1\).
for \(i = 2\) to \(n\) do
  \(\alpha_i = \dfrac{\pi}{\alpha} (b_{i-1} - a_{i-1})\, \alpha_{i-1}\)
end for
for \(i = 1\) to \(n\) do
  \(c_i(t) = \dfrac{a_i - b_i}{2} \cos(\alpha_i t) + \dfrac{a_i + b_i}{2}\)
end for
\(C_\alpha(t) = (c_1(t), c_2(t), \ldots, c_n(t))\) and \(f(t) = g(C_\alpha(t))\).
Second part:
Initialization
\([0, \pi]\) the search interval
\(k \leftarrow 2\), \(\rho \leftarrow 2\), \(t_1 \leftarrow 0\), \(t_2 \leftarrow \pi\)
Step k: \(t_1, t_2, \ldots, t_k\) are ordered such that \(0 = t_1 < t_2 < \cdots < t_k = \pi\).
for \(i = 2\) to \(k\) do
  \([t_{i-1}, t_i] \subset [0, \pi]\)
  if \(f(t_{i-1}) \ne f(t_i)\) then
   \(\theta_i\), \(\mu_i, \vartheta_i\) and \(z_i\) are defined respectively in (7), (8) and (9) (with respect to the variable \(t > 0\))
  else
   \(z_i = \dfrac{t_{i-1} + t_i}{2}\)
  end if
  \(\mathbf{M}_i = \min\{f(t_{i-1}) - H_f (z_i - t_{i-1})^{1/\beta},\ f(t_i) - H_f (t_i - z_i)^{1/\beta}\}\)
end for
\(\mathbf{M}_\rho = \min\{\mathbf{M}_i : 2 \le i \le k\}\)
\(t_\rho = z_\rho\)
if \(|t_\rho - t_{\rho-1}| > \varepsilon\) then
  \(t_{k+1} = z_\rho\)
  \(k \leftarrow k + 1\)
  go to Step k
else
  \(f_{\mathrm{opt}} = \min\{f(t_i) : 1 \le i \le k\}\)
  Stop
end if
return \(f_{\mathrm{opt}}\)
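Combining the two parts, a compact Python sketch of the mixed method, building on the alpha_dense_curve and mspa sketches above. Since \(H_f\) itself depends on the curve, we fix the density from \(h_g\) as in the proof of Theorem 9, an assumption of this sketch rather than the pseudocode as printed:

```python
def rt_mspa(g, bounds, h_g, beta, eps):
    """Sketch of the mixed RT-MSPA method (Algorithm 2): densify the
    box with the cosine curve, transfer the Hölder constant via
    Theorem 8, then minimize the restriction f(t) = g(C(t)) with MSPA.
    g must accept a point as a NumPy array, matching the output of C."""
    alpha = (eps / (2.0 * h_g)) ** beta          # density parameter, Theorem 9's proof
    C, T, L = alpha_dense_curve(bounds, alpha)   # curve, range [0, T], Lipschitz constant
    H_f = h_g * L ** (1.0 / beta)                # Theorem 8: f is Hölder with constant H_f

    def f(t):
        return g(C(t))                           # univariate restriction, problem (P)

    return mspa(f, 0.0, T, H_f, beta, eps)       # Algorithm 1 on [0, T]
```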

3.3. Convergence Result of RT-MSPA

Theorem 9.

Let \(g\) be a Hölder function satisfying the condition (2) over \(D\) and let \(\mathbf{m}\) be the global minimum of \(g\) on \(D\). Then the mixed RT-MSPA algorithm converges to the global minimum with an accuracy at least equal to \(\varepsilon\).

Proof.

Denote by \(\mathbf{m}^*\) the global minimum of \(f\) on \([0, \pi/\alpha_1]\), where \(f(t) = g(C_\alpha(t))\). On the other hand, let us designate by \(f_\varepsilon\) the global minimum of the problem (P) obtained by the mixed method RT-MSPA. Let us show that

\(f_\varepsilon - \mathbf{m} \le \varepsilon.\)

  1) As \(g\) is continuous on \(D\), there exists a point \(\mathbf{x}^* \in D\) such that \(\mathbf{m} = g(\mathbf{x}^*)\). Moreover, there exists \(t_0 \in [0, \pi/\alpha_1]\) such that \(\|\mathbf{x}^* - C_\alpha(t_0)\| \le \left(\frac{\varepsilon}{2 h_g}\right)^{\beta}\), so that

  \(|g(\mathbf{x}^*) - g(C_\alpha(t_0))| \le \dfrac{\varepsilon}{2}.\)

  And therefore

  \(g(C_\alpha(t_0)) - \mathbf{m} \le \dfrac{\varepsilon}{2}.\)

  But from \(\mathbf{m} \le \mathbf{m}^* \le g(C_\alpha(t_0))\), we deduce that

  (22) \(\mathbf{m}^* - \mathbf{m} \le \dfrac{\varepsilon}{2}.\)

  2) As \(f\) is continuous on \([0, \pi/\alpha_1]\), there exists a point \(t^* \in [0, \pi/\alpha_1]\) such that \(\mathbf{m}^* = f(t^*)\), making \(t^*\) a global minimizer of \(f\). Then \(t^*\) is a limit point of the sequence \((t_k)_{k \ge 1}\) obtained by the mixed algorithm.

  Hence \(t^* \in [t_{\rho(k)-1}, t_{\rho(k)}]\) and \(\lim_{k \to +\infty} (t_{\rho(k)} - t_{\rho(k)-1}) = 0\), i.e.,

  \(\forall \varepsilon > 0,\ \exists K\) such that \(\forall k \ge K,\ |t_{\rho(k)} - t_{\rho(k)-1}| < \varepsilon.\)

  On the other hand, since \(f_\varepsilon\) is the global minimum obtained after \(k\) iterations, we obtain:

  \(\exists t_\varepsilon \in [t_{s-1}, t_s] : |t_s - t_{s-1}| \le \left(\dfrac{\varepsilon}{2 H_f}\right)^{\beta}\) and \(f_\varepsilon = f(t_\varepsilon),\)

  so that

  \(\mathbf{M}_s = \min\{f(t_{s-1}) - H_f (t_\varepsilon - t_{s-1})^{1/\beta},\ f(t_s) - H_f (t_s - t_\varepsilon)^{1/\beta}\},\)
  \(\mathbf{M}_s \le f(t^*) \le f(t_\varepsilon), \quad \text{and} \quad t^* \in [t_{s-1}, t_s].\)

  Consequently,

  (23) \(f_\varepsilon - \mathbf{m}^* = f(t_\varepsilon) - f(t^*) \le H_f |t_\varepsilon - t^*|^{1/\beta} \le \dfrac{\varepsilon}{2}.\)

Finally, from (22) and (23), the result of Theorem 9 is proved. ∎

4. Numerical experiments

This section details the numerical experiments conducted to evaluate the performance of the proposed MSPA algorithm against existing methods, specifically SPA [12] and TPA [5], [8]. Two series of standard test functions and their corresponding data are listed in Table 1 and Table 4. Most of them are non-convex and non-differentiable and possess several local and global minima. The evaluation covers both single-variable and multivariate optimization problems.

For all single-variable experiments, the desired accuracy for locating the global minimum was set to \(\varepsilon = 10^{-4}(b-a)\), where \([a,b]\) is the search interval.

The primary performance criteria used for comparison are the total number of function evaluations (Ev) required and the CPU execution time in seconds (T(s)). Detailed results for the known and estimated Hölder constant are presented in Table 2 and Table 3, respectively. Within these tables, results shown in bold indicate the best performance achieved among the compared algorithms for each criterion (Ev and T(s)) on a given test problem.

For multivariate optimization, we implemented the Reducing Transformation (RT) approach combined with the MSPA algorithm, utilizing α-dense curves with a density parameter \(\alpha = 0.1\). Comparative results for the known and estimated Hölder constant are presented in Table 5 and Table 6, respectively.

The experiments were implemented in MATLAB and performed on a PC.

N Function Interval Hölder constant hf Ref.
1 min{|x+4|1,|x+1|1.005,|x3|+0.5} [−5,5] 2 [21]
2 |1.51.5|1x2|| [−2,2] 4 new
3 {2xx2if x2x2+8x12otherwise [0,6] 9.798 [7]
4 {0.35|x0.25| if x12x otherwise [0,1] 1.35 new
5 {4|x12.5|if x128.5xotherwise [0,1] 12.5 new
6 1x2 [−0.5,0.5] 2 [20]
7 2sinx1|x+1|+2 [−5,5] 7.8 [5]
8 |x0.25|233cosx2 [−0.5,0.5] 4.26 [14]
9 cos(x)e(1|sin(πx)0.5|π) [0,1] 4.3 [14]
10 5cosx+0.8|x| [−10,8] 5.8 new
11 9.54x252 [−1.5,1.5] 3 [14]
12 cos(x+π21)exp(11π|sinπ(xπ21)0.5|) [−1.5,1.5] 7.3 [14]
13 |cos(π2x)||19x21|12 [−1.5,1.5] 5.4 [14]
14 1sin2x [−1,1] 1 new
15 16cos2x [−2π,2π] 1 new
16 1x2+sinx [−1,1] 2.41 new
17 |8x3|23cosxsin3x [−2,2] 7.73 new
18 k=131k|sin((3k+1)x+1k)||xk|12 [0,3] 6.83 [15]
19 25x2shx [−2.5,2.5] 8.36 new
20 2cosx1|x+1|+2 [−5,5] 7.8 new
Table 1. Univariate Hölder test functions.
N β | SPA: Ev, T(s) | TPA: Ev, T(s) | MSPA: Ev, T(s)
1 2 78 0.1167 74 0.0639 78 0.0655
2 2 2005 1.5050 2107 1.5976 2129 1.6245
3 2 4371 9.0077 2812 4.3383 3965 7.8945
4 2 254 0.1302 96 0.0711 104 0.0735
5 2 295 0.1940 79 0.0689 110 0.0957
6 2 3433 4.0167 3061 3.1928 3013 3.1567
7 2 4403 6.6673 4185 5.9728 4487 6.9423
8 3/2 87 0.0642 86 0.0636 89 0.0677
9 2 87 0.0657 86 0.0676 82 0.0650
10 2 704 0.2988 46 0.0653 696 0.2894
11 2 2867 2.8312 1721 1.1843 2149 1.6457
12 2 207 0.1419 216 0.0942 206 0.0905
13 2 1179 0.6164 1333 0.6984 1226 0.6143
14 2 2109 1.7937 1701 1.0748 1527 0.8923
15 2 3169 3.5469 3233 3.7415 3201 3.6079
16 2 2109 1.9788 1587 0.9685 1456 0.8341
17 3/2 668 0.6360 70 0.0637 697 0.2672
18 2 1342 0.7609 874 0.3612 864 0.3611
19 2 91 0.1198 83 0.0703 79 0.0688
20 2 2787 2.7692 1835 1.2205 1809 1.2010
Table 2. Comparison of results between MSPA and the existing algorithms with known value of hf.
N β | SPAes: Ev, T(s) | TPAes: Ev, T(s) | MSPAes: Ev, T(s)
1 2 37 0.0116 48 0.0104 37 0.0064
2 2 1215 5.4778 962 3.5355 667 1.8018
3 2 2271 20.869 1819 13.717 1877 15.585
4 2 1288 6.4384 546 1.1884 240 0.2420
5 2 2854 31.324 575 1.2967 1104 4.8313
6 2 1501 8.6159 1841 12.972 1438 8.3665
7 2 1018 3.8064 1224 5.7721 1467 8.4715
8 3/2 22 0.0025 23 0.0026 21 0.0030
9 2 32 0.0049 31 0.0047 30 0.0044
10 2 986 3.7106 1062 4.3083 815 2.7356
11 2 1937 14.211 2446 22.842 2676 28.106
12 2 54 0.0126 69 0.0200 66 0.0194
13 2 1193 5.4486 914 3.2404 1364 7.3024
14 2 1391 7.3437 1707 11.168 1282 6.4865
15 2 1350 6.930 1540 9.0673 1045 4.3092
16 2 1995 15.269 1545 9.1268 1595 10.213
17 3/2 910 3.5283 638 1.7160 655 2.6685
18 2 578 1.2773 386 0.5728 437 0.7776
19 2 77 0.0246 74 0.0231 66 0.0191
20 2 687 1.7730 748 2.1631 553 1.2303
Table 3. Comparison of results between MSPAes and the existing algorithms with unknown value of hf.
N Function Box Hölder constant hg Ref.
1 k=131k|cos((3k+1)(x+5)+1k)||xy|13 [−5,5]^2 14.77 [20]
2 |cos(x)cos(y)e(1x2+y2π)| [−1,1]×[1,2] 5.0679 [16]
3 k=1312k|cos((32k+1)x+12k)||xy|23 [−0.5,0.5]^2 15.8 [15]
4 |x+y0.25|233cosx2 [−0.5,0.5]^2 4.26 [15]
5 cos(x)sin(y)e(1x2+y2π) [−1,1]×[1,2] 5.0679 new
6 max{|x|,|y|} [−1,1]^2 1 [3]
7 |x|+|y| [−1,1]^2 (2)12 [3]
8 |x|+|y| [−1,1]^2 2 [3]
9 |x+1|+|y+2|+|z+6| [−1,1]^3 3 new
10 10e0.5(|x|+|y|) [2,12]^2 102 [3]
11 |3+cos(x2+y2)| [−1,1]^2 2 [16]
12 |sin(x2+y2)| [−1,1]^2 2 [16]
13 12sin(|xy|)12sin(|x+y|) [0,1]×[0,2.71] 1 [16]
14 |cos(0.5+9.5x2+y2)| [−1,1]^2 13.43 [16]
15 min{|x+0.35|,|y+0.25|} [−1,1]^2 1 new
Table 4. Multivariate Hölder test functions.
N β | RT-SPA: Ev, T(s) | RT-TPA: Ev, T(s) | RT-MSPA: Ev, T(s)
1 3 9131 28.642 8875 26.875 9055 28.103
2 2 1493 0.8627 1459 0.8278 1425 0.8030
3 3/2 6298 13.532 6255 13.316 6248 13.412
4 3/2 1053 0.4863 990 0.4411 986 0.4489
5 2 2844 2.8212 2770 2.6870 2692 2.600
6 2 218 0.0806 238 0.0868 213 0.0802
7 2 214 0.0769 196 0.0920 186 0.0734
8 2 331 0.1102 297 0.0997 319 0.1088
9 2 5795 11.432 5536 10.468 5507 10.501
10 2 4024 5.3531 4266 6.0112 3724 4.6304
11 2 2046 1.5416 2010 1.4863 1973 1.4524
12 2 509 0.1726 478 0.1615 475 0.1625
13 2 6309 13.646 6342 13.534 6257 13.335
14 2 6972 17.1265 6805 15.8143 6776 15.6564
15 2 497 0.1703 485 0.1682 480 0.1662
Table 5. Comparison of results between RT-MSPA and the existing algorithms with known value of hg.
Figure 3. Illustration of the global minimization process and iteration points generated by RT-MSPA for function 12 (as defined in Table 4) on the domain \([-1,1]^2\).
N β λ | RT-SPAes: Ev, T(s) | RT-TPAes: Ev, T(s) | RT-MSPAes: Ev, T(s)
1 3 2.1 2938 36.910 2460 26.040 2317 33.541
1.07 1194 6.0336 / / 1147 8.2836
2 2 1.5 665 1.6772 614 1.4797 607 1.4337
1.07 435 0.7450 / / 231 0.2266
3 3/2 1.5 636 1.7097 742 2.2662 732 3.2807
1.07 466 0.926 / / 286 0.5239
4 3/2 1.5 276 0.3303 267 0.3049 296 0.5407
1.07 163 0.1200 / / 136 0.1193
5 2 1.5 1595 9.6273 1765 11.474 1711 11.357
1.07 1356 6.9890 / / 1289 6.7921
6 2 3.1 610 1.4298 674 1.6995 591 1.3795
7 2 3.5 / / 534 1.0745 506 1.0562
8 2 3.1 521 1.0367 1140 4.7631 529 1.1228
9 2 4.05 130 0.0697 / / 33 0.0059
10 2 1.5 1723 11.188 2355 20.513 1966 15.084
1.07 1530 8.9580 / / 965 3.8254
11 2 14.4 290 0.3300 310 0.3749 310 0.3992
12 2 3.1 472 0.8553 459 0.7910 472 0.8762
13 2 1.5 134 0.0739 147 0.0855 145 0.0873
1.07 79 0.02688 / / 55 0.01473
14 2 1.5 2792 29.536 3134 36.0570 2923 32.761
15 2 1.5 84 0.0306 33 0.0055 53 0.0135
Table 6. Comparison of results between RT-MSPAes and the existing algorithms with unknown value of hg.

4.1. Discussions and remarks

The primary contributions of this paper are the development of MSPA, an enhanced version of Piyavskii’s algorithm for finding global minima of univariate Hölder continuous functions, and its extension, RT-MSPA, for solving higher-dimensional problems.

For the univariate problem, the comparison involves two cases based on the availability of the Hölder constant hf:

  • Known Hölder Constant (hf): The standard SPA, TPA, and MSPA algorithms were used directly. The results are presented in Table 2.

  • Unknown Hölder Constant (hf): Modified versions of the algorithms, employing an estimation procedure for the Hölder constant, were used. These are denoted SPAes, TPAes and MSPAes, respectively, and utilize an estimate \(\tilde{h}_f\) of the true Hölder constant \(h_f\). Note first that the results in Table 3 are all obtained with the same values \(\lambda = 1.5\) (multiplicative parameter) and \(\nu = 10^{-8}\) (tolerance parameter).

In higher-dimensional problems, comparative results are also presented for two cases, based on the availability of the Hölder constant hg:

  • Known Hölder Constant (hg): The corresponding results are shown in Table 5 for the test problems listed in Table 4.

    In this context, Fig. 3 provides a numerical example illustrating how the RT-MSPA algorithm converges, by showing the iteration points produced by this algorithm around the global minimum.

  • Unknown Hölder Constant (hg): Table 6 presents a comparison between RT-MSPAes and the corresponding estimation-based versions of existing algorithms (RT-SPAes and RT-TPAes), where hg is estimated.

    Moreover, in this case with the estimation of \(h_g\), we observed that using \(\nu = 10^{-8}\) and \(\lambda = 1.07\) often yielded better results than \(\lambda = 1.5\). Notably, the RT-TPAes method failed to produce results when \(\lambda = 1.07\).

It should be noted that for the multivariate tests in Table 6 (unknown hg), two different values for the parameter λ (typically 1.5 and 1.07) were generally used in the estimation procedure to obtain the reported results. The symbol (/) in Table 6 indicates instances where results could not be obtained for λ=1.07, particularly for the RT-TPAes method.

Indeed, according to Table 2, the TPA method (known \(h_f\)) is often faster and requires fewer evaluations than SPA and MSPA. However, even in this case, the performance of MSPA remains competitive with that of TPA. A possible reason is that, for certain functions with a known Hölder constant \(h_f\), the more aggressive linear approximation used by TPA is more effective at quickly locating the global minimum than the secant-based approximation of MSPA. This can be particularly true for functions where \(\beta\) is close to 1, i.e., whose Hölder exponent \(1/\beta\) is close to 1, making the function's behavior more linear.

In the case where \(h_f\) is unknown a priori, the situation is different (see Table 3). Also, according to Table 5 and Table 6, in the cases where \(h_g\) is known and unknown, respectively, the performance of RT-MSPA and RT-MSPAes is more competitive compared to the other algorithms.

Finally, for problems with a known Hölder constant hf, MSPA required 68.42% fewer function evaluations than SPA, and 50% fewer than TPA. It also achieved an 80% reduction in execution time compared to SPA, and 50% compared to TPA. When the Hölder constant is unknown, MSPAes maintained strong performance, requiring 78.95% fewer evaluations than SPAes, and 60% fewer than TPAes, while also reducing execution time by 75% and 55%, respectively. Similarly, RT-MSPAes demonstrated its efficiency for problems with an unknown constant hg, achieving up to 61.9% fewer function evaluations than RT-SPAes, and 85.71% fewer than RT-TPAes, along with execution time reductions of 54.55% and 68.18%, respectively.

Based on these comparative studies, the MSPA, MSPAes, RT-MSPA, and RT-MSPAes algorithms demonstrate higher efficiency compared to the other techniques evaluated.

5. Conclusion

This paper addressed the global optimization of multivariate Hölderian functions defined on a box domain of \(\mathbb{R}^n\). A key focus was the challenging yet practical case where the Hölder constant \(h_g\) is unknown a priori. To tackle this class of problems, we developed and analyzed two novel algorithms, MSPA and RT-MSPA, and provided rigorous mathematical proofs establishing the convergence guarantees for both. To evaluate their practical efficacy, MSPA and RT-MSPA were implemented and tested on a suite of standard functions commonly used in the global optimization literature. The numerical results were compared against those obtained by other well-known search methods. This comparative analysis demonstrated the viability and effectiveness of our approach, which, in particular, offers competitive performance in practice. Future work could explore further adaptive strategies within the partitioning framework or extend the approach to handle different types of constraints.

Acknowledgements.

The authors would like to thank the anonymous referee for his useful comments and suggestions, which helped to improve the presentation of this paper.

References