New Technique For Solving
Multivariate Global Optimization

DJAMEL AAID\(^\ast \) Özen ÖZER\(^{\ast \ast }\)

Received: December 9, 2022; accepted: June 12, 2023; published online: July 5, 2023.

In this paper, we propose an algorithm based on the Branch and Bound method that underestimates the objective function and uses a reductive transformation to transform any multivariable function into a univariate one. We also propose and analyze several quadratic lower bounding functions that are preferable to the ones reported in the literature. In this regard, our experimental results show the approach to be effective on a variety of nonconvex functions.

MSC. 90C26; 90C30.

Keywords. global optimization; branch and bound method; convex underestimation; piecewise quadratic; explicit solution.

\(^\ast \)Batna 2 University, Algeria, e-mail: djamelaaid@gmail.com

\(^{\ast \ast }\)Kirklareli University, Turkey, e-mail: ozenozer39@gmail.com

1 Introduction

In convex optimization, it suffices to find a local solution in order to determine the optimal one [ 1 , 2 , 20 ] , whereas the objective of global optimization is to find the globally best solution of possibly nonlinear models, in the possible or known presence of multiple local optima. Formally, global optimization seeks global solutions of a constrained optimization model [ 19 ] . Nonlinear models are ubiquitous in many applications, such as advanced engineering design, biotechnology, data analysis, environmental management, financial planning, process control, risk management and scientific modeling. Their solution often requires a global search approach [ 18 , 4 , 22 , 3 , 21 ] . A variety of adaptive partition strategies has been proposed to solve global optimization models. They are based upon partition, sampling and subsequent lower and upper bounding procedures, applied iteratively to the collection of active subsets of the feasible set. In this connection, several works have been proposed, among others. Adjiman et al. [ 5 ] presented a detailed implementation of the \(\alpha BB\) approach and computational studies on process design problems such as heat exchanger networks, reactor-separator networks and batch design under uncertainty. Akrotirianakis and Floudas [ 7 ] presented computational results for a new class of convex underestimators embedded in a branch-and-bound framework for box-constrained NLPs; they also proposed a hybrid global optimization method that includes the random-linkage stochastic approach with the aim of improving computational performance. Caratzoulas and Floudas [ 9 ] proposed novel convex underestimators for trigonometric functions. In recent years, univariate global optimization problems have attracted considerable attention, since they arise in many real-life applications and the obtained results can be generalized to the multivariate case [ 6 , 8 , 13 , 16 ] . In this paper, we propose two approaches for finding a global minimum: one for univariate objective functions and an extension to multivariate functions. We present the two techniques below.

1. Piecewise quadratic underestimations (KBBm)

The main idea consists of constructing piecewise quadratic underestimating functions that are closer to the given nonconvex \(f\) on a successively reduced interval \([a_k, b_k]\) and whose minima are given explicitly. This contrasts with the use of a single quadratic far from the objective function [ 15 ] , or with an underestimator whose minimum must be determined by a local method [ 5 ] . We propose an explicit quadratic relaxation for global optimization problems with bounded variables. The construction builds on the work of the authors of [ 15 ] , using quadratic splines. The generated quadratic programs have explicit optimal solutions on each interval, and the objective is underestimated by several quadratic splines that yield reliable lower bounds.

2. Coupling Branch and Bound with the Alienor method

The main idea of this technique is to transform the multivariate optimization problem into a univariate one by means of a reductive transformation, so that the proposed algorithm can be applied in a single dimension [ 24 ] . We focus on multivariate global optimization problems and generalize the techniques used previously in the Branch and Bound algorithm through underestimation of the objective: this limits the search to a single direction while preserving the essential advantage of the underestimation, namely the explicit solutions of the generated quadratic problems. In this way, we combine the advantages of the two methods by coupling the Alienor method [ 23 ] with Branch and Bound. The benefit of this adaptation is, first, that the dimension reduction leaves a single search direction for the minimum of \(f\); second, it still allows us to apply the most effective methods designed for one-dimensional global optimization. The structure of the rest of the paper is as follows: section 2 presents the two underestimators proposed in [ 15 , 5 ] ; section 3 discusses the construction of a new lower bound on the objective function and describes the proposed algorithm \((KBBm)\) for solving the box-constrained univariate global optimization problem; section 4 describes the coupling of (KBBm) with the Alienor method to solve multivariate global optimization problems; section 5 presents numerical examples for different nonconvex objective functions; and section 6 concludes the paper.

2 Background

Consider the following global minimization problem:

\begin{equation} (P_{u})\left\{ \begin{array}{c} \alpha =\min f(x), \\ x\in X=[a,b],\end{array}\right. \end{equation}
1

where \(f\) is a nonconvex twice differentiable function on \(X\).

In what follows, we present the two underestimators developed in [ 5 ] and [ 15 ] , respectively.

2.1 Underestimator in (\(\mathbf{\alpha BB}\)) method [ 5 ]

The underestimator in the \(\alpha BB\) method on the interval \([a,b]\) is defined as follows:

\begin{equation} L(x)=f(x)-\tfrac {\alpha }{2}(x-a)(b-x), \end{equation}
2

where \(\alpha \geq \max \{ 0,-f^{^{\prime \prime }}(x)\} \) for all \(x\in \lbrack a,b]\).

This underestimator satisfies the following properties:

  1. It is convex (i.e., \(L^{^{\prime \prime }}(x)=f^{^{\prime \prime }}(x)+\alpha =f^{^{\prime \prime }}(x)+\max \{ 0,-f^{^{\prime \prime }}(x)\} \geq 0\) for all \(x\in \lbrack a,b]\)).

  2. It coincides with the function \(f(x)\) at the endpoints of the interval \([a,b]\).

  3. It is an underestimator of the objective function \(f(x)\).

  4. To determine the lower bound of the objective function \(f(x)\), the convex problem \(\min L(x)\), \(x\in \lbrack a,b]\), must be solved (for more details, see [ 5 ] ).
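For illustration, the following minimal Python sketch builds this underestimator on a single interval (the experiments in section 5 use a C implementation; here \(f(x)=\sin x\) is only a stand-in test function, and \(\alpha \) is estimated by sampling \(f^{\prime \prime }\) on a grid rather than by a guaranteed bound):

\begin{verbatim}
# Minimal sketch of the alpha-BB underestimator (2) on one interval [a, b].
# Assumptions: f(x) = sin(x) is a stand-in test function; alpha is estimated
# by sampling f'' on a grid (a rigorous implementation would bound f''
# by interval arithmetic, cf. [14]).
import numpy as np

def alpha_bb_underestimator(f, d2f, a, b, samples=1000):
    xs = np.linspace(a, b, samples)
    alpha = max(0.0, -d2f(xs).min())        # alpha >= max{0, -f''(x)} on [a, b]
    L = lambda x: f(x) - 0.5 * alpha * (x - a) * (b - x)
    return L, alpha

if __name__ == "__main__":
    f, d2f = np.sin, lambda x: -np.sin(x)   # f and its second derivative
    a, b = 0.0, 20.0
    L, alpha = alpha_bb_underestimator(f, d2f, a, b)
    xs = np.linspace(a, b, 2001)
    assert np.all(L(xs) <= f(xs))           # L underestimates f on [a, b]
    print("alpha =", alpha, "  L(a)-f(a) =", L(a) - f(a), "  L(b)-f(b) =", L(b) - f(b))
\end{verbatim}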

2.2 Quadratic underestimator in (\(\mathbf{KBB}\)) method [ 15 ]

The quadratic underestimator developed in [ 15 ] on the interval \([a,b]\) is given by

\begin{equation} q(x)=f(a)\tfrac {b-x}{b-a}+f(b)\tfrac {x-a}{b-a}-\tfrac {K}{2}(x-a)(b-x), \end{equation}
3

where \(\left\vert f^{\prime \prime }(x)\right\vert \leq K \) for all \(x\in \lbrack a,b]\).

This quadratic underestimator satisfies the following properties:

  1. It is convex (i.e., \(q^{^{\prime \prime }}(x)=K\geq 0\) for all \(x\in \lbrack a,b]\)).

  2. It coincides with the function \(f(x)\) at the endpoints of the interval \([a,b]\).

  3. It is an underestimator of the objective function \(f(x)\).

  4. The values of the lower bound are given explicitly. For more details, see [ 15 ] .
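Under the same assumptions as before (stand-in objective \(f(x)=\sin x\), sampled estimate of \(K\)), a minimal Python sketch of this underestimator and of its explicitly computed minimizer is:

\begin{verbatim}
# Minimal sketch of the KBB quadratic underestimator (3) on [a, b], together
# with its explicit minimizer (obtained by setting q'(x) = 0 and clipping to
# [a, b]).  Assumptions: f(x) = sin(x) is a stand-in; K is a sampled estimate.
import numpy as np

def kbb_underestimator(f, d2f, a, b, samples=1000):
    xs = np.linspace(a, b, samples)
    K = np.abs(d2f(xs)).max()                     # K >= |f''(x)| on [a, b]
    def q(x):
        lin = f(a) * (b - x) / (b - a) + f(b) * (x - a) / (b - a)
        return lin - 0.5 * K * (x - a) * (b - x)
    u = 0.5 * (a + b) - (f(b) - f(a)) / (K * (b - a))   # stationary point of q
    x_star = min(max(u, a), b)                          # explicit minimizer on [a, b]
    return q, x_star, K

if __name__ == "__main__":
    q, x_star, K = kbb_underestimator(np.sin, lambda x: -np.sin(x), 0.0, 20.0)
    print("K =", K, "  x* =", x_star, "  lower bound q(x*) =", q(x_star))
\end{verbatim}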

2.3 Advantages and disadvantages of two methods

  1. The advantage of \(\alpha BB\) is the quality of the initial lower bound it provides; moreover, the underestimator is close to the objective function (see table 2 and table 3).

  2. The disadvantage of \(\alpha BB\) is that it uses a local method for determining the values of the lower bounds.

  3. The advantage of \(KBB\) is that the values of the lower bounds are given explicitly.

  4. The disadvantage of \(KBB\) is that the initial lower bound is very far from the optimal solution; moreover, the underestimator is far from the objective function (see table 2 and table 3).

3 The proposed underestimator (\(\mathbf{KBB_{m}}\))

In this section, we present a new lower bound. In this lower bound, we merge the advantages of \(KBB\) and \(\alpha BB\).

Let \(X=[a,b]\) be a closed and bounded interval of the real numbers \({\mathbb R}\), let \(f\) be a twice continuously differentiable function on \(X\), let \(x^{0}\) and \(x^{1}\) be two real numbers in \([a,b]\) such that \(x^{0}\leq x^{1}\), and let \(l_{0}\), \(l_{1}\) be the real-valued functions defined by

\begin{equation} l_{0}(x)=\tfrac {x^{1}-x}{x^{1}-x^{0}}~ ~ \ {\rm if}\ x^{0}\leq {\ }x\leq x^{1},~ ~ ~ l_{1}(x)=\tfrac {x-x^{0}}{x^{1}-x^{0}}~ ~ ~ {\rm if}\ x^{0}\leq {\ }x\leq x^{1}. \end{equation}
4

Thus \(l_{0}(x)+l_{1}(x)=1\) for all \(x\) in the interval \([x^{0},x^{1}]\), and \(l_{i}(x^{j})\) equals \(0\) if \(i\neq j\) and \(1\) otherwise. Additionally, with \(h=x^{1}-x^{0}\), let \(L_{h}f\) denote the piecewise linear interpolant to \(f\) at the points \(x^{0},x^{1}\) [ 10 , 12 ] , i.e.

\begin{equation} L_{h}f(x)~ =\sum _{i=0}^{1}l_{i}(x)f(x^{i}). \end{equation}
5

Let \(f(x)\) be a univariate function to be underestimated on the interval \([a,b]\). Suppose that the nodes are chosen to be equally spaced in \([a,b]\), so that \(x_{i}=a+ih\), \(h=\frac{b-a}{n}\), \(i=0,\ldots ,n\). On every interval \([x_{i},x_{i+1}]\) we construct the corresponding local quadratic underestimator as follows

\begin{equation} p_{i}(x)=L_{h}f(x)-Q_{i}(x),~ \end{equation}
6

where \(Q_{i}(x)=\frac{1}{2}K_{i}(x-x_{i})(x_{i+1}-x)\) and \(K_{i}\) is an upper bound of \(\left\vert f^{\prime \prime }(x)\right\vert \) valid on \([x_{i},x_{i+1}]\). Instead of considering one quadratic lower bound over \([a,b]\), we construct a piecewise quadratic lower bound.
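Before stating the tightness result, here is a minimal Python sketch of this construction on a uniform grid (stand-in objective \(f(x)=\sin x\); each \(K_{i}\) is only a sampled estimate of \(\max \left\vert f^{\prime \prime }\right\vert \) on its sub-interval):

\begin{verbatim}
# Minimal sketch of the piecewise quadratic underestimator p_i of (6) on a
# uniform grid of [a, b].  Assumptions: f(x) = sin(x) is a stand-in test
# function; K_i is estimated by sampling |f''| on each sub-interval.
import numpy as np

def piecewise_underestimators(f, d2f, a, b, n, samples=200):
    nodes = np.linspace(a, b, n + 1)                      # x_i = a + i*h
    pieces = []
    for xi, xi1 in zip(nodes[:-1], nodes[1:]):
        Ki = np.abs(d2f(np.linspace(xi, xi1, samples))).max()
        def p(x, xi=xi, xi1=xi1, Ki=Ki):                  # p_i(x) = L_h f(x) - Q_i(x)
            lin = f(xi) * (xi1 - x) / (xi1 - xi) + f(xi1) * (x - xi) / (xi1 - xi)
            return lin - 0.5 * Ki * (x - xi) * (xi1 - x)
        pieces.append((xi, xi1, Ki, p))
    return pieces

if __name__ == "__main__":
    pieces = piecewise_underestimators(np.sin, lambda x: -np.sin(x), 0.0, 20.0, n=16)
    for xi, xi1, Ki, p in pieces[:3]:                     # show the first three pieces
        print(f"[{xi:.2f},{xi1:.2f}]  K_i={Ki:.3f}  p(mid)={p(0.5*(xi+xi1)):.3f}")
\end{verbatim}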

Therefore, in the following theorem, we show that the new lower bound is tighter than the lower bound constructed in the reference [ 15 ] .

Theorem 1

Let \(p(x)\) be a piecewise convex valid underestimator of \(f(x)\) for all \(x \in \lbrack a,b]\). The function \(p(x)\) is tighter than the underestimator \(q(x)\) introduced in [ 15 ] , in the sense that

\begin{equation} \label{f.7} q(x)\leq p(x)\leq f(x)\ {\rm for \; all \ } x\in \lbrack a,b], \end{equation}
7

is satisfied for \(p(x)=p_{i}(x)\) for all \(x\in [ x_{i},x_{i+1}]\), \(i=0,\ldots ,n-1\).

Proof.
For every interval \([x_{i},x_{i+1}]\) we have

\begin{equation} E(x)=q(x)-p_{i}(x)=\tfrac {1}{2}(K_{i}-K)(x-x_{i})(x_{i+1}-x). \end{equation}
8


On the other hand \(E^{\prime \prime }(x)=K-K_{i}\geq 0\) for all \(x\in \lbrack x_{i}, x_{i+1}] \). Hence \(E\) is a convex function for all \(x\in \lbrack x_{i},x_{i+1}]\) so that we have

\begin{equation} E(x)\leq \max \{ E(x),x\in \lbrack x_{i},x_{i+1}]\} =E(x_{i})=E(x_{i+1})=0. \end{equation}
9

This implies that the first inequality of 7 is verified. To justify the second inequality, we consider the function \(\phi \) defined on \([x_{i}, x_{i+1}]\) by 10 as follows:

\begin{equation} \label{f.3.7} \phi (x)=f(x)-p_{i}(x)=f(x)-L_{h}f(x)+\tfrac {1}{2}K_{i}(x-x_{i})(x_{i+1}-x). \end{equation}
10

It is clear that \(\phi ^{\prime \prime }(x)=f^{\prime \prime }(x)-K_{i}\leq 0\) for all \(x\in [x_{i}, x_{i+1}]\), so \(\phi \) is a concave function. Hence, we obtain the following inequality:

\begin{equation} \phi (x)\geq \min \{ \phi (x),~ x\in \lbrack x_{i},x_{i+1}]\} =\phi (x_{i})=\phi (x_{i+1})=0. \end{equation}
11

The second inequality of 7 is also proved.


\includegraphics[width=0.7\textwidth ]{fig1.png}

Figure 1 Comparison of our underestimator \(p(x)\) and \(q(x)\) for \(n=2\).

The minimizer of the quadratic lower bound underestimator of the objective function \(f\) on each sub-interval \([x_{i},x_{i+1}]\), \(i=0,\ldots ,n-1\), is computed explicitly as follows

\begin{equation} \label{f.3.9} x_{i}^{\ast }=\begin{cases} u=\frac{1}{2}(x_{i}+x_{i+1})-\frac{1}{K_{i}}\frac{f(x_{i+1})-f(x_{i})}{x_{i+1}-x_{i}}, & {\rm if }~ u\in [x_{i},x_{i+1}], \\ x_{i}, & {\rm if }~ u\leq x_{i} , \\ x_{i+1}, & {\rm if }~ u\geq x_{i+1}. \end{cases} \end{equation}
12

Now, we compute the values \(p_{i}(x_{i}^{\ast })\); to detect the best lower bound, we compare all the lower bounds and keep the smallest one:

\begin{equation} LB^{k}=~ \min p_{i}(x_{i}^{\ast }). \end{equation}
16

The upper bound is obtained by evaluating the objective function at the available points and keeping the best value found so far:

\begin{equation} UB^{k}=\min \{ f(x_{i}^{\ast }),~ f(x_{i})\} . \end{equation}
17
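A minimal Python sketch of these explicit computations 12, 16 and 17, with the same caveats as before (stand-in objective, sampled \(K_{i}\)), is:

\begin{verbatim}
# Minimal sketch of the explicit per-interval minimizer (12) and of the bounds
# LB = min_i p_i(x_i*) (16) and UB = min of f at the tested points (17).
# Assumptions: f(x) = sin(x) is a stand-in; K_i is a sampled estimate.
import numpy as np

def explicit_bounds(f, d2f, a, b, n, samples=200):
    nodes = np.linspace(a, b, n + 1)
    LB, UB = np.inf, float(f(nodes).min())               # UB starts from the grid nodes
    for xi, xi1 in zip(nodes[:-1], nodes[1:]):
        Ki = np.abs(d2f(np.linspace(xi, xi1, samples))).max()
        u = 0.5 * (xi + xi1) - (f(xi1) - f(xi)) / (Ki * (xi1 - xi))    # eq. (12)
        x_star = min(max(u, xi), xi1)
        lin = f(xi) * (xi1 - x_star) / (xi1 - xi) + f(xi1) * (x_star - xi) / (xi1 - xi)
        LB = min(LB, lin - 0.5 * Ki * (x_star - xi) * (xi1 - x_star))  # p_i(x_i*)
        UB = min(UB, float(f(x_star)))
    return LB, UB

if __name__ == "__main__":
    print(explicit_bounds(np.sin, lambda x: -np.sin(x), 0.0, 20.0, n=16))
\end{verbatim}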

Corollary 2

The proposed underestimator \(KBBm\) verifies the following properties:

  1. it is piecewise convex on \([a,b]\);

  2. it coincides with the function \(f(x)\) at the endpoints of each interval \([x_{i},x_{i+1}]\), \(i=0,\ldots ,n-1\);

  3. it is an underestimator of the objective function \(f(x)\);

  4. the values of the lower bound are given explicitly;

  5. when the number of quadratics is doubled, the lower bounds improve, see table 5.

The different steps for solving the problem \((P_{u})\) are summarized in the following proposed algorithm:

Algorithm

Input:

  • \(\lbrack a,b]\) (a real interval)

  • \(~ \varepsilon \) (the accuracy)

  • \(f\) (the objective function)

  • \(n\) (the number of quadratics)

Output:

  • \(x^{\ast }\) (the global minimum of \(f\))

  1. Initialization step \(k=0\)

    1. for all \(i=0,\ldots ,n\) compute \(\ x_{i}=a+\frac{b-a}{n}i\), set \(M=\bigcup _{i=0}^{n-1}\{ [x_{i},x_{i+1}]\} \)

    2. compute \(K_{i}\) s.t. \(|f^{\prime \prime }(x)|\leq K_{i} \) on each \([x_{i},x_{i+1}]\) for all \(i=0,\ldots ,n-1\)

    3. compute \(x_{i}^{\ast }\) by using 12 for all \(i=0,\ldots ,n-1\)

    4. compute \(UB^{k}=\min \{ \min f(x_{i}^{\ast }),~ \min f(x_{i})\} \)

    5. set \(\ LB^{k}=\min LB_{i}^{k}\ \) with \(LB_{i}^{k}=p_{i}(x_{i}^{\ast })\)

    6. \(\overline{~ i}\longleftarrow \) the index corresponding to \(\min LB_{i}^{k}\)

  2. Iteration step

    while \((UB^{k}-LB^{k}{\gt}\varepsilon \) and \(M\neq \emptyset )\) do

    1. \(a\longleftarrow x_{\overline{i}}\), \(b\longleftarrow x_{\overline{i}+1} \) and apply steps 1–4 of the initialization

    2. update \(UB^{k}\)

    3. for all \(i=1,\ldots ,m ~ (m=\operatorname {card}(M))\)

      • Elimination step: if \((UB^{k}-LB_{i}^{k}{\lt}\varepsilon )\) then remove \([x_{i},x_{i+1}]\) from \(M\)

      • Selection step: if \((UB^{k}-LB_{i}^{k}\geq \varepsilon )\) then keep \([x_{i},x_{i+1}]\); \(\overline{i}\longleftarrow \) the index corresponding to \({\min }LB_{i}^{k}\) over the remaining intervals

    4. \(k=k+1\)

    end while

  3. \(x^{\ast }=x^{k}\) is the optimal solution corresponding to the best \(UB^{k}\) found.

end algorithm
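The following Python sketch is one possible reading of the algorithm above, not the authors' C implementation: the selected interval is replaced in \(M\) by its \(n\) children, each \(K_{i}\) is estimated by sampling \(\left\vert f^{\prime \prime }\right\vert \) (a certified implementation would bound it rigorously, cf. [ 14 ] ), and \(f(x)=\sin x\) is a stand-in objective.

\begin{verbatim}
# Minimal sketch of the KBB_m branch-and-bound loop described above.
# Assumptions: f(x) = sin(x) is a stand-in objective; K_i is a sampled estimate
# of max|f''|; the selected interval is replaced by its n children in M, which
# is our reading of "apply steps 1-4 of the initialization".
import numpy as np

def bound_piece(f, d2f, xi, xi1, samples=200):
    """Return (LB_i, x_i*) on one sub-interval, using eqs. (6) and (12)."""
    Ki = np.abs(d2f(np.linspace(xi, xi1, samples))).max()
    u = 0.5 * (xi + xi1) - (f(xi1) - f(xi)) / (Ki * (xi1 - xi))
    x_star = min(max(u, xi), xi1)
    lin = f(xi) * (xi1 - x_star) / (xi1 - xi) + f(xi1) * (x_star - xi) / (xi1 - xi)
    return lin - 0.5 * Ki * (x_star - xi) * (xi1 - x_star), x_star

def split(f, d2f, a, b, n):
    nodes = np.linspace(a, b, n + 1)
    return [(xi, xi1) + bound_piece(f, d2f, xi, xi1)
            for xi, xi1 in zip(nodes[:-1], nodes[1:])]

def kbbm(f, d2f, a, b, n=16, eps=1e-6, max_iter=10000):
    M = split(f, d2f, a, b, n)                  # items: (x_i, x_{i+1}, LB_i, x_i*)
    UB, x_best = min((float(f(p[3])), p[3]) for p in M)
    for _ in range(max_iter):
        if not M:
            break
        LB = min(p[2] for p in M)
        if UB - LB <= eps:
            break
        sel = min(M, key=lambda p: p[2])        # selection: smallest LB_i
        M.remove(sel)
        M += split(f, d2f, sel[0], sel[1], n)   # refine the selected interval
        UB, x_best = min([(UB, x_best)] + [(float(f(p[3])), p[3]) for p in M])
        M = [p for p in M if UB - p[2] >= eps]  # elimination step
    return x_best, UB

if __name__ == "__main__":
    x_star, f_star = kbbm(np.sin, lambda x: -np.sin(x), 0.0, 20.0)
    print("x* =", x_star, "  f(x*) =", f_star)
\end{verbatim}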

Theorem 3 Convergence of the algorithm

The algorithm described above either:

  • terminates after finitely many iterations, or

  • generates a bounded sequence \(\{ x_{k}\} \).

Additionally, any accumulation point of the sequence is a global optimal solution of \((P_{u})\), and \(UB^k\searrow \alpha \), \(LB^k\nearrow \alpha \).

Proof.
Assume that the algorithm is infinite. Then it generates an infinite sequence of intervals \(\{ T^{k}\} \) whose lengths decrease to zero, so the terms of the sequence \(\{ T^{k}\} \) shrink to a singleton. Since the values of \(UB^{k}\) are obtained by evaluating \(f(x)\) at different points of \([a,b]\), the sequence \(\{ UB^{k}\} \) is bounded below by \(\alpha =\min f(x)\). On the other hand, the values of \(LB^{k}\) are minima of quadratics that underestimate the objective function, so they cannot exceed \(\alpha \); hence the sequence \(\{ LB^{k}\} \) is bounded above by \(\alpha \). Consequently, \(LB^{k}\leq \alpha \leq UB^{k}\) holds. It then suffices to prove that \(\{ UB^{k}\} \) is a decreasing sequence and \(\{ LB^{k}\} \) is an increasing sequence. From the description of the algorithm, the value of \(UB^{k+1}\) is selected as the smaller of the current \(UB^{k}\) and the newly computed value, which gives \(UB^{k+1}\leq UB^{k}\) for all \(k\geq 0\); hence \(\{ UB^{k}\} \) is a decreasing sequence. Similarly, the value of the lower bound \(LB^{k+1}\) is the minimum of a quadratic located inside the larger quadratic covering the current interval \([a_{k+1},b_{k+1}]\) and underestimating the objective on \([a_{k+1},b_{k+1}]\), which yields \(LB^{k+1}\geq LB^{k}\) for all \(k\geq 0\); hence \(\{ LB^{k}\} \) is increasing on \([a,b]\). Therefore the theorem is proved.

4 Coupling the Alienor method with \(\mathbf{KBB}_{\mathbf{m}}\)

The fundamental principle of the reductive transformation [ 12 ] is to convert the multidimensional problem into a one-dimensional problem, so that the most effective optimization methods designed for a single variable can be applied. The basic idea is to cover the whole feasible set by a continuous and fairly regular parametric (\(\alpha \)-dense) curve. The multivariate function is thus transformed into a function of a single variable, and the problem becomes easier to solve since there is only one direction to explore.

We combine the Alienor method [ 23 , 24 ] with our algorithm (\(KBB_{m}\)).

Let \(f\) be a nonconvex, twice differentiable function on the box \(~ X=\Pi _{i=1}^{n}[a_{i},b_{i}]\) and consider the following global optimization problem

\begin{equation} (P)\left\{ \begin{array}{c} \min f(x_{1},\ldots ,x_{n}) \\ (x_{1},\ldots ,x_{n})\in X.\end{array}\right. \end{equation}
18

The idea is to transform a function of several variables to a function of a single variable. Then it becomes quite simple to determine the global minimum.

Definition 4

A subset \(S\) of \({\mathbb R}^{n}\) is \(\alpha \)-dense in \({\mathbb R}^{n}\) if, for any point \(M\) in \({\mathbb R}^{n}\), there exists at least one point \(N\) in \(S\) such that \(d(M,N)\leq \alpha \), where \(d\) is the Euclidean distance in \({\mathbb R}^{n}\). Furthermore, if \(S\) is defined parametrically by \(x_{i}=h_{i}(\theta )\), \(i=1,\ldots ,n\), we write \(h(\theta )=(h_{1}(\theta ),h_{2}(\theta ),\ldots ,h_{n}(\theta ))\).

Now, we apply the Alienor method to global optimization [ 23 , 24 ] .

Let \(x_{1},\ldots ,x_{n}\) be the \(n\) variables. The method expresses these variables in terms of a single parameter by means of a simple parametric curve that densifies \(X=\Pi _{i=1}^{n}[a_{i},b_{i}]\).

The transformation is

\(x_{i}(\theta )=\cos (\omega _{i}\theta +\varphi _{i})\), \(i=1,\ldots ,n\), where \((\omega _{i})\) and \((\varphi _{i})\) are slowly increasing sequences chosen so that the curve densifies \(I=[-1,1]^{n}\).

It is easy to extend this curve to \(X=\Pi _{i=1}^{n}[a_{i},b_{i}].\)

It is sufficient to set:

\begin{equation} x_{i}=\tfrac {1}{2}[(b_{i}-a_{i})h_{i}(\theta )+b_{i}+a_{i}], \end{equation}
19

where \(h_{i}(\theta )=\cos (\omega _{i}\theta +\varphi _{i}),i=1,\ldots ,n.\)

Hence, the original minimization problem \((P)\) is approximated by the following one-dimensional minimization problem

\begin{equation} (P_{t})\left\{ \begin{array}{c} \min f^{\ast }(\theta ) \\ \theta \in \lbrack 0,\theta _{\max }].\end{array}\right. \end{equation}
20

where \(f^{\ast }(\theta )=f(x_{1}(\theta ),\ldots ,x_{n}(\theta ))\), with \(x_{i}(\theta )\) given by 19.

\(\theta _{\max }\) is taken large enough that \((x_{1}(\theta ),\ldots ,x_{n}(\theta ))\) describes the whole box \(\Pi _{i=1}^{n}[a_{i},b_{i}]\).

Now, we can apply our algorithm \((KBB_{m})\) to obtain \(\theta ^{\ast }\), the optimal solution of the transformed problem. Finally, we apply the “step back” to recover an approximate solution of the original problem:

\begin{equation} x_{i}^{\ast }=\tfrac {1}{2}[(b_{i}-a_{i})h_{i}(\theta ^{\ast })+b_{i}+a_{i}], \end{equation}
21

where \(h_{i}(\theta ^{\ast })=\cos (\omega _{i}\theta ^{\ast }+\varphi _{i}),i=1,\ldots ,n\).

\includegraphics[scale=0.5]{fig2}
Figure 2 The Rastrigin Function for \(n=2\).
\includegraphics[scale=0.5]{fig3}
Figure 3 The transformed Rastrigin function for \(n=2\).
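The following Python sketch illustrates the reduction 19–21 on the two-dimensional Rastrigin function shown in figure 2 and figure 3. The sequences \(\omega _{i}\), \(\varphi _{i}\) and the value of \(\theta _{\max }\) are illustrative choices only (the method merely requires slowly increasing sequences), and a plain grid scan in \(\theta \) stands in for the \(KBB_{m}\) search of section 3; the quality of the recovered point is limited by the \(\alpha \)-density of the curve.

\begin{verbatim}
# Minimal sketch of the Alienor reduction (19)-(21) on the 2-D Rastrigin
# function, followed by a coarse 1-D grid scan in theta.  Assumptions: the
# sequences omega_i, phi_i and theta_max below are illustrative only, and the
# grid scan stands in for the KBB_m algorithm of section 3.
import numpy as np

def rastrigin(X):
    """Rastrigin function, evaluated column-wise; X has shape (n, m)."""
    return 10 * X.shape[0] + np.sum(X**2 - 10 * np.cos(2 * np.pi * X), axis=0)

def alienor_curve(theta, bounds, omega, phi):
    """x_i(theta) = 0.5*[(b_i - a_i)*cos(omega_i*theta + phi_i) + b_i + a_i], eq. (19)."""
    return np.stack([0.5 * ((b - a) * np.cos(w * theta + p) + b + a)
                     for (a, b), w, p in zip(bounds, omega, phi)])

if __name__ == "__main__":
    bounds = [(-5.12, 5.12), (-5.12, 5.12)]
    omega, phi = [1.0, 1.01], [0.0, 0.5]   # illustrative slowly increasing sequences
    theta_max = 1000.0                     # ad hoc; larger values densify the box further
    thetas = np.linspace(0.0, theta_max, 1_000_001)
    values = rastrigin(alienor_curve(thetas, bounds, omega, phi))
    t_star = thetas[int(np.argmin(values))]                               # 1-D search
    x_star = alienor_curve(np.array([t_star]), bounds, omega, phi)[:, 0]  # step back (21)
    print("theta* =", t_star, "  x* =", x_star, "  f*(theta*) =", float(values.min()))
\end{verbatim}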

5 Computational Aspects and Results

To measure the performance of our \(KBB_{m}\) algorithm, we perform a comparative study with \(KBB\) and \(\alpha BB\). These algorithms are implemented in the \(C\) programming language with double precision floating point, running on a computer with an Intel(R) Core(TM) i3 CPU at 2.40GHz. Numerical tests are performed in three parts on the set of test functions. In the first experiment, we compare the performances of the \(KBB\), \(\alpha BB \) and \(KBB_{m} \) algorithms on a set of 10 functions. Here, we include a method that computes the positive numbers \(\alpha \) and \(K\) [ 14 ] . The number of quadratic functions used in \(KBB_{m} \) at each iteration is fixed to \( n=16\), and the accuracy is fixed to \(\varepsilon =10^{-6} \). In the second experiment, we test the \(KBB_{m} \) algorithm according to the initial lower bound obtained for different numbers of quadratic functions, on a set of \(20\) functions. In the third experiment, we compare the performance of our approach with the generalization to multivariate functions developed in [ 17 ] , using the Rastrigin function, which has special features [ 11 ] .

In the result tables, we use the following notation:

  • \(f^{\ast }\) is the optimum obtained,

  • \(LB_{0}\) is the initial lower bound,

  • \(T_{CPU}\) is the execution time in seconds,

  • \(m\) is the total number of intervals,

  • \(m_{e}\) is the number of intervals eliminated,

  • \(LM\) is the number of local minima,

  • \(GM\) is the number of global minima,

  • \((*)\): an asterisk denotes that the bound is equal to the known global optimum \(f^{\ast }\) within six decimal digits of accuracy.

\begin{tabular}{clcccc}
Exp & \(f(x)\) & \([x^{L},x^{U}]\) & \(LM\) & \(GM\) & opt \\
1 & \(e^{-3x}-\sin ^{3}x\) & \([0,20]\) & 4 & 1 & \(-1\) \\
2 & \(\cos x-\sin (5x)+1\) & \([0.2,7]\) & 6 & 1 & \(-0.952897\) \\
3 & \(x+\sin (5x)\) & \([0.2,7]\) & 7 & 1 & \(-0.077590\) \\
4 & \(e^{-x}-\sin (2\pi x)\) & \([0.2,7]\) & 7 & 1 & \(-0.478362\) \\
5 & \(\ln (3x)\ln (2x)-0.1\) & \([0.2,7]\) & 1 & 1 & \(-0.141100\) \\
6 & \(\sqrt{x}\sin ^{2}x\) & \([0.2,7]\) & 3 & 2 & \(0\) \\
7 & \(2\sin x\, e^{-x}\) & \([0.2,7]\) & 2 & 1 & \(-0.027864\) \\
8 & \(2\cos x+\cos (2x)+5\) & \([0.2,7]\) & 3 & 2 & \(3.5\) \\
9 & \(\sin x\) & \([0,20]\) & 4 & 3 & \(-1\) \\
10 & \(\sin x\cos x-1.5\sin ^{2}x+1.2\) & \([0.2,7]\) & 3 & 2 & \(-0.451388\) \\
11 & \((x-x^{2})^{2}+(x-1)^{2}\) & \([-10,10]\) & 1 & 1 & \(0\) \\
12 & \(\frac{x^{2}}{20}-\cos x+2\) & \([-20,20]\) & 7 & 1 & \(1\) \\
13 & \(x^{2}-\cos (18x)\) & \([-5,5]\) & 29 & 1 & \(-1\) \\
14 & \(e^{x^{2}}\) & \([-10,10]\) & 1 & 1 & \(1\) \\
15 & \((x+\sin x)e^{-x^{2}}\) & \([-10,10]\) & 1 & 1 & \(-0.824239\) \\
16 & \(x^{4}-12x^{3}+47x^{2}-60x-20e^{-x}\) & \([-1,7]\) & 1 & 1 & \(-32.78126\) \\
17 & \(x^{6}-15x^{4}+27x^{2}+250\) & \([-4,4]\) & 2 & 2 & \(7\) \\
18 & \(x^{4}-10x^{3}+35x^{2}-50x+24\) & \([-10,20]\) & 2 & 2 & \(-1\) \\
19 & \(24x^{4}-142x^{3}+303x^{2}-276x+3\) & \([0,3]\) & 2 & 1 & \(-89\) \\
20 & \(\cos x+2\cos (2x)e^{-x}\) & \([0.2,7]\) & 2 & 1 & \(-0.918397\) \\
\end{tabular}

Table 1 Test functions.

\begin{tabular}{cccccc}
Exp & \(T_{cpu}\) & \(m\) & \(m_{e}\) & \(LB_{0}\) & \(f^{\ast }\) \\
1 & 0 & 47 & 24 & \(-273.76041\) & \(-0.99999\) \\
2 & 0 & 17 & 9 & \(-65.9109\) & \(-0.95203\) \\
3 & 0 & 31 & 16 & \(-118.96147\) & \(-0.07759\) \\
4 & 0 & 15 & 8 & \(-121.60896\) & \(-0.47797\) \\
5 & 0 & 11 & 6 & \(-527.67986\) & \(-0.14099\) \\
6 & 0 & 13 & 7 & \(-2733.29510\) & \(0.00199\) \\
7 & 0 & 13 & 7 & \(-18.57601\) & \(-0.02761\) \\
8 & 0 & 11 & 6 & \(-28.84495\) & \(3.56245\) \\
9 & 0 & 13 & 7 & \(-46.40909\) & \(-0.99997\) \\
10 & 0 & 17 & 9 & \(-29.62761\) & \(-0.45138\) \\
\end{tabular}

Table 2 Computational results for 10 functions by the \(\alpha BB\) algorithm.

\begin{tabular}{cccccc}
Exp & \(T_{cpu}\) & \(m\) & \(m_{e}\) & \(LB_{0}\) & \(f^{\ast }\) \\
1 & 2.211 & 55 & 28 & \(-564.01754\) & \(-1\) \\
2 & 10.645 & 25 & 13 & \(-116.08120\) & \(-0.95289\) \\
3 & 4.604 & 25 & 13 & \(-117.80163\) & \(-0.07758\) \\
4 & 3.576 & 57 & 29 & \(-121.20354\) & \(-0.47834\) \\
5 & 3.312 & 19 & 10 & \(-528.13263\) & \(-0.14110\) \\
6 & 2.854 & 29 & 15 & \(-6664.14641\) & \(0\) \\
7 & 3.132 & 67 & 34 & \(-5.50269\) & \(-0.02786\) \\
8 & 3.012 & 21 & 11 & \(-14.03655\) & \(3.5\) \\
9 & 2.293 & 15 & 8 & \(-45.19193\) & \(-1\) \\
10 & 3.460 & 27 & 14 & \(-29.66644\) & \(-0.45139\) \\
\end{tabular}

Table 3 Computational results for 10 functions by the \(KBB\) algorithm.

\begin{tabular}{cccccc}
Exp & \(T_{cpu}\) & \(m\) & \(m_{e}\) & \(LB_{0}\) & \(f^{\ast }\) \\
1 & 0 & 144 & 136 & \(-2.26691\) & \(-1\) \\
2 & 0 & 32 & 31 & \(-0.97784\) & \(-0.9529\) \\
3 & 0 & 48 & 46 & \(-0.09467\) & \(-0.07759\) \\
4 & 0 & 176 & 166 & \(-0.65053\) & \(-0.47820\) \\
5 & 0 & 48 & 46 & \(-1.73375\) & \(-0.14110\) \\
6 & 0 & 48 & 46 & \(-0.23383\) & \(0\) \\
7 & 0 & 112 & 106 & \(-0.04618\) & \(-0.02786\) \\
8 & 0 & 48 & 46 & \(3.49276\) & \(3.50001\) \\
9 & 0 & 64 & 61 & \(-1.00563\) & \(-1\) \\
10 & 0 & 64 & 61 & \(-0.45957\) & \(-0.45139\) \\
\end{tabular}

Table 4 Computational results for 10 functions by the \(KBB_{m}\) algorithm with \(n=16\).

\begin{tabular}{cccccccc}
Exp & \(n=2\) & \(n=4\) & \(n=8\) & \(n=16\) & \(n=32\) & \(n=64\) & \(n=128\) \\
1 & \(-152.72\) & \(-45.16\) & \(-7.15\) & \(-2.26\) & \(\ast \) & \(\ast \) & \(\ast \) \\
2 & \(-28.43\) & \(-7.92\) & \(-2.16\) & \(-0.97\) & \(\ast \) & \(\ast \) & \(\ast \) \\
3 & \(-28.45\) & \(-6.17\) & \(-1.34\) & \(-0.094\) & \(\ast \) & \(\ast \) & \(\ast \) \\
4 & \(-30.018\) & \(-8.85\) & \(-2.207\) & \(-0.65\) & \(-0.49\) & \(\ast \) & \(\ast \) \\
5 & \(-121.3\) & \(-29.66\) & \(-7.17\) & \(-1.73\) & \(-0.40\) & \(-0.149\) & \(-0.1417\) \\
6 & \(-448.19\) & \(-41.56\) & \(-3.542\) & \(-0.23\) & \(-0.0019\) & \(-0.0002\) & \(-0.00003\) \\
7 & \(-1.307\) & \(-0.340\) & \(-0.104\) & \(-0.04\) & \(-0.03\) & \(-0.02\) & \(-0.028\) \\
8 & \(0.33\) & \(2.54\) & \(3.394\) & \(3.49\) & \(\ast \) & \(\ast \) & \(\ast \) \\
9 & \(-11.23\) & \(-3.751\) & \(-1.141\) & \(-1.005\) & \(\ast \) & \(\ast \) & \(\ast \) \\
10 & \(-5.85\) & \(-1.98\) & \(-0.598\) & \(-0.459\) & \(-0.453\) & \(-0.452\) & \(-0.4515\) \\
11 & \(-16118.1\) & \(-1297.05\) & \(-107.4\) & \(-9.34\) & \(-0.67\) & \(-0.09\) & \(-0.01113\) \\
12 & \(1\) & \(\ast \) & \(\ast \) & \(\ast \) & \(\ast \) & \(\ast \) & \(\ast \) \\
13 & \(-20.42\) & \(-2.2\) & \(\ast \) & \(\ast \) & \(\ast \) & \(\ast \) & \(\ast \) \\
14 & \(\ast \) & \(\ast \) & \(\ast \) & \(\ast \) & \(\ast \) & \(\ast \) & \(\ast \) \\
15 & \(-173493.6\) & \(-27703.7\) & \(-7855.9\) & \(-769.2\) & \(-575.3\) & \(-48.21\) & \(-46.4\) \\
16 & \(-19351.54\) & \(-576.321\) & \(-45.152\) & \(-33.67\) & \(-32.84\) & \(-32.80\) & \(-32.789\) \\
17 & \(-14875.91\) & \(-2957.18\) & \(-362.63\) & \(-21.88\) & \(4.71\) & \(6.82\) & \(6.98\) \\
18 & \(-93572.1\) & \(-9016.69\) & \(-1032.09\) & \(-142.7\) & \(-27.31\) & \(-7.07\) & \(-2.4\) \\
19 & \(-578.6\) & \(-141.98\) & \(-95.79\) & \(-89.8\) & \(-89.1\) & \(-89.01\) & \(-89.001\) \\
20 & \(-6.906\) & \(-2.59\) & \(\ast \) & \(\ast \) & \(\ast \) & \(\ast \) & \(\ast \) \\
\end{tabular}

Table 5 \(LB_{0}\) values obtained by \(KBB_{m}\) for different numbers of quadratics \(n\).

\begin{tabular}{cccc}
Exp & \([a_{i},b_{i}]\) & \(f^{\ast }\) & \(T_{cpu}\) \\
1 & \([-5.12,5.12]\) & 0.000000 & 2.348 \\
2 & \([-5.12,6.12]\) & 0.000000 & 2.749 \\
3 & \([-3.14,2]\) & 0.000000 & 1.521 \\
4 & \([-3.14,2.5]\) & 0.000000 & 1.221 \\
5 & \([-10,10]\) & 0.000000 & 3.049 \\
6 & \([-20,20]\) & 0.000000 & 1.815 \\
7 & \([-0.5,1]\) & 0.000000 & 1.399 \\
8 & \([-1,1]\) & 0.000000 & 1.509 \\
9 & \([-3,9]\) & 0.000000 & 1.493 \\
10 & \([-0.02,7]\) & 0.000000 & 1.191 \\
\end{tabular}

Table 6 Results of \(Alg_{1}\); the results given in this table are obtained in [ 6 ] .

\begin{tabular}{cccc}
Exp & \([a_{i},b_{i}]\) & \(f^{\ast }\) & \(T_{cpu}\) \\
1 & \([-5.12,5.12]\) & 0.000000 & 0 \\
2 & \([-5.12,6.12]\) & 0.000000 & 0 \\
3 & \([-3.14,2]\) & 0.000000 & 0 \\
4 & \([-3.14,2.5]\) & 0.000000 & 0 \\
5 & \([-10,10]\) & 0.000000 & 0 \\
6 & \([-20,20]\) & 0.000000 & 0 \\
7 & \([-0.5,1]\) & 0.000000 & 0 \\
8 & \([-1,1]\) & 0.000000 & 0 \\
9 & \([-3,9]\) & 0.000000 & 0 \\
10 & \([-0.02,7]\) & 0.000000 & 0 \\
\end{tabular}

Table 7 Results obtained by our algorithm (\(Alg_{2}\)).

The execution time required to reach the optimal value is considered a reliable criterion of algorithm performance. The numerical results summarized in table 3 and table 4 show that the performance of the proposed method is clearly better than that of the \(KBB\) method. The quality of the initial lower bound obtained remains an important criterion for measuring the validity of the underestimator; table 2, table 3 and table 4 give a comparative study of the quality of the initial lower bound found by the three algorithms, and they show that our method is better than the other two. Table 5 further confirms the competence of our method: by doubling the number of quadratics, the values of the lower bound are improved. Table 6 and table 7 clearly show that our approach is better than the method developed in [ 17 ] in terms of execution time for multivariate global optimization. The results obtained in two dimensions are also encouraging and promising in terms of execution time.

6 Conclusion

We presented a method for underestimating a nonconvex objective based on piecewise quadratic functions whose minima are explicit. The comparison of the lower bounds favors these quadratics over the alternatives, which guarantees the underestimation of the objective. This approach is validated within a deterministic Branch and Bound framework, fully detailed above, which certifies a bracketing of the value of the global minimum at termination. The extension of this technique to the multidimensional case preserves the advantages already obtained: the coupling of the Alienor method with Branch and Bound reduces the dimension of the problem and makes it easier to solve. This coupling has been demonstrated here for two-dimensional problems and proved very effective. Many numerical experiments were performed and confirmed the effectiveness of this new acceleration technique. The performance of the proposed procedure depends on the quality of the chosen lower bound of \(f\); our piecewise quadratic lower bounding functions are better than the two underestimators presented in [ 15 , 5 ] .


Bibliography

1

D. Aaid, A. Noui, H. A. Le Thi, A. Zidna, A modified classical algorithm ALPT4C for solving a capacitated four-index transportation problem, Acta Math. Vietnam., 37 (2012) no. 3, pp. 379–390. http://journals.math.ac.vn/acta/stories/pdf1/Vol_37_No_3/Bai6_Dja_Amel_An_Ahmed_Acta_11_51.png

2

D. Aaid, Étude numérique comparative entre des méthodes de résolution d’un problème de transport à quatre indices avec capacités, Thèse de l’Université de Constantine, 2010. http://bu.umc.edu.dz/theses/math/AAI5587.pdf

3

D. Aaid, A. Noui, M. Ouanes, A piecewise quadratic underestimation for global optimization, conference paper JSLAROMAD II Tiziouzou, Algeria, 2013.

4

D. Aaid, A. Noui, A. Zidna, M. Ouanes, A quadratic branch and bound with Alienor method for global optimization, conference paper MAGO’2014, Malaga, Spain, 2014.

5

C.S. Adjiman, I.P. Androulakis, C.A. Floudas, A global optimization method, \(\alpha \)BB, for general twice differentiable NLPs – II. Implementation and computational results, Comput. Chem. Eng., 22 (1998) no. 9, pp. 1159–1179. https://doi.org/10.1016/S0098-1354(98)00218-X

6

A. Guillez, Alienor fractal algorithm for multivariable minimization problems, Math. Comput. Modelling, 14 (1990), pp. 245–247. https://doi.org/10.1016/0895-7177(90)90184-O

7

I.G. Akrotirianakis, C.A. Floudas, Computational experience with a new class of convex underestimators: box-constrained NLP problems, J. Global Optim., 29 (2004) no. 3, pp. 249–264. https://doi.org/10.1023/B:JOGO.0000044768.75992.10

8

O. Bendiab, Y. Cherruault, A new method for global optimization in two dimensions, Int. J. Bio-Med. Comput., 38 (1995) no. 1, pp. 71–73. https://doi.org/10.1016/0020-7101(94)01039-4

9

S. Caratzoulas, C.A. Floudas, A trigonometric convex underestimator for the base functions in Fourier space, J. Optim. Theory Appl., 124 (2005) no. 2, pp. 339–362. https://doi.org/10.1007/s10957-004-0940-2

10

P.G. Ciarlet, The Finite Element Method for Elliptic Problems, Studies in Mathematics and Its Application, 1979.

11

C. Grosan, A. Abrahamy, On a class of global optimization test functions, Neural Netw. World, 19 (2009). http://www.softcomputing.net/nnw2009.pdf

12

C. De Boor, A Practical Guide to Splines, Appl. Math. Sci., Springer Verlag, 1978.

13

G. Djaouida, A. Ziadi, Reducing transformation and global optimization, Appl. Math. Comput., 218 (2012), pp. 5848–5860. https://doi.org/10.1016/j.amc.2011.11.053

14

J. Ninin, Méthodes d’optimisation globale basées sur l’analyse d’intervalle pour la résolution des problèmes avec contraintes, Thèse Doctorat, Université de Toulouse, 2010. https://theses.hal.science/tel-00580651

15

H.A. Le Thi, M. Ouanes, Convex quadratic underestimation and branch and bound for univariate global optimization with one nonconvex constraint, RAIRO Oper. Res., 40 (2006), pp. 285–302. https://doi.org/10.1051/ro:2006024

16

M. Rahal, A. Ziadi, A new extension of Piyavskii’s method to Holder functions of several variables, Appl. Math. Comput., 218 (2012), pp. 478–488. https://doi.org/10.1016/j.amc.2007.07.067

17

M. Ouanes, H. A. Le Thi, P. N. Trong, A. Zidna, New quadratic lower bound for multivariate functions in global optimization, Math. Comput. Simulation, 109 (2015), pp. 197–211. https://doi.org/10.1016/j.matcom.2014.04.013

18

M. Ouanes, The main diagonal method in C1 global optimization problem, Int. J. Appl. Math., 25 (2012) no. 5, pp. 663–672. https://www.diogenes.bg/ijam/contents/2012-25/v25_5/7.pdf

19

M. Ouanes, A new approach for nonconvex SIP, Int. J. Appl. Math., 81 (2012) no. 3, pp. 479–486. https://www.ijpam.eu/contents/2012-81-3/8/8.pdf

20

M. Ouanes, A combined descent gradient method and discretization method for convex SIP, Int. J. Appl. Math., 25 (2012) no. 4, pp. 503–513. https://www.diogenes.bg/ijam/contents/2012-25/v25_4/5.pdf

21

M. Ouanes, New underestimator for multivariate global optimization with box constraints, Int. J. Pure Appl. Math., 84 (2013) no. 1, pp. 73–83. http://dx.doi.org/10.12732/ijpam.v84i1.5

22

A. Noui, D. Aaid, M. Ouanes, An efficient algorithm for the Bernstein polynomial approach to global optimization, conference paper JSLAROMAD II, Tiziouzou, Algeria, 2013.

23

T. Benneouala, Y. Cherruault, Alienor method for global optimization with a large number of variables, Kybernetes, 34 (2005) nos. 7-8, pp. 1104–1111. https://doi.org/10.1108/03684920510605911

24

Y. Cherruault, G. Mora, Optimisation globale: théorie des courbes alpha-dense, Economica, Paris, 2005.