A new auxiliary function approach for inequality constrained global optimization problems

Introduction

There exists a very rich theory for the solution of the problem (P) [5], where (P) denotes the inequality constrained problem $\min f(x)$ subject to $g_j(x) \le 0$, $j = 1, 2, \dots, m$. One of the traditional but effective methods to solve the problem (P) is the penalty function method [8], which transforms a constrained optimization problem into an unconstrained one. The method constructs a barrier on the boundary of the set of feasible solutions, defined as $D_0 := \{x \in \mathbb{R}^n : g_j(x) \le 0,\ j = 1, 2, \dots, m\}$, where $D_0$ is assumed to be non-empty. In order to construct the barrier, functions such as $b(t) = -\log(-t)$ and $b(t) = \max(t, 0)$ are used. The penalized objective function is defined as
$$F(x, \rho) = f(x) + \rho \sum_{j=1}^{m} b(g_j(x)), \qquad (1)$$
and the problem (P) is restated as $(P_\rho)$: $\min_{x \in \mathbb{R}^n} F(x, \rho)$, where $\rho > 0$ is a penalty parameter. If $b(t) = \max(t, 0)$ is used in formula (1), the penalty function is called an exact penalty function according to Zangwill [9]. It can be observed that the exact penalty function may be non-smooth. When the penalty function is non-smooth, one conventional remedy is a smoothing approach, which is based on modifying the objective function or approximating it by smooth functions [10]. Different types of valuable techniques and algorithms have been developed to improve smoothing approaches [11][12][13][14]. In recent years, smoothing approaches have been applied to many non-smooth problems such as min-max problems [15,16], exact penalty functions [17][18][19][20], and others [21].
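To make the classical construction concrete, the following minimal Python sketch implements formula (1) with Zangwill's exact choice $b(t) = \max(t, 0)$. The objective, constraint, penalty value and solver below are illustrative assumptions of ours, not taken from [8] or [9].

```python
import numpy as np
from scipy.optimize import minimize

def exact_penalty(f, gs, rho):
    """F(x, rho) = f(x) + rho * sum_j max(g_j(x), 0): formula (1) with b(t) = max(t, 0)."""
    def F(x):
        return f(x) + rho * sum(max(g(x), 0.0) for g in gs)
    return F

# Illustrative problem: min (x1 - 2)^2 + (x2 - 1)^2  s.t.  x1 + x2 - 2 <= 0.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
g = lambda x: x[0] + x[1] - 2.0

# A derivative-free local method is used because max(t, 0) makes F non-smooth.
res = minimize(exact_penalty(f, [g], rho=100.0), x0=np.array([0.0, 0.0]),
               method="Nelder-Mead")
print(res.x)  # approaches the constrained minimizer (1.5, 0.5) for large rho
```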
If the problem (P) or $(P_\rho)$ has just one minimizer, then many local optimization methods combined with the penalty method can be used to solve it; but if it has multiple local minimizers, most of the well-known methods cannot locate the global one [22]. Studies on global optimization have increased extensively among the research areas of optimization [23,24]. There are many valuable studies on global optimization based on deterministic, stochastic and heuristic approaches [25,26]. Most global optimization techniques are proposed for unconstrained problems, but by combining the penalty function method with a global optimization algorithm, the global solution of the problem (P) can be obtained. One of the important global optimization approaches is the auxiliary function approach, which includes the Tunneling Method [27], the Filled Function Method [28,29], the Global Descent Method [30] and the Cut-Peak Function Method [31]. These methods are based on finding a minimizer lower than the current one by making a suitable modification of the objective function. The modified function is generally called an auxiliary function (filled function, tunneling function, etc.) [33]. In the next section, we give some preliminary definitions. In Section 3, we introduce a new penalty function in order to transform the problem (P) into an unconstrained problem. In Section 4, we present a minimization algorithm and convergence results. In Section 5, we apply the algorithm to important test problems. In the last section, we give some concluding remarks.

Preliminaries
We assume that the set $D_0$ is closed and bounded and that the function $f$ has a finite number of local minimizers in $D_0$. Throughout the paper, $x_k^*$ denotes the $k$-th local minimizer of $f$, whereas $x^*$ denotes the global minimizer.
$\|x\| = \sqrt{\sum_{k=1}^{n} x_k^2}$ denotes the Euclidean norm in $\mathbb{R}^n$.

Definition 1 ([13]). Let $f : \mathbb{R}^n \to \mathbb{R}$ be a continuous function. The function $\tilde{f} : \mathbb{R}^n \times \mathbb{R}_+ \to \mathbb{R}$ is called a smoothing function of $f(x)$ if $\tilde{f}(\cdot, \beta)$ is continuously differentiable in $\mathbb{R}^n$ for any fixed $\beta$, and for any $x \in \mathbb{R}^n$, $\lim_{z \to x,\, \beta \to 0} \tilde{f}(z, \beta) = f(x)$.
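As a standard illustration of Definition 1 (our example, not one from [13]): $f(x) = |x|$ is non-smooth at the origin, while $\tilde{f}(x, \beta) = \sqrt{x^2 + \beta^2}$ is continuously differentiable in $x$ for every fixed $\beta > 0$ and converges to $f$ as $\beta \to 0$.

```python
import numpy as np

def f(x):
    return np.abs(x)                     # non-smooth at x = 0

def f_tilde(x, beta):
    return np.sqrt(x ** 2 + beta ** 2)   # C^1 in x for any fixed beta > 0

# The approximation gap shrinks as beta -> 0, matching Definition 1.
for beta in (1.0, 1e-2, 1e-4):
    print(beta, f_tilde(0.5, beta) - f(0.5))
```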

A New Penalty Function
In this section, we present a new penalty approach for the problem (P). Let us define the sets $D_j = \{x \in \mathbb{R}^n : g_j(x) \le 0\}$ for $j = 1, 2, \dots, m$. It can be observed that $\cap_{j=1}^{m} D_j = D_0$. The main idea of the exact penalty function approach is to construct a barrier at the boundary of $D_0$ such that no local (global) solver can find a point outside the set $D_0$. Based on this idea, we define a new penalty function as
$$F(x, \rho) = f(x) + \rho\,\|x - x_0\| \sum_{j=1}^{m} \chi_{D_j^c}(x), \qquad (2)$$
where $\rho > 0$, $x_0 \in D_0$ and
$$\chi_{D_j^c}(x) = \begin{cases} 1, & g_j(x) > 0, \\ 0, & g_j(x) \le 0, \end{cases}$$
for $j = 1, 2, \dots, m$. Since the function $\chi_{D_j^c}(x)$ is non-smooth, we apply the smoothing approach to this function in order to make it smooth. We design the following function:
$$\tilde{\chi}_{D_j^c}(x) = \begin{cases} 0, & t \le 0, \\ R_1(t), & 0 < t < \varepsilon, \\ 1, & t \ge \varepsilon, \end{cases} \qquad (3)$$
where $\varepsilon > 0$, $R_1(t) = \frac{3}{\varepsilon^2}t^2 - \frac{2}{\varepsilon^3}t^3$ and $t = g_j(x)$, $j = 1, 2, \dots, m$. By using $R_1$ in formula (3), the obtained smoothing function is continuously differentiable. If a higher-order transition function $R_2$ is used in (3) instead of $R_1$, the obtained smoothing function is twice continuously differentiable. The function $R_i$ $(i = 1, 2, \dots, k)$ is called a smooth transition function. Now, replacing $\chi_{D_j^c}$ in (2) by $\tilde{\chi}_{D_j^c}$, we obtain the surrogate problem $(\tilde{P}_\rho)$ as follows:
$$(\tilde{P}_\rho) \quad \min_{x \in \mathbb{R}^n} \tilde{F}(x, \rho, \varepsilon), \quad \text{where } \tilde{F}(x, \rho, \varepsilon) = f(x) + \rho\,\|x - x_0\| \sum_{j=1}^{m} \tilde{\chi}_{D_j^c}(x).$$
A code sketch of this construction is given at the end of this section.

Theorem 1. Let $x^*$ be a local minimizer of the problem $(\tilde{P}_\rho)$. If $\rho$ is sufficiently large, then $x^* \in D_0$.

Proof. Suppose, to the contrary, that $x^* \notin D_0$. Then there exists $j$ such that $t = g_j(x^*) > 0$. We have two cases.

Case 1. Let $t \ge \varepsilon$. Then $\tilde{\chi}_{D_j^c}(x^*) = 1$, and the stationarity condition $\nabla \tilde{F}(x^*, \rho, \varepsilon) = 0$ determines a threshold value $\rho_1$ of the penalty parameter; it can be seen that $\rho_1$ is finite.

Case 2. Let $0 < t < \varepsilon$. Then $\tilde{\chi}_{D_j^c}(x^*) = R_1(t) > 0$, and the same argument determines a finite threshold value $\rho_2$; if one chooses any penalty parameter larger than $\rho_2$, then the gradient of $\tilde{F}$ at $x^*$ is non-zero, contradicting that $x^*$ is a local minimizer.

As a consequence, if the parameter $\rho$ in (2) is chosen as $\rho > \max\{\rho_1, \rho_2\}$, the point $x^*$ cannot be outside of $D_0$.

Corollary 1. Let $x^*$ be a solution of $(\tilde{P}_\rho)$ for sufficiently large $\rho$. Then $x^*$ is a solution of (P).
Proof. From Theorem 1, we have $x^* \in D_0$. Then, for any $x \in D_0$, we obtain $f(x^*) = \tilde{F}(x^*, \rho, \varepsilon) \le \tilde{F}(x, \rho, \varepsilon) = f(x)$. This completes the proof.
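The sketch promised above: a minimal Python rendering of the smoothed penalty $\tilde{F}$, under our reading of the formulas in this section. The smoothstep form of $R_1$ and the product structure of $\tilde{F}$ follow the reconstruction in (2) and (3), so treat them as assumptions rather than the paper's verbatim definitions.

```python
import numpy as np

def R1(t, eps):
    # C^1 transition on [0, eps]: R1(0) = 0, R1(eps) = 1, R1'(0) = R1'(eps) = 0.
    return (3.0 / eps ** 2) * t ** 2 - (2.0 / eps ** 3) * t ** 3

def chi_tilde(t, eps):
    # Smoothed indicator of the infeasible region {g_j(x) > 0}, as in (3).
    if t <= 0.0:
        return 0.0
    if t >= eps:
        return 1.0
    return R1(t, eps)

def smoothed_penalty(f, gs, x0, rho, eps):
    """F_tilde(x) = f(x) + rho * ||x - x0|| * sum_j chi_tilde(g_j(x))."""
    x0 = np.asarray(x0, dtype=float)
    def F(x):
        x = np.asarray(x, dtype=float)
        barrier = sum(chi_tilde(g(x), eps) for g in gs)
        return f(x) + rho * np.linalg.norm(x - x0) * barrier
    return F
```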

Algorithms for Minimization Procedure
In this section, we propose our new algorithm to find the global optimal point by considering the problem $(\tilde{P}_\rho)$.

Algorithm

Step 1. Determine $x_0$, $\rho_0 = 10$, $\varepsilon_0 > 0$, $N > 1$, $0 < \eta < 1$, set $j = 1$ and go to Step 2.
Step 2. Use $x_{j-1}$ as an initial point and apply one of the global optimization algorithms to solve the problem $(\tilde{P}_\rho)$. Let $x_j$ be the solution.
Step 3. If $x_j \in \operatorname{int} D_0$, then stop the algorithm; $x_j$ is the optimal solution. Otherwise, go to Step 4.
Step 4. If $x_j$ is $\varepsilon$-feasible for (P), then stop; $x_j$ is the optimal solution. Otherwise, take $\rho_j = N\rho_{j-1}$, $\varepsilon_j = \eta\varepsilon_{j-1}$ and $j = j + 1$, then go to Step 2. (A code sketch of the whole loop is given below.)
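A minimal sketch of the outer loop of Steps 1 to 4. Since the paper's auxiliary function solver of [21,33] is not reproduced here, a crude multistart local search stands in for the global phase of Step 2; `smoothed_penalty` is the sketch from the previous section, and the parameter defaults follow Step 1. The stand-in solver, tolerance and start count are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def run_algorithm(f, gs, x0, rho0=10.0, eps0=1e-2, N=3.0, eta=0.1,
                  max_iter=20, feas_tol=1e-6, n_starts=10, seed=0):
    rng = np.random.default_rng(seed)
    x, rho, eps = np.asarray(x0, dtype=float), rho0, eps0
    for _ in range(max_iter):
        F = smoothed_penalty(f, gs, x0, rho, eps)
        # Step 2 (stand-in global phase): multistart Nelder-Mead around x.
        starts = [x] + [x + 0.5 * rng.standard_normal(x.size)
                        for _ in range(n_starts)]
        x = min((minimize(F, s, method="Nelder-Mead").x for s in starts), key=F)
        # Steps 3-4: stop once x is (eps-)feasible; otherwise tighten parameters.
        if all(g(x) <= feas_tol for g in gs):
            return x
        rho, eps = N * rho, eta * eps
    return x
```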
In Step 2 of the algorithm, $x_j$ is the global optimal solution of the problem $(\tilde{P}_\rho)$ depending on the parameter $\varepsilon$. In order to obtain the global solution, any of the global optimization methods can be used. We use the auxiliary function based global optimization method studied in [21,33]. The Auxiliary Function Method (AFM) is very effective in terms of numerical results, as illustrated in [21]. Our auxiliary function $\tilde{\varphi}$ is constructed as in [21,33], where $\alpha$ and $\beta$ are real parameters and the function $H$, defined on $\mathbb{R}_+$, satisfies the properties listed there. At Steps 3 and 4, the feasibility of the solution is checked and the stopping conditions are declared. In order to guarantee that the algorithm works correctly, we prove the following theorems.
Theorem 2. Assume that the sequence $\{x_j\}$ produced by the Algorithm has a limit point $x^*$. Then $x^* \in D_0$.
Proof. Assume $x^*$ is a limit point of $\{x_j\}$. Then there exists a set $J \subset \mathbb{N}$ such that $x_j \to x^*$ for $j \in J$. Suppose, to the contrary, that $x^* \notin D_0$; i.e., for sufficiently large $j \in J$, there exist $\delta_0 > 0$ and $i_0 \in \{1, 2, \dots, m\}$ such that $g_{i_0}(x_j) \ge \delta_0 > 0$. We have two cases.

Case 1. $g_{i_0}(x_j) \ge \delta_0 \ge \varepsilon_j > 0$. Since $x_j$ is the global minimizer for the $j$-th values of the parameters $\rho_j, \varepsilon_j$, for any $x \in D_0$ we have
$$f(x) = \tilde{F}(x, \rho_j, \varepsilon_j) \ge \tilde{F}(x_j, \rho_j, \varepsilon_j) \ge f(x_j) + \rho_j \|x_j - x_0\|.$$
If $j \to \infty$, then $\rho_j \to \infty$ and $\rho_j \|x_j - x_0\| \to \infty$ (since $x_j \notin D_0$ and $\|x_j - x_0\| > 0$). Thus, $f(x)$ takes infinite values on $D_0$, which contradicts the boundedness of $f$ on $D_0$.

Case 2. $\varepsilon_j \ge t = g_{i_0}(x_j) \ge \delta_0 > 0$. Since $x_j$ is the global minimizer for the $j$-th values of the parameters $\rho_j, \varepsilon_j$, the same estimate with $\tilde{\chi}_{D_{i_0}^c}(x_j) = R_1(t) \ge R_1(\delta_0) > 0$ shows that $f(x)$ takes infinite values on $D_0$, which again contradicts the boundedness of $f$ on $D_0$.

From Cases 1 and 2, we obtain the result.

Corollary 2. Let $\{x_j\}$ be generated by the Algorithm with $\eta N < 1$. If $\{x_j\}$ has a limit point, then the limit point is a solution of (P).
Proof. Let $x^*$ be a limit point of $\{x_j\}$. From Theorem 2, we have $x^* \in D_0$. Since $x_j$ is a global minimizer of $\tilde{F}(\cdot, \rho_j, \varepsilon_j)$, for any $x \in D_0$ we obtain $f(x_j) \le \tilde{F}(x_j, \rho_j, \varepsilon_j) \le \tilde{F}(x, \rho_j, \varepsilon_j) = f(x)$; letting $j \to \infty$ gives $f(x^*) \le f(x)$. This completes the proof.

Numerical Examples
In this section, we apply our algorithm to test problems. The proposed algorithm is programmed in Matlab. Numerical results show the efficiency of the method, and the detailed results are presented in the tables for all problems. In these tables, we use some symbols in order to abbreviate the expressions; in particular, $j$ denotes the number of iterations.

Problem 1. Let us consider
$$\min f(x) = x_1^2 + x_2^2 - \cos(17 x_1) - \cos(17 x_2) + 3.$$
We choose $x_0 = (1, 1)$ as a starting point and $\rho_0 = 10$, $\varepsilon_0 = 0.01$, $\eta = 0.1$ and $N = 3$. The results are shown in Table 1. Considering $(\tilde{P}_\rho)$, the global minimum is obtained at the point $x^* = (0.7254, 0.3993)$ with the corresponding value $1.8376$. In the paper [34], the obtained global minimum point is $x^* = (0.72540669, 0.3992805)$ with the corresponding value $1.837623$. Our algorithm finds the same point as [34].
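A usage sketch for the objective of Problem 1 with the `run_algorithm` stand-in above. The problem's original constraints are not reproduced in the text, so the single constraint `g` below is a hypothetical placeholder, added only to make the call runnable; it does not reproduce the setting of [34].

```python
import numpy as np

f = lambda x: x[0] ** 2 + x[1] ** 2 - np.cos(17 * x[0]) - np.cos(17 * x[1]) + 3
g = lambda x: x[0] ** 2 + x[1] ** 2 - 4.0  # hypothetical constraint, not from [34]

# x0 = (1, 1) is feasible for the placeholder constraint above.
x_star = run_algorithm(f, [g], x0=np.array([1.0, 1.0]),
                       rho0=10.0, eps0=0.01, N=3.0, eta=0.1)
print(x_star)
```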
Problem 7. Let us consider the example in [34]:
$$\min f(x) = 10 x_2 + 2 x_3 + x_4 + 3 x_5 + 4 x_6.$$
We choose $x_0 = (0, 0, \dots, 0)$ as a starting point and $\rho_0 = 10$, $\varepsilon_0 = 0.01$, $\eta = 0.1$ and $N = 4$ for the Algorithm. The results are shown in Table 7. In [34], where three algorithms based on a new smoothing technique are offered, an approximate solution is reported after 4, 3 and 13 iterations by Algorithms I, II and III, respectively; note, however, that Algorithm II of [34] does not actually find the solution. Our Algorithm finds an approximate solution in 4 iterations.

Conclusion
In this study, we propose a new exact penalty function for continuous constrained optimization and, based on this penalty approach, construct a new minimization algorithm. We apply the algorithm to test problems and obtain satisfactory results.
We also propose a new smoothing approach for non-smooth penalty functions; it provides good approximations to the non-smooth penalty functions, is easy to apply and has a simple formulation.
The results indicate that the Algorithm can be used for large scale optimization problems. By applying the minimization algorithm, the optimum value is found rapidly, and the algorithm exhibits high accuracy in locating the optimum point. We use the auxiliary function method as the global optimization method in the algorithm, but other algorithms such as DIRECT [38], Kriging-based techniques [39] or heuristic algorithms [40,41] can also be used.