Approximate solution algorithm for multi-parametric non-convex programming problems with polyhedral constraints

In this paper, we develop a novel algorithmic approach for the solution of multi-parametric non-convex programming problems with continuous decision variables. The basic idea of the proposed approach is successive convex relaxation of each non-convex term combined with sensitivity analysis theory. The proposed algorithm is implemented in MATLAB, and numerical examples are presented to illustrate the effectiveness and applicability of the proposed method on multi-parametric non-convex programming problems with polyhedral constraints.


Introduction
Variability and uncertainty emerge at all levels of industry, from the detailed process description to multi-site manufacturing. The presence of parameter variability in most real process design and operation problems necessitates the consideration of uncertainties in the mathematical programming models used to simulate and optimize the design, performance, and operation of processes [1]. Many process engineering problems involve varying parameters, attributed to fluctuations in resources, market requirements, prices, and so on, which can affect the feasibility and economics of a project [2], [3].
Depending on how the varying parameters are described, different solution approaches have been proposed. Among these, the parametric programming approach is based on sensitivity analysis theory. Sensitivity analysis provides solutions in the neighborhood of the nominal value of the varying parameters, whereas parametric programming provides a complete map of the optimal solution in the space of the varying parameters [3].
The key advantage of the multi-parametric programming approach is that the optimal solution is obtained as a function of the varying parameters without exhaustively enumerating the entire space of the varying parameters [2].
Algorithms and applications of multi-parametric programming have been studied extensively in the literature [1,2,4,5,6,7], but all existing algorithms are limited to convex multi-parametric programming problems, even though practical problems may involve non-convex formulations. In the last decade, Pistikopoulos et al. [3] proposed an algorithmic strategy for parametric non-convex programming problems, but in practice their algorithm is costly and limited to bilinear terms, in addition to linear terms, in the objective function and constraint sets, and to a single perturbation parameter. The algorithm is difficult to use even when convex nonlinear terms appear in the objective alongside bilinear terms, because of the difficulty of comparing nonlinear functions of the parameters within the algorithm. Furthermore, the efficiency of the algorithm in [3] degrades considerably when the parameter is a vector rather than a scalar. In general, the conditions required by existing algorithms do not hold for general multi-parametric non-convex programming problems.
In this paper we develop a novel global optimization technique, obtained by modifying the key procedures in [3], for solving a more general class of multi-parametric non-convex programming problems using sensitivity analysis theory, which overcomes the limitations of the existing algorithmic methods. In particular, the solution approach effectively solves parametric problems with any twice continuously differentiable objective function and polyhedral constraints.
This paper is organized as follows. In Section 2, preliminary concepts are described, including the convexification of non-convex terms and the theory of nonlinear multi-parametric programming with the associated critical regions; the major difficulties in obtaining a global parametric solution with the existing methods are also discussed there. The proposed algorithm for the solution of multi-parametric non-convex programming problems with affine constraints is described in Section 3, together with illustrative examples. The paper ends with concluding remarks in Section 4.

Convex relaxation
If an optimization problem contains non-convex terms, the Karush-Kuhn-Tucker (KKT) conditions may fail to produce optimal solutions. To apply the KKT conditions as a solution mechanism to non-convex problems, every non-convex term, whether in the objective or in the constraints, must be underestimated by a convex envelope that approximates it by a convex function (see, for instance, [8,9,10]).
The convex envelope of a bilinear term b_ij x_i x_j taken over the rectangle R = {(x_i, x_j) : x_i^L ≤ x_i ≤ x_i^U, x_j^L ≤ x_j ≤ x_j^U} can be found as follows.

Theorem 1 ([8]). Let b_ij, for i = 1, 2, ..., n − 1 and j = i + 1, ..., n, be a real number. Then the convex envelope of the bilinear term b_ij y_i y_j over R is

\[
\mathrm{VexR}[b_{ij}y_iy_j] =
\begin{cases}
b_{ij}\max\{y_j^L y_i + y_i^L y_j - y_i^L y_j^L,\; y_j^U y_i + y_i^U y_j - y_i^U y_j^U\}, & b_{ij} > 0,\\
b_{ij}\min\{y_j^U y_i + y_i^L y_j - y_i^L y_j^U,\; y_j^L y_i + y_i^U y_j - y_i^U y_j^L\}, & b_{ij} < 0.
\end{cases}
\tag{1}
\]

Univariate concave functions can be trivially underestimated by their linearization at the lower bound of the variable range [9]. Thus, the convex envelope of a concave function f(x) over the interval [x^L, x^U] is the linear function

\[
\mathrm{CEnve}[f](x) = f(x^L) + \frac{f(x^U) - f(x^L)}{x^U - x^L}\,(x - x^L).
\]

All other general non-convex terms, for which customized lower bounds do not exist, are underestimated as proposed in [9]. A generic non-convex function f(x) is underestimated over the entire domain [x^L, x^U] by the function

\[
F(x) = f(x) + \alpha \sum_{i=1}^{n} (x_i^L - x_i)(x_i^U - x_i),
\tag{2}
\]

where α is a positive scalar satisfying α ≥ max{0, −(1/2) min_{x^L ≤ x ≤ x^U} min_i λ_i(x)}, and the λ_i's are the eigenvalues of the Hessian matrix H_f(x) of f(x).
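To make these constructions concrete, the following minimal Python sketch evaluates the three underestimators. It is an illustration only: the paper's implementation is in MATLAB and is not reproduced here, the function names are ours, and the sampling-based estimate of α is a heuristic (a rigorous α requires an interval bound on the Hessian eigenvalues over the box, as in [9]).

```python
import numpy as np

def mccormick_vex(b, yi, yj, yiL, yiU, yjL, yjU):
    """Convex envelope (1) of the bilinear term b*yi*yj over
    the rectangle [yiL, yiU] x [yjL, yjU]."""
    if b >= 0:
        return b * max(yiL * yj + yjL * yi - yiL * yjL,
                       yiU * yj + yjU * yi - yiU * yjU)
    return b * min(yiL * yj + yjU * yi - yiL * yjU,
                   yiU * yj + yjL * yi - yiU * yjL)

def concave_envelope(f, x, xL, xU):
    """Linear underestimator of a concave univariate f on [xL, xU]:
    the secant through (xL, f(xL)) and (xU, f(xU))."""
    return f(xL) + (f(xU) - f(xL)) / (xU - xL) * (x - xL)

def alpha_bb(f, hess, x, xL, xU, n_samples=200):
    """alpha-BB underestimator (2): F(x) = f(x) + alpha*sum((xL-x)*(xU-x)).
    alpha is estimated here by sampling Hessian eigenvalues over the box,
    which gives a plausible but not guaranteed underestimator."""
    rng = np.random.default_rng(0)
    lam_min = min(np.linalg.eigvalsh(hess(p)).min()
                  for p in rng.uniform(xL, xU, size=(n_samples, len(xL))))
    alpha = max(0.0, -0.5 * lam_min)
    return f(x) + alpha * np.dot(xL - x, xU - x)
```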
Multi-parametric nonlinear programming

Consider the multi-parametric nonlinear programming problem

\[
z(\theta) = \min_{x} f(x, \theta)
\quad \text{s.t.} \quad
g_i(x, \theta) \le 0, \; i = 1, \dots, p, \qquad
h_j(x, \theta) = 0, \; j = 1, \dots, q, \qquad
\theta \in \Theta \subseteq \mathbb{R}^m,
\tag{3}
\]

where f, the g_i's and the h_j's are twice continuously differentiable in x and θ. Assume also that f is a convex function and that the g_i's and h_j's define a convex set.
The first-order KKT optimality conditions for (3) are

\[
\nabla_x f(x, \theta) + \sum_{i=1}^{p} \lambda_i \nabla_x g_i(x, \theta) + \sum_{j=1}^{q} \mu_j \nabla_x h_j(x, \theta) = 0,
\qquad
\lambda_i\, g_i(x, \theta) = 0, \;\; \lambda_i \ge 0, \;\; i = 1, \dots, p,
\qquad
h_j(x, \theta) = 0, \;\; j = 1, \dots, q,
\tag{4}
\]

where λ and μ are the vectors of Lagrange multipliers. The main sensitivity result for (3) is derived directly from system (4) and is given in the next theorem.

Theorem 3 (Basic sensitivity theorem [12]). Let θ_0 be a vector of parameter values and (x_0, λ_0, μ_0) a KKT triple of (4) at θ_0 such that the gradients of the active constraints are linearly independent, strict complementary slackness holds, and the second-order sufficient optimality conditions are satisfied at x_0. Then, in a neighborhood of θ_0, there exists a unique, once continuously differentiable function [x(θ), λ(θ), μ(θ)] satisfying (4) with [x(θ_0), λ(θ_0), μ(θ_0)] = [x_0, λ_0, μ_0], where x(θ) is a locally unique optimizer of (3), and

\[
\frac{d}{d\theta}\big[x(\theta), \lambda(\theta), \mu(\theta)\big]^T \Big|_{\theta = \theta_0} = -M_0^{-1} N_0,
\tag{5}
\]

where M_0 and N_0 denote the Jacobian of system (4) with respect to (x, λ, μ) and with respect to θ, respectively, evaluated at (x_0, λ_0, μ_0, θ_0).
In [6], Dua et al. proposed an algorithm to solve (5) over the entire range of the varying parameters for general convex problems. The algorithm is based on approximating the nonlinear optimal expression x = γ*(θ) by a set of first-order approximations.
Corollary 1 (First-order estimation of x(θ), λ(θ), μ(θ) near θ = θ_0 [12]). Under the assumptions of Theorem 3, a first-order approximation of [x(θ), λ(θ), μ(θ)] in the neighborhood of θ_0 is

\[
\big[x(\theta), \lambda(\theta), \mu(\theta)\big]^T = \big[x_0, \lambda_0, \mu_0\big]^T - M_0^{-1} N_0\,(\theta - \theta_0),
\tag{6}
\]

where M_0 and N_0 are as in Theorem 3.

The space of θ where solution (6) remains optimal is called the critical region, CR, and is obtained from feasibility and optimality conditions. Feasibility is ensured by substituting x(θ) into the inactive inequalities of problem (3), whereas optimality requires λ(θ) ≥ 0, where λ(θ) corresponds to the vector of active inequalities; this results in a set of parametric constraints. Each piecewise linear approximation is thus confined to a region defined by feasibility and optimality conditions [6]. If ǧ denotes the non-active constraints and λ̌(θ) the Lagrange multipliers of the active constraints, then

\[
\mathrm{CR} = \{\theta : \check{g}(x(\theta), \theta) \le 0, \;\; \check{\lambda}(\theta) \ge 0\}.
\]

Consequently, the explicit expressions are given by a conditional piecewise linear function [6]:

\[
x(\theta) = C_i + K_i\,\theta \quad \text{if } \theta \in \mathrm{CR}_i, \quad i = 1, \dots, N,
\tag{7}
\]

where the C_i are column vectors, the K_i are real matrices, and CR_i ⊆ R^m denotes the i-th critical region.
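To show how the affine map (6) is assembled in practice, the sketch below builds M_0 and N_0 by finite differences on the residual of system (4) around a known KKT triple. The function names and the finite-difference construction are our own illustration; an implementation may instead form the Jacobians analytically.

```python
import numpy as np

def kkt_residual(x, lam, mu, theta, grad_f, g, grad_g, h, grad_h):
    """Residual of system (4): stationarity, complementarity, feasibility."""
    stat = grad_f(x, theta)
    stat = stat + sum(l * gg for l, gg in zip(lam, grad_g(x, theta)))
    stat = stat + sum(m * gh for m, gh in zip(mu, grad_h(x, theta)))
    return np.concatenate([stat, lam * g(x, theta), h(x, theta)])

def first_order_solution(x0, lam0, mu0, theta0, residual, eps=1e-6):
    """Return theta -> [x, lam, mu] from the expansion (6); M0 and N0 are
    the Jacobians of the KKT residual w.r.t. (x, lam, mu) and theta."""
    z0 = np.concatenate([x0, lam0, mu0])
    n, p = len(x0), len(lam0)
    r = lambda z, th: residual(z[:n], z[n:n + p], z[n + p:], th)

    def jac(fun, v):
        # Central finite differences, column by column.
        cols = []
        for k in range(len(v)):
            dv = np.zeros_like(v)
            dv[k] = eps
            cols.append((fun(v + dv) - fun(v - dv)) / (2 * eps))
        return np.column_stack(cols)

    M0 = jac(lambda z: r(z, theta0), z0)      # Jacobian w.r.t. (x, lam, mu)
    N0 = jac(lambda th: r(z0, th), theta0)    # Jacobian w.r.t. theta
    return lambda theta: z0 - np.linalg.solve(M0, N0 @ (theta - theta0))
```

The critical region of this approximation is then read off, as above, from ǧ(x(θ), θ) ≤ 0 and λ̌(θ) ≥ 0.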
For problems involving convex f, g and h, the parametric solutions described by equation (7) within the corresponding critical regions are both necessary and sufficient; as a result, the existing algorithms proposed in [1,2,4,5,6,7] work efficiently. However, in the general nonlinear case the convexity assumption on f, g and h is usually relaxed to include non-convex problems as long as the KKT necessary conditions are met, and the KKT conditions can then no longer guarantee global optimality for a fixed parameter θ = θ_0. Hence, methods based on sensitivity theory for the solution of general nonlinear multi-parametric programming problems are bound to local solutions.
Recently, Pistikopoulos et al. [3] proposed a solution strategy for special non-convex multi-parametric programming problems, based on a branch-and-bound algorithm, to locate the global parametric solution. It works well when the objective function contains only functions that are explicitly affine in x and θ and θ is a scalar parameter. When nonlinear (even polynomial) functions appear in the objective, the bounding procedure becomes unmanageable, and the complexity increases significantly as the number of parameters grows. A further difficulty arises when general non-convex terms appear, in addition to the special non-convex terms, in an objective defined over a polyhedral region.
In the following section we propose a new algorithmic approach that approximates the solution of multi-parametric non-convex programming problems.

Mathematical aspects of the proposed algorithm
In this section we present a solution approach for non-convex multi-parametric programming problems optimized over a convex polyhedral region. The algorithm can, however, be extended to arbitrary nonlinear constraint functions by moving each nonlinear constraint into the objective function through penalty terms.
To proceed with the presentation, consider the following multi-parametric non-convex programming problem:

\[
z(\theta) = \min_{x}\; cf(x, \theta) + \sum_{i<j} b_{ij} x_i x_j + c(x) + f_n(x)
\quad \text{s.t.} \quad
g(x, \theta) \le 0, \quad h(x, \theta) = 0, \quad x \in [x^L, x^U], \;\; \theta \in [\theta^L, \theta^U],
\tag{8}
\]

where f_n and c are, respectively, a generic non-convex function and a linear combination of concave functions in x, cf is a convex term, and g and h define a convex polyhedron.
Problem (8) can be reduced to a standard nonlinear programming problem by fixing the parameter vector at a feasible value θ = θ_0:

\[
\min_{x}\; cf(x, \theta_0) + \sum_{i<j} b_{ij} x_i x_j + c(x) + f_n(x)
\quad \text{s.t.} \quad g(x, \theta_0) \le 0, \quad h(x, \theta_0) = 0.
\tag{9}
\]

As discussed in Subsection 2.1, the generic non-convex, bilinear, and concave terms can be underestimated by F(x, θ_0), VexR[b_ij x_i x_j] and CEnve, respectively. Thus, the convex under-estimator of problem (9) can be formulated as

\[
\min_{x}\; cf(x, \theta_0) + \sum_{i<j} \mathrm{VexR}[b_{ij} x_i x_j] + \mathrm{CEnve}[c](x) + F(x, \theta_0)
\quad \text{s.t.} \quad g(x, \theta_0) \le 0, \quad h(x, \theta_0) = 0.
\tag{10}
\]

One then solves problem (9) locally to obtain an upper bound, Ẑ, and problem (10) to obtain a lower bound, Ž, on the exact solution of the original problem around θ_0. If problem (10) is infeasible or its objective value is above Ẑ, the region is fathomed for θ_0. Otherwise, Ẑ and Ž (note that both are real numbers) are compared. If the difference is greater than the pre-defined tolerance ǫ, the lower bound Ž is stored along with the solution set, and the optimization variable with the longest side, from among those contributing to the non-convexity of the problem, is branched as follows:

\[
R_1 = \{x \in [x^L, x^U] : x_k \le (x_k^L + x_k^U)/2\},
\tag{11}
\]
and
\[
R_2 = \{x \in [x^L, x^U] : x_k \ge (x_k^L + x_k^U)/2\}.
\tag{12}
\]

Solving problem (9) locally over both sub-rectangles gives two upper bounds; comparing them with the previous upper bound, the minimum of the three is taken as the current upper bound CẐ.
Problem (9) is then under-estimated again inside each of the two sub-rectangles R_1 and R_2, exactly as in (10) but with the tighter variable bounds of each sub-rectangle. One solves the resulting convex problems and stores the objective values along with the solution set whenever the obtained solution is feasible and the objective value is less than the upper bound Ẑ; otherwise, the respective sub-rectangle is fathomed for θ = θ_0. The minimum over the stored solution set, Ž, is chosen as the lower bound and compared with the current upper bound Ẑ. If Ẑ − Ž > ǫ, this lower bound is discarded and the same procedure is repeated, with the working variable bounds corresponding to the previously found minimum lower bound (i.e., the variable box corresponding to the discarded minimum value is branched further to obtain new lower bounds), until the difference falls below the pre-defined tolerance, i.e., Ẑ − Ž ≤ ǫ.
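The bounding scheme just described amounts to the following best-first branch-and-bound sketch for a fixed θ = θ_0. Here local_solve and convex_lower_bound stand for problem (9) solved locally and its under-estimator (10); both names are our own, and each is assumed to return an (objective, x) pair or None when infeasible. For brevity the sketch bisects the longest side of the box, whereas the procedure above restricts branching to variables that contribute to the non-convexity.

```python
import heapq

def branch_and_bound(local_solve, convex_lower_bound, box, eps):
    """box is a list of (xL, xU) pairs; returns the incumbent (Z_hat, x)."""
    z_up, x_up = local_solve(box) or (float("inf"), None)
    heap = []
    first = convex_lower_bound(box)
    if first is not None and first[0] <= z_up:
        heapq.heappush(heap, (first[0], box))
    while heap:
        z_low, region = heapq.heappop(heap)      # current minimum lower bound
        if z_up - z_low <= eps:                  # stopping criterion (Theorem 4)
            break
        k = max(range(len(region)),              # variable with the longest side
                key=lambda i: region[i][1] - region[i][0])
        mid = 0.5 * (region[k][0] + region[k][1])
        for sub in (region[:k] + [(region[k][0], mid)] + region[k + 1:],
                    region[:k] + [(mid, region[k][1])] + region[k + 1:]):
            up = local_solve(sub)                # upper bound on the sub-box
            if up is not None and up[0] < z_up:
                z_up, x_up = up
            low = convex_lower_bound(sub)        # lower bound on the sub-box
            if low is not None and low[0] <= z_up:
                heapq.heappush(heap, (low[0], sub))
            # otherwise the sub-box is fathomed
    return z_up, x_up
```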
Theorem 4. The difference between Ẑ and Ž converges within a finite number of branching steps.
Proof. To prove the statement of the theorem, we first show that the lower bounds form an increasing sequence. To this end, let Ž_k denote the lower bound at the k-th iteration (k-th branching step). By the above procedure, the current lower bound is CŽ_k = min{min{Ž_i}_{i=1}^{k−1}, Ž_k}, and this minimum value is discarded whenever it does not satisfy the stopping criterion, so only larger values remain in the solution set. Similarly, at step k + 1 we have CŽ_{k+1} = min{min{Ž_i}_{i=1}^{k} \ {CŽ_k}, Ž_{k+1}} (since CŽ_k has already been discarded from the set), and as the size of the rectangular domains (such as R_1 and R_2) decreases, the maximum separation between the original non-convex function and its convex under-estimator decreases. This implies that CŽ_{k+1} ≥ CŽ_k, so the sequence of current lower bounds is increasing.
On the other hand, let Ẑ_k denote the upper bound at the k-th iteration. Again by the above procedure, the current upper bound is CẐ_k = min{min{Ẑ_i}_{i=1}^{k−1}, Ẑ_k}, and at the (k + 1)-th step CẐ_{k+1} = min{min{Ẑ_i}_{i=1}^{k}, Ẑ_{k+1}}. But CẐ_k = min{Ẑ_i}_{i=1}^{k}, which implies CẐ_{k+1} = min{CẐ_k, Ẑ_{k+1}}. Hence the current upper bounds form a decreasing sequence, i.e., CẐ_{k+1} ≤ CẐ_k. Therefore, since CẐ_k decreases and CŽ_k increases as k increases, the difference CẐ_k − CŽ_k is a decreasing sequence, and hence it converges within a finite number of branching steps.
If the difference falls below the given tolerance at the k-th branching step (sub-rectangle), the approximate solution of the k-th under-estimator subproblem is taken as a KKT triple (x_0, λ_0, μ_0) for θ = θ_0, and this subproblem is used to define an approximate parametric solution of the original problem in the neighborhood of θ_0 as

\[
\big[x(\theta), \lambda(\theta), \mu(\theta)\big]^T = \big[x_0, \lambda_0, \mu_0\big]^T - M_0^{-1} N_0\,(\theta - \theta_0), \qquad \theta \in \mathrm{CR},
\tag{13}
\]

where CR is a convex polyhedron, called the critical region, obtained as described in Subsection 2.2.

Theorem 5. Expression (13) is an approximate parametric solution of problem (8) over the corresponding critical region CR.
If CR does not cover the whole parametric region, the same procedure is repeated with a new feasible parameter vector θ = θ_0, taken from the rest of the parametric region, until the parametric region has been explored completely.
To define the rest of the parametric region, let CR^IG = [θ^L, θ^U] be the overall parametric region and let the inequalities {c_1 ≤ 0, c_2 ≤ 0, c_3 ≤ 0} define CR. The rest of the parametric region is then CR^rest = CR^IG \ CR, which is obtained by reversing the inequalities of CR one by one. For example, consider the inequality c_1 ≤ 0; the corresponding piece of the rest region is obtained by reversing its sign and removing redundant constraints in CR^IG, which gives

\[
\mathrm{CR}^{rest}_1 = \{\theta \in \mathrm{CR}^{IG} : c_1 \ge 0\}, \qquad \theta = (\theta_1, \theta_2).
\]

Treating the remaining inequalities in the same way, the total rest region is given by CR^rest = ∪_{i=1}^{3} CR_i^rest, where CR_2^rest and CR_3^rest are given in Table 1.
Table 1. Definition of the rest regions

    Region        Definition
    CR_1^rest     c_1 ≥ 0
    CR_2^rest     c_1 ≤ 0, c_2 ≥ 0
    CR_3^rest     c_1 ≤ 0, c_2 ≤ 0, c_3 ≥ 0

Theorem 6. Let X ⊆ R^m be a polyhedron, and let CR^Q = {θ ∈ X : g_i(θ) ≤ 0, i = 1, ..., s} be a polyhedral subregion of X. Define CR_i^rest = {θ ∈ X : g_i(θ) ≥ 0 and g_j(θ) ≤ 0 for all j < i}, for i = 1, ..., s, and CR^rest = ∪_{i=1}^{s} CR_i^rest. Then (1) CR^rest ∪ CR^Q = X, and (2) the regions CR^Q, CR_1^rest, ..., CR_s^rest have pairwise disjoint interiors.
Proof. (1) Since CR_i^rest ⊆ X for all i and CR^Q ⊆ X, it is clear that CR^rest ∪ CR^Q ⊆ X. To show the backward inclusion, let θ ∈ X and assume that θ ∉ CR^Q. Then there exists an index i such that g_i(θ) > 0; taking the smallest such i, we have g_j(θ) ≤ 0 for all j < i, and hence θ ∈ CR_i^rest ⊆ CR^rest. Therefore X ⊆ CR^rest ∪ CR^Q. (2) The disjointness of the interiors follows directly from the construction of the CR_i^rest.

Theorem 6 shows that the parametric region CR^IG can be explored (partitioned) within a finite number of feasible choices of the parameter θ = θ_0. This indicates that the above procedure terminates after a finite number of partitions of the parameter space.
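When CR is available in halfspace form {θ : Aθ ≤ b}, the partition of Theorem 6 reduces to simple bookkeeping, as in the sketch below (our own illustration; the intersection with CR^IG and the removal of redundant constraints via LPs are omitted for brevity).

```python
import numpy as np

def rest_regions(A, b):
    """CR = {theta : A theta <= b}.  Region i keeps the first i-1
    inequalities and reverses the i-th, as in Table 1, so that the union
    of the returned regions within CR_IG is the set difference of CR_IG
    and CR (Theorem 6)."""
    regions = []
    for i in range(len(b)):
        Ai, bi = A[: i + 1].copy(), b[: i + 1].copy()
        Ai[i], bi[i] = -Ai[i], -bi[i]   # flip a_i.theta <= b_i to >=
        regions.append((Ai, bi))        # region i: Ai theta <= bi
    return regions

# Example with CR = {c1 <= 0, c2 <= 0, c3 <= 0} for theta in R^2:
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, 0.0])
for Ai, bi in rest_regions(A, b):
    print(Ai, bi)
```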

Algorithm for multi-parametric non-convex programming problems
The steps of the proposed global optimization framework for a multi-parametric non-convex programming problem with polyhedral constraints are as follows:

Step 1: Initialize the upper bound Ẑ = +∞, the optimization variable box [x^L, x^U], the parameter space [θ^L, θ^U], and the tolerance ǫ.
Step 2: Reduce problem (8) to a standard non-convex problem by fixing a feasible parameter θ = θ_0:

\[
\min_{x}\; cf(x, \theta_0) + \sum_{i<j} b_{ij} x_i x_j + c(x) + f_n(x)
\quad \text{s.t.} \quad g(x, \theta_0) \le 0, \quad h(x, \theta_0) = 0.
\tag{14}
\]

Step 3: Solve problem (14) locally within the current optimization variable bounds. If the solution is feasible, update the current upper bound as CẐ = min(current objective value, Ẑ).
Step 4: Replace each non-convex term in problem (14) by its tight convex under-estimator and solve the resulting convex problem. If the solution is feasible and Ž is less than Ẑ, store the solution in the set of lower bounds; otherwise, fathom the region for θ = θ_0.
Step 5: Branch (bisect) the optimization variable with the longest side, from among those contributing to the non-convexity of the problem, into the sub-rectangles R_1 and R_2 of (11) and (12).

Step 6: Solve problem (14) locally inside each of the two sub-rectangles to obtain upper bounds, say Ẑ_2 and Ẑ_3. Compare the obtained upper bounds with the previous current upper bound CẐ and take the minimum as the new current upper bound CẐ.
Step 7: Under-estimate every non-convex term by its tight lower-bounding function in each sub-rectangle R_1 and R_2, as discussed in Subsection 2.1, and solve the resulting convex problems. If the solutions are feasible and less than the current upper bound, store them in the set of lower bounds; otherwise, fathom the respective sub-rectangle for θ = θ_0.
Step 8: From the solution set of Step 7, take the minimum as Ž and compare it with the current upper bound CẐ. If CẐ − Ž ≤ ǫ, go to Step 9; otherwise, discard Ž from the solution set and go to Step 5 with the sub-rectangle containing the previously found minimum lower bound.
Step 9: Compute M_0 and N_0, as discussed in Subsection 2.2, from the respective tighter under-estimator subproblem.
Step 10: Characterize the parametric optimal solution x(θ), the Lagrange multipliers λ(θ) and μ(θ), and the critical region in which this solution is valid, and remove any redundant constraints from this region.
Step 11: Define the rest of the parameter region by reversing, one by one, the inequalities of the hyperplanes defining the critical region, and again remove any redundancy, as discussed above.
Step 12: For each new region NR_i, set CR^IG = NR_i and store the regions as CR^rest = ∪_i NR_i.

Step 13: Compute the Chebyshev centre θ_0 and radius r of CR^IG. If r ≤ 0 and CR^rest is empty, exit; otherwise, go to Step 2 with a new θ = θ_0 taken from the rest of the parametric region.
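Step 13 uses the standard linear-programming characterization of the Chebyshev centre of a polyhedron {θ : Aθ ≤ b}: maximize r subject to a_i·θ + ‖a_i‖r ≤ b_i for every row a_i. A minimal sketch with scipy.optimize.linprog follows (the choice of solver is ours; the paper's implementation is in MATLAB).

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    """Return (theta0, r) for the region {theta : A theta <= b};
    a non-positive r signals an empty or lower-dimensional region."""
    norms = np.linalg.norm(A, axis=1)
    c = np.zeros(A.shape[1] + 1)
    c[-1] = -1.0                                 # maximize r
    A_ub = np.hstack([A, norms[:, None]])        # a_i.theta + ||a_i||*r <= b_i
    res = linprog(c, A_ub=A_ub, b_ub=b,
                  bounds=[(None, None)] * (A.shape[1] + 1))
    if not res.success:
        return None, -np.inf
    return res.x[:-1], res.x[-1]

# Example: the triangle {theta1 >= 0, theta2 >= 0, theta1 + theta2 <= 1}
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
theta0, r = chebyshev_center(A, b)
print(theta0, r)   # centre ~ (0.293, 0.293), radius ~ 0.293
```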
Corollary 2. Let CR^rest ∪ CR^Q = X, with CR^Q ∩ CR_i^rest = ∅ for all i and CR_i^rest ∩ CR_j^rest = ∅ for all i ≠ j, be a partition of X, and let the difference between Ẑ and Ž be a decreasing sequence. Then the above algorithm converges.
Proof. This is an immediate consequence of Theorem 4 and Theorem 6.

Example 1
Consider the following multi-parametric non-convex programming problem:

Solving problem (15) using the proposed algorithm with ǫ = 10^{−16}, we obtained the following parametric solutions; the union of the corresponding critical regions is depicted in Figure 1, and the algorithm converged in a CPU time of 19.8433.

Example 2
The second test problem has a general non-convex formulation in the objective function and is described in the form:

Solving problem (16) using the proposed method with ǫ = 10^{−16}, one obtains the following parametric solutions with their corresponding critical regions; the total explored parametric region is shown in Figure 2, and the algorithm converged in a CPU time of 20.6389.

Example 3
Consider the following problem with a generic non-convex formulation in its objective function:

After solving problem (17) using the algorithm described in this paper, we obtained the parametric solution given below; the corresponding critical region is shown in Figure 3. For this problem, the algorithm converged within the tolerance ǫ = 10^{−16} in a CPU time of 42.5103.

Conclusion
In this paper we have given a detailed description of an approximate solution algorithm for multi-parametric non-convex programming problems with convex polyhedral constraints. The approach is constructed through successive convex relaxation of each of the non-convex terms, followed by the application of sensitivity analysis theory. Special attention is given to general non-convex formulations in the objective function over convex polyhedral constraints. The algorithm has been tested on a variety of example problems with general non-convex objective functions. The proposed algorithm can be extended to general, twice continuously differentiable nonlinear multi-parametric programming models by transforming the nonlinear constraints into the objective through penalty terms, with the corresponding mathematical modifications to the presented algorithm.