An Efficient Polynomial Time Algorithm for a Class of Generalized Linear Multiplicative Programs with Positive Exponents
Abstract
This paper presents a region-division-linearization algorithm for solving a class of generalized linear multiplicative programs (GLMPs) with positive exponents. In this algorithm, the original nonconvex problem GLMP is transformed into a series of linear programming problems by dividing the outer space of the problem GLMP into a finite number of rectangles. A new two-stage acceleration technique is introduced to improve the computational efficiency of the algorithm; it removes from the outer space regions that cannot contain a global optimal solution of the problem GLMP. In addition, the global convergence of the algorithm is discussed, and its computational complexity is investigated. It is shown that the algorithm is a fully polynomial time approximation scheme. Finally, the numerical results show that the algorithm is effective and feasible.
1. Introduction
Here, p ≥ 2, X is a nonempty bounded closed set, A ∈ ℝm×n, b ∈ ℝm, ci ∈ ℝn, di ∈ ℝ, and αi > 0. T represents the transpose of a vector (e.g., ciT represents the transpose of the vector ci). Besides, we assume that ciTx + di > 0 for any x ∈ X and each i = 1, 2, …, p.
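For readability, the problem class can be written out explicitly; the following display is our reconstruction assembled from the data A, b, ci, di, and αi introduced above (the original display is not reproduced in this version of the text):

```latex
(\mathrm{GLMP}):\quad
\min_{x \in X}\ \prod_{i=1}^{p}\left(c_i^{T}x + d_i\right)^{\alpha_i},
\qquad
X=\{x\in\mathbb{R}^{n}: Ax\le b\},\quad \alpha_i>0,
```

with the positivity assumption ciTx + di > 0 on X.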
The problem GLMP usually has multiple nonglobal local optimal solutions and belongs to a class of NP-hard problems [1]; it is widely applied in finance optimization [2, 3], robust optimization [4], microeconomics [5], and multiobjective decision making [6, 7]. In addition, the GLMP covers a wide range of mathematical programming categories, such as linear multiplicative programming, quadratic programming, bilinear programming, and so on. For these and other reasons, the GLMP has attracted the attention of many experts, scholars, and engineering practitioners and has set off a new wave of research in global optimization. With practical problems relying increasingly on optimization models, local optimization theory and global optimization algorithms have made remarkable progress. However, compared with local optimization, the theory of global optimization algorithms is still far less developed. Many methods have been proposed for this kind of problem, such as the level set algorithm [8], heuristic algorithms [9, 10], branch and bound algorithms [11–13], the outer approximation algorithm [14], the parametric simplex algorithm [15], and so on, but these methods do not give the computational complexity of the algorithm. In addition, Depetrini and Locatelli [16] considered the problem of minimizing the product of two affine functions over a polyhedron and proposed a polynomial time approximation algorithm. Locatelli [17] presented an approximation algorithm for more general global optimization problems and derived its computational complexity, but numerical results for that algorithm are lacking. Recently, Shen and Wang [18] also proposed a fully polynomial time approximation algorithm for globally solving the problem GLMP, but without any acceleration technique.
Moreover, for a more comprehensive overview of the GLMP, we encourage the readers to go through the more detailed literature [8, 19–21].
In this paper, in order to solve the GLMP, two approximation algorithms are proposed. By establishing a nonuniform grid, the process of solving the original problem is transformed into solving a series of linear programming problems, and it is proved that the proposed algorithms obtain a global ε−approximation solution of GLMP. Besides, we put forward a two-stage acceleration technique to speed up Algorithm 1, which yields Algorithm 2. By discussing the computational complexity of the algorithms, it is shown that both are polynomial time approximation algorithms. Numerical experiments show that the performance of Algorithm 2 is clearly better than that of Algorithm 1, and the numerical results in Table 2 show that, in solving Problems 1–3, Algorithm 2 uses less CPU running time and fewer iterations than the methods of [17, 18].
The rest of this paper is organized as follows. In Section 2, we first transform the problem GLMP into its equivalent optimization problem EOP and give the corresponding region-division-linearization technique. Section 3 presents the global ε−approximation algorithm for problem GLMP and establishes its convergence. In Section 4, we give the computational complexity of the proposed algorithm, and in Section 5 we carry out numerical experiments to verify its feasibility and effectiveness. The final section gives a brief summary.
2. Equivalence Problem and Its Linearization Technique
In this section, we will give the equivalent optimization problem EOP of the problem GLMP, then give the corresponding properties by studying the objective function of the EOP, and then explain the linearization technique of the equivalent problem.
2.1. Equivalent Problems and Their Properties
In order to solve the problem GLMP, the definition of a global ε−approximation solution is given below.
Definition 1. Let x∗ be a global optimal solution of the problem GLMP for a given precision ε ∈ (0,1). If a feasible point x̄ ∈ X satisfies f(x̄) ≤ (1 + ε)f(x∗), where f denotes the objective function of GLMP, then x̄ is referred to as a global ε−approximation solution of the problem GLMP.
To obtain the global ε−approximation solution for GLMP, let , li = minx∈Xfi(x).
Theorem 1. For each i = 1,2, …, p, let , , , . Then, for each i ∈ {1,2, …, p}, let ; then, fi(x∗) ≤ ui with .
Proof. It is easy to know that for any i ∈ {1,2, …, p}, there are li ≤ fi(x∗); thus,
Therefore, and then the conclusion holds.
Thus, according to Theorem 1, an upper bound ui of fi(x∗) is provided for each i = 1, 2, …, p.
On the basis of the above definition of li and ui, define the rectangle H as follows.
Moreover, the rectangle H is also called the outer space of the GLMP. Thus, by introducing variable , the problem GLMP is equivalent to the following problem P1.
Next, the equivalence of problems GLMP and P1 is established by Theorem 2.
Theorem 2. x∗ is a global optimal solution of problem GLMP if and only if (x∗, y∗) is an optimal solution of problem P1 and .
Proof. Suppose x∗ is a global optimal solution of the problem GLMP, and define y∗ accordingly. Then it is obvious that (x∗, y∗) is a feasible solution to P1. Suppose (x∗, y∗) is not an optimal solution of P1; then, there is at least one feasible solution of P1 which makes
Conversely, if (x∗, y∗) is an optimal solution of P1 and there is an i ∈ {1,2, …, p} that makes , let ; then is a feasible solution of P1 and
It is easy to understand from Theorem 2 that the problems GLMP and P1 are equivalent and have the same global optimal value.
Then, for a given y ∈ H, define the set
Then, the problem P1 is equivalent to the following equivalent optimization problem.
Theorem 3. y∗ is the global optimal solution of the problem EOP if and only if (x∗, y∗) is the optimal solution of P1 and .
Proof. Suppose (x∗, y∗) is an optimal solution of P1; then, according to Theorem 2, we know and y∗ ∈ H. In addition, . Suppose that y∗ is not the global optimal solution of the problem EOP; then there must be a such that and ; then, there must also be a such that . Then, is a feasible solution of P1 with , which contradicts the optimality of (x∗, y∗). Hence, the hypothesis does not hold, and y∗ is the global optimal solution of the problem EOP.
On the other hand, if y∗ is a global optimal solution of the problem EOP, then D(y∗) ≠ ∅, and there must be an x∗ ∈ D(y∗) such that (x∗, y∗) is a feasible solution of P1. Suppose (x∗, y∗) is not the global optimal solution of the problem P1; then, there must be an optimal solution of the problem P1 such that , so and , which contradicts the fact that y∗ is the global optimal solution of the problem EOP. Therefore, (x∗, y∗) is the global optimal solution of P1, and can be obtained from Theorem 2, which completes the proof.
Through Theorem 3, the problems EOP and P1 have the same global optimal value, so combined with Theorem 2, the problems EOP and GLMP are also equivalent. Therefore, we can solve the equivalent problem EOP instead of addressing the problem GLMP.
Next, we consider the following linear programming problem:
If D(y) ≠ ∅, the optimal solution of the problem LPy is denoted by xy, and let , ; then,
Furthermore, according to the Jensen inequality, we have
Theorem 4. Suppose x∗ ∈ X is a global optimal solution of the original problem GLMP; let ; then, and x∗ is also a global optimal solution of the problem .
Proof. Firstly, according to Theorems 2 and 3, we know that y∗ is a global optimal solution of the problem EOP. Then, by using formula (14) and the optimality of the global optimal solution y∗ of the EOP, we can see that x∗ is an optimal solution of the problem .
Next, the properties of the function g(y) over H are given by Theorem 5.
Theorem 5. For a given precision ε ∈ (0,1), let δ = (1 + ε)^(1/ρ); then, for any , there is
Proof. For all , according to the definition of D(y) and δ = (1 + ε)^(1/ρ) > 1, one can know .
If , for any , we have D(y) ≠ ∅; obviously, and for each i = 1,2, …, p. Thus,
Moreover, according to the definition of function g(y), ; thus,
And in combination with the formulas (17) and (18), we have
Further, through formula (19) and the definition of δ, we conclude that formula (16) holds, and formula (15) is, of course, also true.
If , the inequality clearly holds.
For all , if D(y) ≠ ∅, we have , and ; then,
Besides,
By using the definition of δ and formulas (20) and (21), one can infer that formulas (15) and (16) hold.
If D(y) = ∅ and g(y) = +∞, then formulas (15) and (16) obviously hold.
If , the problem is unsolvable, and for any , there is D(y) = ∅ and g(y) = +∞; thus formula (15) clearly holds, which completes the proof.
Theorem 5 shows that, for any , we can determine whether the set is nonempty by solving the linear programming problem , and then determine whether formula (16) holds.
2.2. Linearization Techniques
Compared with the problem GLMP, the objective function of the problem EOP is still nonconvex, but the space H containing the variable y is only p-dimensional. Therefore, based on the above discussion, in order to solve the EOP, for a given ε ∈ (0,1), we first split the outer space H in each dimension at the ratio δ = (1 + ε)^(1/ρ), thus producing several small rectangles.
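As an illustration, this subdivision can be sketched in code. The sketch below is ours rather than the authors' implementation; the bounds [l1, u1] = [1, 4], [l2, u2] = [2, 5], the dimension p = 2, and ρ (taken here as the sum of the exponents αi) are assumed values:

```python
def geometric_axis(lo, hi, delta):
    """Grid points lo, lo*delta, lo*delta**2, ... until hi is covered, so the
    ratio of two consecutive grid coordinates in this dimension is delta."""
    pts = [lo]
    while pts[-1] < hi:
        pts.append(pts[-1] * delta)
    return pts

eps = 0.1
rho = 2.0                              # assumed: rho = sum of the exponents alpha_i
delta = (1.0 + eps) ** (1.0 / rho)     # subdivision ratio delta = (1 + eps)^(1/rho)
axes = [geometric_axis(1.0, 4.0, delta),   # dimension 1: [l_1, u_1] = [1, 4]
        geometric_axis(2.0, 5.0, delta)]   # dimension 2: [l_2, u_2] = [2, 5]
```

The vertex set Bδ is then the Cartesian product of the per-dimension axes, and the number of points per dimension grows like ln(ui/li)/ln δ.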
3. Analysis of Algorithm and Its Computational Complexity
This section presents an approximation algorithm based on decomposition and linearization for solving the problem EOP, followed by an analysis of its computational complexity.
3.1. Approximate Algorithm
To solve the EOP, we subdivide the external space H into a finite number of small rectangles with ratio δ and put all the vertices of these small rectangles into the set Bδ.
Algorithm 1 (original algorithm):
(1) Step 0 (initialization). Set ε ∈ (0,1), δ = (1 + ε)^(1/ρ), F = +∞, k = 0. By using formulas (22) and (23), subdivide H into smaller rectangles so that the ratio of two consecutive segments in each dimension is δ. Represent the vertex of each small rectangle as ν = (ν1, ν2, …, νp) and store it in the set Bδ.
(2) Step 1. Select a point ν from Bδ, solve the linear programming problem (LPν), and let Bδ = Bδ\ν.
(3) Step 2. If the problem (LPν) is solvable, then D(ν) ≠ ∅, and let ; if g(ν) < F, let F = g(ν), , ; if Bδ ≠ ∅, set k = k + 1 and go to Step 1; otherwise, the algorithm terminates; let
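The enumeration loop of Algorithm 1 can be sketched as follows. This is only an illustrative implementation under assumptions: the small instance (A, b, ci, di, αi) is made up, g(ν) is taken as ∏ νi^αi, and D(ν) is taken as {x ∈ X : ciTx + di ≤ νi}, whose nonemptiness is checked with `scipy.optimize.linprog`; the paper's own subproblem (LPν) may differ in detail:

```python
import itertools
import math

import numpy as np
from scipy.optimize import linprog

# Illustrative instance (not from the paper):
# minimize (x1 + 1)^1 * (x2 + 1)^1  subject to  x1 + x2 <= 4, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([4.0])
c = np.array([[1.0, 0.0], [0.0, 1.0]])
d = np.array([1.0, 1.0])
alpha = np.array([1.0, 1.0])

eps = 0.5
rho = alpha.sum()
delta = (1.0 + eps) ** (1.0 / rho)          # subdivision ratio

# Outer-space bounds l_i = min_X f_i(x), u_i = max_X f_i(x), each by an LP
# (linprog uses x >= 0 by default).
l, u = [], []
for ci, di in zip(c, d):
    l.append(linprog(ci, A_ub=A, b_ub=b).fun + di)
    u.append(-linprog(-ci, A_ub=A, b_ub=b).fun + di)

def axis(lo, hi):
    pts = [lo]
    while pts[-1] < hi:
        pts.append(pts[-1] * delta)
    return pts

best_val, best_x = math.inf, None
for v in itertools.product(*(axis(lo, hi) for lo, hi in zip(l, u))):
    # D(v) = {x in X : c_i^T x + d_i <= v_i}: nonemptiness via a feasibility LP.
    res = linprog(np.zeros(2), A_ub=np.vstack([A, c]),
                  b_ub=np.concatenate([b, np.asarray(v) - d]))
    if res.status == 0:                     # D(v) is nonempty
        g = math.prod(vi ** ai for vi, ai in zip(v, alpha))
        if g < best_val:
            best_val, best_x = g, res.x
```

On this toy instance the loop returns the exact optimal value 1, attained at the grid vertex ν = (1, 1) with x = (0, 0).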
Theorem 6. For a given precision ε ∈ (0,1), let δ = (1 + ε)^(1/ρ), , and be an optimal solution of the linear programming problem . Then, Algorithm 1 will obtain a global ε−approximation solution for problem GLMP, i.e.,
Proof. Let
According to Theorem 1, we have
Then, formula (32) implies that , so there must be a ν∗ ∈ Bδ which makes
So, using Theorem 5 on the small rectangle [(ν∗/δ), ν∗], there will be
Thus,
Noting that , we can know
Since is the optimal solution to the linear programming problem , let
Apparently, . So, by taking advantage of the formula (16) in Theorem 5, we have
Therefore, by integrating formulas (35) and (38) and combining δ = (1 + ε)^(1/ρ), we can obtain
Remark 1. According to Theorem 6, if y∗ ∈ Bδ, then from Theorem 5, the optimal solution of the linear programming problem is exactly the global optimal solution of the original problem GLMP.
Through Theorem 6, we can see that for a given precision ε ∈ (0,1), Algorithm 1 will obtain a global ε−approximation solution to the problem GLMP. Moreover, Remark 1 also shows that if y∗ ∈ Bδ, then Algorithm 1 will find a global optimal solution of the problem GLMP exactly.
3.2. Accelerating Techniques
Algorithm 1 shows that, for any ν ∈ Bδ, the linear programming problem (LPν) must be solved in order to verify that D(ν) is nonempty. Hence, the computational cost of Algorithm 1 depends on the number of points in the set Bδ. The acceleration technique proposed below discards points of Bδ that need not be considered and restricts attention to the region containing the global optimal solution of the problem EOP. The detailed process is given below.
Based on the above discussion, we will give Propositions 1 and 2 to clarify the acceleration techniques of the algorithm.
Proposition 1. The global optimal solution of the problem EOP cannot be obtained on the set if an i ∈ {1,2, …, p} makes , where
Proof. If , then there must be , and thus there is
With Proposition 1, we generate a new rectangle Hk+1 and vertex set , i.e., for each i = 1,2, …, p, let
Then, Hk+1 = [lk+1, uk+1] with .
Moreover, the above rules may produce a small rectangular vertex set with relatively few new elements, but there is still , so we then give Proposition 2 to delete the other unconsidered elements in .
Proposition 2. If is the best known solution to the problem EOP, is the optimal solution to the linear programming problem ; for each i = 1,2, …, p, let , and define the set
Then, for any , the EOP cannot get a better solution than .
Proof. Since is the optimal solution to a linear programming problem , then there is at least one point in the set , so . For arbitrary , obviously , and thus D(ν) ≠ ∅. According to the definition of the function g(y), for each , the objective function value of the EOP meets
Next, for a given ε ∈ (0,1) and δ = (1 + ε)^(1/ρ), make use of Proposition 2; let
Through the expression of in (46), the set is defined as follows.
For convenience of notation, let . This means that, in order to obtain a global ε−approximation solution for problem EOP, it is only necessary to solve at most linear programming subproblems (LPν) to determine whether D(ν) is nonempty, which determines the function value g(ν) at each vertex . Then, by using the set , the computational efficiency of Algorithm 1 is improved, leading to the following algorithm.
Note that Algorithm 2 simply removes vertices that cannot yield a global optimal solution; therefore, analogously to Theorem 6, Algorithm 2 also returns a global ε−approximation solution of the problems GLMP and EOP.
Algorithm 2 (improved algorithm):
(1) Step 0 (initialization). Set ε ∈ (0,1), δ = (1 + ε)^(1/ρ). By using formulas (22) and (23), H0 = H is subdivided into smaller rectangles, such that the ratio of two consecutive segments in each dimension is δ. Represent the vertex of each small rectangle as ν = (ν1, ν2, …, νp), which is stored in the set Bδ. Let F = +∞, T = ∅, , , k = 0.
(2) Step 1. Select a point from Ξk, solve the linear programming problem (LPν), and let T = T ∪ ν.
(3) Step 2. If the problem (LPν) is solvable, then D(ν) ≠ ∅, and let ; if g(ν) < F, let , , , . Use rules (45) and (46) to produce Hk+1 and , and use formulas (49) and (50) to obtain the set ; let , . If Ξk ≠ ∅, set k = k + 1 and go to Step 1; otherwise, the algorithm terminates; let
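The effect of the pruning in Step 2 can be illustrated by a small filtering routine. Since the sets in (45)–(50) are not reproduced here, this sketch of ours only captures the spirit of Proposition 2: a vertex ν whose value ∏ νi^αi already reaches the incumbent F cannot produce a better solution and is discarded (the function and data are illustrative assumptions):

```python
import math

def prune(vertices, alpha, incumbent):
    """Discard vertices whose value prod(v_i ** alpha_i) already reaches the
    incumbent: such a vertex cannot yield a better solution of the EOP."""
    return [v for v in vertices
            if math.prod(vi ** ai for vi, ai in zip(v, alpha)) < incumbent]

candidates = [(1.0, 1.0), (2.0, 3.0), (1.5, 1.0)]
remaining = prune(candidates, (1.0, 1.0), incumbent=2.0)
```

Here the vertex (2.0, 3.0) is discarded because its value 6 exceeds the incumbent 2, so no LP subproblem needs to be solved at it.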
4. Analysis of Computational Complexity of the Algorithm
We first give Lemma 1 to discuss the computational complexity of the two algorithms.
Lemma 1 (see [22]). Let λ be the maximum of the absolute values of all the elements of A, b, ci, di in problem GLMP; then, each component of any extreme point x0 of X can be expressed as , where 0 ≤ pj ≤ (nλ)^n, 0 < q ≤ (nλ)^n, j = 1,2, …, n.
Theorem 7. For a given p ≥ 2, in order to obtain a global ε−approximation solution to the problem GLMP, the upper limit of the time required for the proposed Algorithm 1 is
Proof. From formulas (22) and (23), we can see that the maximum number of points in the set Bδ is
Using the definition of in formula (52) and Lemma 1, we have
Furthermore, we also have
By means of above formulas (56) and (58), we can have
Using ε ∈ (0,1) and δ = (1 + ε)^(1/ρ) in Algorithm 1, together with the inequality (ε/2) < ln(1 + ε) < ε, we obtain
Then, by using the above formulas (55), (60), and (61), the upper bound on the number |Bδ| of points of Bδ is
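The counting argument behind this bound can be checked numerically. The following sketch (ours, with assumed bounds l and u) bounds |Bδ| by the product over dimensions of 1 + ⌈ln(ui/li)/ln δ⌉; the inequality ln(1 + ε) > ε/2 then turns this into a bound polynomial in 1/ε for fixed p:

```python
import math

def grid_size_bound(l, u, eps, rho):
    """Upper bound on |B_delta|: the geometric grid from l_i to u_i with
    ratio delta = (1 + eps)**(1/rho) has at most
    1 + ceil(ln(u_i/l_i) / ln(delta)) points in dimension i."""
    delta = (1.0 + eps) ** (1.0 / rho)
    count = 1
    for li, ui in zip(l, u):
        count *= 1 + math.ceil(math.log(ui / li) / math.log(delta))
    return count

# Since ln(1 + eps) > eps/2, ln(u_i/l_i)/ln(delta) <= 2*rho*ln(u_i/l_i)/eps,
# so for fixed p the grid size is polynomial in 1/eps.
bound = grid_size_bound([1.0, 1.0], [4.0, 5.0], eps=0.1, rho=2.0)
```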
Remark 2. Propositions 1 and 2 show that Algorithm 1 can be accelerated by removing the vertices of the small rectangles that need not be considered, which makes Algorithm 2 more resource-efficient than Algorithm 1; in other words, Algorithm 2 is an improvement on Algorithm 1. In the most extreme case (where the acceleration technique always fails), the upper bound on the CPU running time required by Algorithm 2 is the same as that of Algorithm 1. Therefore, Algorithm 2 is likewise a polynomial time approximation algorithm.
5. Numerical Experiments
This section tests the performance of the algorithms on several test problems. All testing procedures were performed in MATLAB (2012a) on a computer with an Intel(R) Core(TM) i5-2320 3.00 GHz processor, 4.00 GB of memory, and the Microsoft Windows 7 operating system.
Problem 4. (see [20])
Problem 5. (see [19])
Obviously, Problem 5 can be transformed into the following forms:
Problem 6. (see [8])
Problem 7.
The numerical results in Tables 1 and 2 show that Algorithms 1 and 2 can effectively solve the test problems known from the literature and obtain approximate solutions, so both algorithms are feasible.
Further, we carry out random numerical experiments on Problem 7 to explore the performance of the two algorithms. We set the convergence accuracy of the algorithms to 0.05. For each set of fixed parameters (p, m, n), we run the two algorithms 10 times for numerical comparison; the numerical results are given in Table 3. In Table 3, Avg (Std) time and Avg (Std) Iter represent the average (standard deviation) of the CPU running time and of the number of iterations, respectively, over the 10 runs. Table 3 shows that the computational performance of Algorithm 2 is better than that of Algorithm 1, mainly because our acceleration technique plays a significant role by deleting the vertices of small rectangles that need not be considered. Hence, we believe that this acceleration technique may be generalized to other approximation algorithms such as [17, 18, 20].
Moreover, under the condition that the fixed parameters (p, m) are invariant, the CPU running time of the two algorithms will increase with the scale n of Problem 7. Under the condition that the prefixed parameters (m, n) are invariant, the CPU running time and iterations of the two algorithms will grow with the number (p) of linear functions in the objective function of Problem 7.
Table 1: Comparison of the optimal solutions and optimal values for the test problems.

| Problem | Reference | Optimal solution | Optimal value |
|---|---|---|---|
| 1 | Locatelli [17] | (1.3148, 0.1396, 0.0000, 0.4233) | 0.890190 |
| | Shen and Wang [18] | (1.3148, 0.1396, 0.0000, 0.4233) | 0.890190 |
| | Liu and Zhao [8] | (1.3148, 0.13955, 2.6891 × 10^−14, 0.42329) | 0.890190 |
| | Algorithms 1/2 | (1.3148, 0.1396, 0.0000, 0.4233) | 0.890190 |
| 2 | Locatelli [17] | (3.000, 2.000) | 5.014514 |
| | Shen and Wang [18] | (3.000, 2.000) | 5.009309 |
| | Liu and Zhao [8] | (3.000, 2.000) | 5.009309 |
| | Algorithm 1 | (3.000, 2.000) | 5.009309 |
| | Algorithm 2 | (3.000, 2.000) | 5.009309 |
| 3 | Liu and Zhao [8] | (1, 1) | 997.661265 |
| | Locatelli [17] | (1, 1) | 997.661265 |
| | Shen and Wang [18] | (1, 1) | 997.661265 |
| | Algorithms 1/2 | (1, 1) | 997.661265 |
| 4 | Shen and Hang [20] | (2, 8) | 10 |
| | Algorithms 1/2 | (2, 8) | 10 |
| 5 | Zhang et al. [19] | (0.0, 8.0, 1.0, …) | 0.91235 |
| | Algorithm 1 | (0.0, 8.0, 1.0, …) | 0.91235 |
| | Algorithm 2 | (0.0, 8.0, 1.0, …) | 0.91235 |
| 6 | Liu and Zhao [8] | (1.25, 1) | 263.785989 |
| | Algorithm 1 | (1.25, 1) | 263.785989 |
| | Algorithm 2 | (1.25, 1) | 263.785989 |
Table 2: Comparison of the number of iterations and CPU time.

| Problem | Reference | Iter | Time (s) | ε |
|---|---|---|---|---|
| 1 | Locatelli [17] | 404 | 9.606 | 0.05 |
| | Shen and Wang [18] | 3 | 0.047 | 0.05 |
| | Algorithms 1/2 | 1 | 0.0149 | 0.05 |
| 2 | Locatelli [17] | 69 | 2.4960 | 0.15009 |
| | Shen and Wang [18] | 4 | 0.0800 | 0.15009 |
| | Algorithm 1 | 6 | 0.1024 | 0.15009 |
| | Algorithm 2 | 4 | 0.0657 | 0.15009 |
| 3 | Locatelli [17] | 5 | 1.126 | 0.2 |
| | Shen and Wang [18] | 4 | 0.085 | 0.2 |
| | Algorithms 1/2 | 1 | 0.0116 | 0.2 |
| 4 | Algorithms 1/2 | 1 | 0.0241 | 0.01 |
| 5 | Algorithm 1 | 797 | 47.5367 | 0.2 |
| | Algorithm 2 | 507 | 30.2111 | 0.2 |
| 6 | Algorithm 1 | 63 | 59.4304 | 0.2 |
| | Algorithm 2 | 37 | 35.6072 | 0.2 |
Table 3: Numerical results for random instances of Problem 7 (ε = 0.05, 10 runs per setting).

| (p, m, n) | Algorithm 1 Avg (Std) time | Algorithm 1 Avg (Std) Iter | Algorithm 2 Avg (Std) time | Algorithm 2 Avg (Std) Iter |
|---|---|---|---|---|
| (2, 10, 20) | 2.9068 (2.8062) | 75.8 (84.7700) | 1.9686 (1.9352) | 22.5 (27.3395) |
| (2, 20, 20) | 2.3784 (3.1017) | 52.6 (76.8936) | 1.7129 (2.1472) | 23.4 (35.1801) |
| (2, 22, 20) | 0.8663 (0.9232) | 18.1 (25.0257) | 0.6568 (0.6239) | 8 (10.0199) |
| (2, 20, 30) | 6.2414 (6.3274) | 165.2 (164.0334) | 3.4923 (3.7124) | 49 (61.764) |
| (2, 35, 50) | 3.9868 (4.4041) | 66.4 (78.2102) | 3.3046 (3.9017) | 32.3 (38.6886) |
| (2, 45, 60) | 5.8908 (5.4016) | 129.1 (125.2481) | 3.7409 (3.3526) | 40.5 (38.1084) |
| (2, 45, 100) | 6.6579 (5.9685) | 125.3 (123.7061) | 4.2665 (3.6485) | 40.2 (40.1343) |
| (2, 60, 100) | 7.8626 (6.3057) | 96.6 (99.4818) | 4.5517 (2.8324) | 26 (19.8343) |
| (2, 70, 100) | 9.1245 (8.1057) | 96.3 (104.6633) | 5.0942 (3.3528) | 23.6 (18.9430) |
| (2, 70, 120) | 11.2742 (13.2311) | 148 (215.2185) | 6.0341 (5.5968) | 35 (37.3256) |
| (2, 100, 10) | 0.1877 (0.1300) | 2.4 (2.9732) | 0.1542 (0.0663) | 1.3 (0.6403) |
| (2, 100, 50) | 0.9029 (0.5392) | 8.9 (7.0632) | 0.6542 (0.3654) | 3.9 (2.7730) |
| (2, 100, 100) | 9.8811 (8.0793) | 68.6 (55.5287) | 6.9462 (6.3403) | 24.1 (26.6813) |
| (2, 100, 150) | 15.4331 (10.2573) | 97.4 (75.1720) | 9.8838 (6.2545) | 30.8 (22.1260) |
| (2, 100, 200) | 27.1157 (25.3267) | 124.4 (130.8076) | 18.9561 (16.8612) | 49.2 (47.3810) |
| (2, 100, 250) | 64.8144 (72.0125) | 285.1 (353.7955) | 40.3711 (41.0487) | 91 (103.9576) |
| (2, 100, 300) | 87.5572 (100.4846) | 331 (398.8197) | 55.5067 (64.5147) | 117.2 (153.2434) |
| (2, 100, 400) | 132.4251 (176.2381) | 363.9 (581.9823) | 87.0321 (97.4482) | 130.7 (169.6585) |
| (2, 100, 500) | 158.4767 (183.7060) | 338.3 (493.9785) | 111.0958 (106.7086) | 133.8 (145.3470) |
| (2, 100, 700) | 331.3275 (351.8534) | 414.2 (546.8741) | 272.7311 (264.9189) | 227.1 (257.8927) |
| (2, 100, 1000) | 1020.9318 (880.7910) | 1063.6 (1019.7921) | 778.8913 (638.1782) | 522 (460.2479) |
| (3, 100, 10) | 4.4724 (7.4341) | 59.7 (117.2502) | 3.6522 (5.7934) | 35.2 (55.7867) |
| (3, 100, 50) | 90.8139 (74.9843) | 1062.4 (982.8277) | 57.8342 (55.5978) | 473.3 (564.6533) |
| (4, 100, 10) | 75.4301 (189.1250) | 1509.3 (4122.7180) | 52.9122 (122.0553) | 868.1 (2203.3283) |
6. Concluding Remarks
In this paper, we propose two polynomial time approximation algorithms that can be utilized to solve the problem GLMP globally, where Algorithm 2 is obtained by accelerating Algorithm 1 with the proposed acceleration technique. The numerical results show that both algorithms are effective and feasible, but the overall computational performance of Algorithm 2 is better than that of Algorithm 1, which shows that our acceleration technique is efficient and may be extended to other approximation algorithms such as those in [17, 18, 20].
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This research was supported by the National Natural Science Foundation of China (grant no. 11961001), the Construction Project of First-Class Subjects in Ningxia Higher Education (NXYLXK2017B09), and the Major Proprietary Funded Project of North Minzu University (ZDZX201901).
Data Availability
All data and models generated or used during the study are described in Section 5 of this article.