Volume 2021, Issue 1 8877037
Research Article
Open Access

An Efficient Polynomial Time Algorithm for a Class of Generalized Linear Multiplicative Programs with Positive Exponents

Bo Zhang,¹ YueLin Gao,²,³ Xia Liu,¹ and XiaoLi Huang³

¹School of Mathematics and Statistics, Ningxia University, Yinchuan 750021, China
²Ningxia Province Key Laboratory of Intelligent Information and Data Processing, North Minzu University, Yinchuan 750021, China
³School of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China

Corresponding author: YueLin Gao

First published: 26 February 2021
Academic Editor: Guoqiang Wang

Abstract

This paper presents a region-division-linearization algorithm for solving a class of generalized linear multiplicative programs (GLMPs) with positive exponents. In this algorithm, the original nonconvex problem GLMP is transformed into a series of linear programming problems by dividing the outer space of the problem GLMP into a finite number of rectangles. A new two-stage acceleration technique is put in place to improve the computational efficiency of the algorithm, which removes from the outer space those parts of the region that do not contain a global optimal solution of the problem GLMP. In addition, the global convergence of the algorithm is discussed, and the computational complexity of the algorithm is investigated. It is demonstrated that the algorithm is a fully polynomial time approximation scheme. Finally, the numerical results show that the algorithm is effective and feasible.

1. Introduction

Consider a class of generalized linear multiplicative programs (GLMPs):

(1)  (GLMP): min h(x) = ∏_{i=1}^p (c_i^T x + d_i)^{α_i}  s.t. x ∈ X = {x ∈ ℝ^n : Ax ≤ b}.

Here, p ≥ 2, X is a nonempty bounded closed set, A ∈ ℝ^{m×n}, b ∈ ℝ^m, c_i ∈ ℝ^n, d_i ∈ ℝ, and α_i > 0. The superscript T represents the transpose of a vector (e.g., c_i^T represents the transpose of the vector c_i). Besides, we assume that, for any x ∈ X, c_i^T x + d_i > 0 for all i = 1, 2, …, p.

The problem GLMP usually has multiple nonglobal local optimal solutions and is a class of NP-hard problems [1]; it is widely applied in the fields of financial optimization [2, 3], robust optimization [4], microeconomics [5], and multiobjective decision making [6, 7]. In addition, the GLMP includes a wide range of mathematical programming categories, such as linear multiplicative programming, quadratic programming, bilinear programming, and so on. For these and various other reasons, the GLMP has attracted the attention of many experts, scholars, and engineering practitioners and has stimulated a new wave of research in global optimization. With the increasing dependence of practical problems on optimization models, local optimization theory and global optimization algorithms have made remarkable progress. However, compared with local optimization algorithms, the theory of global optimization algorithms is still rather incomplete. Many methods have been proposed for this kind of problem, such as the level set algorithm [8], heuristic algorithms [9, 10], branch and bound algorithms [11–13], the outer approximation algorithm [14], the parametric simplex algorithm [15], and so on, but these methods do not give the computational complexity of the algorithm. In addition, Depetrini and Locatelli [16] considered the problem of minimizing the product of two affine functions over a polyhedral set and proposed a polynomial time approximation algorithm. Locatelli [17] presented an approximate algorithm for solving more general types of global optimization problems and derived its computational complexity, but numerical results for the algorithm are lacking. Recently, Shen and Wang [18] also proposed a fully polynomial time approximation algorithm for solving the problem GLMP globally, but without any acceleration technique. Moreover, for a more comprehensive overview of the GLMP, we encourage the reader to consult the more detailed literature [8, 19–21].

In this paper, two approximation algorithms are proposed for solving the GLMP. By establishing a nonuniform grid, the process of solving the original problem is transformed into the process of solving a series of linear problems, and it is proved that the proposed algorithm obtains a global ε-approximation solution of the GLMP. Besides, we put forward a two-stage acceleration technique to speed up Algorithm 1, which yields Algorithm 2. Then, by discussing the computational complexity of the algorithms, it is shown that the two algorithms are polynomial time approximation algorithms. Numerical experiments show that the performance of Algorithm 2 is obviously better than that of Algorithm 1, and the numerical results in Table 2 show that, in solving Problems 1–3, Algorithm 2 uses less CPU running time and fewer iterations than the methods in [17, 18].


The rest of this paper is organized as follows. In Section 2, we first transform the problem GLMP into its equivalent optimization problem EOP and give the corresponding region-division-linearization technique. Section 3 presents the global ε-approximation algorithm for the problem GLMP and establishes the convergence of the proposed algorithm. In Section 4, we analyze the computational complexity of the proposed algorithm, and in Section 5 we carry out some numerical experiments to verify the feasibility and effectiveness of the algorithm. The concluding section gives a brief summary.

2. Equivalence Problem and Its Linearization Technique

In this section, we first give the equivalent optimization problem EOP of the problem GLMP, then derive some properties of the objective function of the EOP, and finally explain the linearization technique for the equivalent problem.

2.1. Equivalent Problems and Their Properties

In order to solve the problem GLMP, the definition of a global ε-approximation solution is given below.

Definition 1. Let x be a global optimal solution of the problem GLMP for a given precision ε ∈ (0,1). If x̄ ∈ X satisfies h(x̄) ≤ (1 + ε)h(x), then x̄ is referred to as a global ε-approximation solution of the problem GLMP.

To obtain a global ε-approximation solution of the GLMP, let f_i(x) = c_i^T x + d_i, i = 1, 2, …, p, and l_i = min_{x∈X} f_i(x).
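For concreteness, the bounds l_i (and valid upper bounds u_i in the spirit of Theorem 1) can be obtained by solving 2p linear programs over X. The following minimal Python sketch uses scipy.optimize.linprog; since the exact construction of u_i in Theorem 1 is not reproduced above, the simple choice u_i = max_{x∈X} f_i(x) here is our own stand-in, and the function name is hypothetical.

    import numpy as np
    from scipy.optimize import linprog

    def outer_space_bounds(A, b, C, d):
        """Bounds l_i = min_{Ax<=b} c_i^T x + d_i and (as a simple
        stand-in for Theorem 1) u_i = max_{Ax<=b} c_i^T x + d_i.
        C is the (p, n) matrix whose rows are c_i^T; d has shape (p,)."""
        p = C.shape[0]
        l, u = np.empty(p), np.empty(p)
        for i in range(p):
            lo = linprog(C[i], A_ub=A, b_ub=b, bounds=(None, None))
            hi = linprog(-C[i], A_ub=A, b_ub=b, bounds=(None, None))
            if not (lo.success and hi.success):
                raise ValueError("X must be nonempty and bounded")
            l[i] = lo.fun + d[i]           # l_i = min f_i(x)
            u[i] = -hi.fun + d[i]          # u_i = max f_i(x) >= l_i
        return l, u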

Theorem 1. For each i = 1,2, …, p, let , , , . Then, for each i ∈ {1,2, …, p}, let ; then, fi(x) ≤ ui with .

Proof. It is easy to see that, for any i ∈ {1, 2, …, p} and any x ∈ X, we have l_i ≤ f_i(x); thus,

(2)

Therefore, the conclusion holds.

Thus, according to Theorem 1, an upper bound u_i is provided for each f_i(x), i = 1, 2, …, p.

On the basis of the above definitions of l_i and u_i, define the rectangle H as follows:

(3)  H = {y ∈ ℝ^p : l_i ≤ y_i ≤ u_i, i = 1, 2, …, p}.

Moreover, the rectangle H is also called the outer space of the GLMP. Thus, by introducing the variable y = (y_1, y_2, …, y_p)^T ∈ H, the problem GLMP is equivalent to the following problem P1:

(4)  (P1): min ∏_{i=1}^p y_i^{α_i}  s.t. f_i(x) ≤ y_i, i = 1, 2, …, p, x ∈ X, y ∈ H.

Next, the equivalence of the problems GLMP and P1 is explained by Theorem 2.

Theorem 2. x is a global optimal solution of the problem GLMP if and only if (x, y) is an optimal solution of problem P1 and y_i = f_i(x), i = 1, 2, …, p.

Proof. If x is a global optimal solution of the problem GLMP, let y_i = f_i(x), i = 1, 2, …, p. Then, it is obvious that (x, y) is a feasible solution of P1. Suppose (x, y) is not an optimal solution of P1; then, there is at least one feasible solution (x̃, ỹ) of P1 such that

(5)  h(x̃) = ∏_{i=1}^p (f_i(x̃))^{α_i} ≤ ∏_{i=1}^p ỹ_i^{α_i} < ∏_{i=1}^p y_i^{α_i} = h(x),

which contradicts the optimality of x, so the hypothesis does not hold, and then (x, y) is an optimal solution of P1.

Conversely, if (x, y) is an optimal solution of P1 and there is an i ∈ {1, 2, …, p} such that f_i(x) < y_i, let ỹ_i = f_i(x) and ỹ_j = y_j for j ≠ i; then, (x, ỹ) is a feasible solution of P1 and

(6)  ∏_{j=1}^p ỹ_j^{α_j} < ∏_{j=1}^p y_j^{α_j},

which contradicts the optimality of (x, y), so y_i = f_i(x), i = 1, 2, …, p. Suppose x is not a global optimal solution of the problem GLMP; then, there must be an x̂ ∈ X such that h(x̂) < h(x). Let ŷ_i = f_i(x̂), i = 1, 2, …, p; obviously, (x̂, ŷ) is a feasible solution of P1, so we have

(7)  ∏_{i=1}^p ŷ_i^{α_i} = h(x̂) < h(x) = ∏_{i=1}^p y_i^{α_i},

which contradicts the optimality of (x, y). Therefore, x is a global optimal solution of the problem GLMP, which completes the proof.

It is easy to understand from Theorem 2 that the problems GLMP and P1 are equivalent and have the same global optimal value.

Then, for a given y ∈ H, define the set

(8)  D(y) = {x ∈ X : f_i(x) ≤ y_i, i = 1, 2, …, p}

and the function

(9)  g(y) = ∏_{i=1}^p y_i^{α_i} if D(y) ≠ ∅; g(y) = +∞ otherwise.

Then, the problem P1 is equivalent to the following equivalent optimization problem:

(10)  (EOP): min g(y)  s.t. y ∈ H.

Theorem 3. y is a global optimal solution of the problem EOP if and only if (x, y) is an optimal solution of P1 with x ∈ D(y).

Proof. Suppose (x, y) is an optimal solution of P1; then, according to Theorem 2, we can know that y_i = f_i(x), i = 1, 2, …, p, and y ∈ H. In addition, x ∈ D(y), so D(y) ≠ ∅. Suppose that y is not a global optimal solution of the problem EOP; then, there must be a ŷ ∈ H such that D(ŷ) ≠ ∅ and g(ŷ) < g(y); then, there must also be an x̂ ∈ D(ŷ). Then, (x̂, ŷ) is a feasible solution of P1 with ∏_{i=1}^p ŷ_i^{α_i} < ∏_{i=1}^p y_i^{α_i}, which contradicts the optimality of (x, y), so the hypothesis does not hold, and y is a global optimal solution of the problem EOP.

On the other hand, if y is a global optimal solution of the problem EOP, then D(y) ≠ ∅, and there must be an x ∈ D(y) such that (x, y) is a feasible solution of P1. Suppose (x, y) is not a global optimal solution of the problem P1; then, there must be an optimal solution (x̃, ỹ) of the problem P1 such that ∏_{i=1}^p ỹ_i^{α_i} < ∏_{i=1}^p y_i^{α_i}, so ỹ ∈ H with D(ỹ) ≠ ∅ and g(ỹ) < g(y), which contradicts the fact that y is a global optimal solution of the problem EOP. Therefore, (x, y) is a global optimal solution of P1, and y_i = f_i(x) can be obtained from Theorem 2, which completes the proof.

Through Theorem 3, the problems EOP and P1 have the same global optimal value, so combined with Theorem 2, the problems EOP and GLMP are also equivalent. Therefore, we can solve the equivalent problem EOP instead of addressing the problem GLMP.

Next, we consider the following linear programming problem:

(11)  (LP_y): min { ∑_{i=1}^p (α_i/y_i) f_i(x) : x ∈ D(y) }.

If D(y) ≠ ∅, the optimal solution of the problem LP_y is recorded as x_y, and let ŷ_i = f_i(x_y), i = 1, 2, …, p; then,

(12)  ŷ_i = f_i(x_y) ≤ y_i, i = 1, 2, …, p.

Furthermore, according to the Jensen inequality (in the form of the weighted arithmetic–geometric mean inequality), we have

(13)  ∏_{i=1}^p (f_i(x)/y_i)^{α_i} ≤ ((1/ρ) ∑_{i=1}^p α_i f_i(x)/y_i)^ρ, x ∈ D(y),

and then

(14)  h(x_y) = g(ŷ) ≤ g(y).

Theorem 4. Suppose x ∈ X is a global optimal solution of the original problem GLMP; let y_i = f_i(x), i = 1, 2, …, p; then, y ∈ H and x is also a global optimal solution of the problem LP_y.

Proof. Firstly, according to Theorems 2 and 3, we know that y is a global optimal solution of the problem EOP. Then, by using formula (14) and the optimality of the global optimal solution y of the EOP, we can see that x is an optimal solution of the problem LP_y.

Next, the properties of the function g(y) over H are given by Theorem 5.

Theorem 5. For a given precision ε ∈ (0,1), let δ = (1 + ε)^{1/ρ}; then, for any y ∈ H, there is

(15)

In addition, if D(y) ≠ ∅, the optimal solution of the problem LP_y is recorded as x_y; then, let ŷ_i = f_i(x_y), i = 1, 2, …, p; there is also

(16)

Proof. For all , according to the definition of D(y) and δ = (1 + ε)^{1/ρ} > 1, one can know .

If , for any , we have D(y) ≠ ∅; obviously, and for each i = 1,2, …, p. Thus,

(17)

Moreover, according to the definition of function g(y), ; thus,

(18)

And in combination with the formulas (17) and (18), we have

(19)

Further, from formula (19) and the definition of δ, we can see that formula (16) holds, and formula (15) is of course also true.

If , it is clear that the inequality is established.

For all , if D(y) ≠ ∅, we have , and ; then,

(20)

Besides,

(21)

By using the definition of δ and formulas (20) and (21), one can infer that formulas (15) and (16) hold.

If D(y) = ∅ and g(y) = +∞, then formulas (15) and (16) obviously hold.

If , the problem is not solvable, and for any , there is D(y) = ∅; then, g(y) = +∞, so formula (15) is clearly established, and the proof of the conclusion is completed.

Theorem 5 shows that, for any ν, we can determine whether the set D(ν) is nonempty by solving the linear programming problem (LP_ν) and then determine whether formula (16) holds.

2.2. Linearization Techniques

The objective function of the problem EOP is still nonconvex compared with that of the problem GLMP, but the variable y of the objective function lies in the p-dimensional space H. Therefore, based on the above discussion, in order to solve the EOP, for a given ε ∈ (0,1), we first split the outer space H in each dimension at the ratio δ = (1 + ε)^{1/ρ}, thus producing several small rectangles.

To do this, let

(22)  ν_i = l_i δ^{k_i}, k_i ∈ ℤ_+, ν_i ≤ δu_i, i = 1, 2, …, p,

where ℤ_+ represents the set of non-negative integers. Therefore, the number of these small rectangles is finite, and the set of all their vertices is

(23)  B_δ = {ν = (ν_1, ν_2, …, ν_p) : ν_i = l_i δ^{k_i}, k_i ∈ ℤ_+, ν_i ≤ δu_i, i = 1, 2, …, p},

where δ = (1 + ε)^{1/ρ}. Obviously, for each y ∈ H, there must be a vertex (ν_1, ν_2, …, ν_p) ∈ B_δ making y_i ∈ [ν_i, δν_i], i = 1, 2, …, p. Then, it can be concluded that the rectangle H can be approximated by the set B_δ.
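As an illustration of (22) and (23), the following sketch (continuing the previous snippet; ρ, which we pass as a parameter, is the quantity appearing in δ = (1 + ε)^{1/ρ}) enumerates the vertex set B_δ as the Cartesian product of the per-dimension geometric sequences l_i, l_i δ, l_i δ², …; the function name is ours.

    import math
    from itertools import product
    import numpy as np

    def build_grid(l, u, eps, rho):
        """Vertices nu with nu_i = l_i * delta^k, chosen so that every
        y in H = [l, u] has a vertex with y_i in [nu_i, delta * nu_i].
        Requires l_i > 0, which holds since c_i^T x + d_i > 0 on X."""
        delta = (1.0 + eps) ** (1.0 / rho)
        axes = []
        for li, ui in zip(l, u):
            k_max = max(0, math.ceil(math.log(ui / li) / math.log(delta)))
            axes.append([li * delta**k for k in range(k_max + 1)])
        return [np.array(v) for v in product(*axes)]   # |B_delta| vertices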
Next, by using the set B_δ, the process of solving the problem EOP can be transformed into solving a series of subproblems. To this end, for each ν ∈ B_δ, we need to consider the value of g(ν), that is, we need to determine whether the set D(ν) is empty. According to Theorem 5, we can determine whether D(ν) is nonempty by solving the linear programming problem (LP_ν). Therefore, for each vertex ν ∈ B_δ, the following linear programming subproblem needs to be solved, that is,

(24)  (LP_ν): min { ∑_{i=1}^p (α_i/ν_i) f_i(x) : x ∈ D(ν) }.

On the basis of the conclusion of Theorem 5, if the problem (LP_ν) can be solved (its solution is recorded as x_ν), then

(25)  ν̂ = (ν̂_1, ν̂_2, …, ν̂_p) with ν̂_i = f_i(x_ν), i = 1, 2, …, p,

and thus

(26)  g(ν̂) ≤ g(ν).
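Checking whether D(ν) is empty, and extracting x_ν when it is not, is a single LP. Below is a sketch under the same assumptions as the previous snippets, using the weighted-ratio objective from our reading of (11) and (24); if the paper's actual objective differs, only the first argument of linprog changes.

    def solve_lp_nu(A, b, C, d, alpha, nu):
        """(LP_nu): min sum_i (alpha_i / nu_i) * (c_i^T x + d_i)
        subject to Ax <= b and c_i^T x + d_i <= nu_i (i.e., x in D(nu)).
        Returns x_nu, or None when D(nu) is empty."""
        obj = (alpha / nu) @ C               # linear part of the objective
        A_ub = np.vstack([A, C])             # stack Ax <= b with Cx <= nu - d
        b_ub = np.concatenate([b, nu - d])
        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
        return res.x if res.success else None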

3. Analysis of Algorithm and Its Computational Complexity

This section presents an approximation algorithm based on region division and linearization for solving the problem EOP, together with the corresponding convergence analysis.

3.1. Approximate Algorithm

To solve the EOP, we subdivide the outer space H into a finite number of small rectangles with ratio δ and put all the vertices of these small rectangles into the set B_δ.

Then, for each vertex ν ∈ B_δ, we solve the linear programming problem (LP_ν); if (LP_ν) is feasible and has an optimal solution x_ν, then D(ν) ≠ ∅, and we can obtain a feasible solution (formula (25)) of the EOP from x_ν, which makes

(27)

If there is a ν ∈ B_δ that satisfies formula (16), then

(28)

and thus x_ν is a global ε-approximation solution of the problem GLMP. The specific algorithm steps are as follows.
    Algorithm 1: Original algorithm.

(1) Step 0 (initialization). Set ε ∈ (0,1), δ = (1 + ε)^{1/ρ}, F = +∞, and k = 0. By using formulas (22) and (23), subdivide H into smaller rectangles such that the ratio of two consecutive segments in each dimension is δ. Represent the vertices of the small rectangles as ν = (ν_1, ν_2, …, ν_p) and store them in the set B_δ.

(2) Step 1. Select a point ν from B_δ, solve the linear programming problem (LP_ν), and let B_δ = B_δ∖{ν}.

(3) Step 2. If the problem (LP_ν) is solvable, then D(ν) ≠ ∅ and g(ν) = ∏_{i=1}^p ν_i^{α_i}; if g(ν) < F, let F = g(ν) and record x_ν and ν as the current best solutions. If B_δ ≠ ∅, set k = k + 1 and go to Step 1; otherwise, the algorithm terminates; let

(29)

and then x*, y* are global ε-approximation solutions to the problems GLMP and EOP, respectively.
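Combining the pieces, here is a compact Python sketch of Algorithm 1, reusing the hypothetical helpers outer_space_bounds, build_grid, and solve_lp_nu from the previous snippets; the identification ρ = ∑_{i=1}^p α_i is our own assumption, since the source does not reproduce the definition of ρ.

    def algorithm1(A, b, C, d, alpha, eps):
        """Enumerate B_delta and solve one LP per vertex, keeping the
        incumbent with the smallest g(nu) (Algorithm 1)."""
        rho = alpha.sum()                    # assumption: rho = sum_i alpha_i
        l, u = outer_space_bounds(A, b, C, d)
        F, x_best, y_best = np.inf, None, None
        for nu in build_grid(l, u, eps, rho):
            x_nu = solve_lp_nu(A, b, C, d, alpha, nu)
            if x_nu is None:                 # D(nu) is empty: discard nu
                continue
            g_nu = np.prod(nu ** alpha)      # g(nu) as in (9)
            if g_nu < F:
                F, x_best, y_best = g_nu, x_nu, nu
        return x_best, y_best, F

The loop touches each vertex exactly once, so the total work is |B_δ| LPs, matching the complexity count given in Section 4.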

Theorem 6. For a given precision ε ∈ (0,1), let δ = (1 + ε)^{1/ρ}, and let x* be the solution returned by Algorithm 1, i.e., an optimal solution of the corresponding linear programming problem. Then, Algorithm 1 obtains a global ε-approximation solution of problem GLMP, i.e.,

(30)  h(x*) ≤ (1 + ε)h(x),

where x is the global optimal solution of the original problem GLMP.

Proof. Let

(31)

According to Theorem 1, we have

(32)

Then, formula (32) implies that , so there must be a νBδ which makes

(33)

So, using Theorem 5 on the small rectangle [(ν/δ), ν], there will be

(34)

Thus,

(35)

Noting that , we can know

(36)

Since is the optimal solution to the linear programming problem , let

(37)

Apparently, . So, by taking advantage of the formula (16) in Theorem 5, we have

(38)

Therefore, by integrating formulas (35) and (38) and combining δ = (1 + ε)^{1/ρ}, we can obtain

(39)
and this proof is completed.

Remark 1. According to Theorem 6, if y ∈ B_δ, then, from Theorem 5, the optimal solution of the linear programming problem (LP_y) is exactly the global optimal solution of the original problem GLMP.

Through Theorem 6, we can see that, for a given precision ε ∈ (0,1), Algorithm 1 will obtain a global ε-approximation solution of the problem GLMP. Moreover, Remark 1 also shows that, if y ∈ B_δ, then Algorithm 1 will find a global optimal solution of the problem GLMP exactly.

3.2. Accelerating Techniques

Algorithm 1 shows that, for every ν ∈ B_δ, the linear programming problem (LP_ν) must be solved in order to verify whether D(ν) is nonempty. Hence, the computational cost of Algorithm 1 depends on the number of points in the set B_δ. The acceleration technique proposed below discards some points of B_δ that do not need to be considered and only considers the region that contains the global optimal solution of the problem EOP. The detailed process is given below.

If ȳ is the best known solution to the problem EOP and x̄ is the optimal solution to the linear programming problem (LP_ȳ), for each i = 1, 2, …, p, let ỹ_i = f_i(x̄); obviously, ỹ_i ≤ ȳ_i; then, ỹ may be a better solution than ȳ. Thus, using ỹ, we may be able to remove from B_δ more vertices that do not need to be explored. To give the acceleration technique for Algorithm 1, we first need to specify a necessary condition that the points of each subrectangle H^k ⊆ H^0 = H (k ≥ 1) containing the global optimal solution of the problem EOP must satisfy, that is,

(40)

where H^k = [l, u^k], , , i = 1, 2, …, p. Similarly, if the ratio δ = (1 + ε)^{1/ρ} is used to segment the rectangle H^k in each dimension, this will produce a finite number of small rectangles. For this purpose, let

(41)

Then, a set of vertices of a finite number of small rectangles will also be generated on the rectangle H^k, that is,

(42)

where . Clearly, and .

Based on the above discussion, we will give Propositions 1 and 2 to clarify the acceleration techniques of the algorithm.

Proposition 1. The global optimal solution of the problem EOP cannot be attained on the set if there is an i ∈ {1, 2, …, p} such that , where

(43)

Proof. If , then there must be , and thus there is

(44)
which contradicts the inequality chain (40), so the conclusion is valid.

With Proposition 1, we generate a new rectangle H^{k+1} and vertex set , i.e., for each i = 1, 2, …, p, let

(45)

as well as

(46)

Then, H^{k+1} = [l, u^{k+1}] with .

Moreover, the above rules may produce a small-rectangle vertex set with relatively few new elements, but there is still , so we then give Proposition 2 to delete the other elements in that do not need to be considered.

Proposition 2. If ȳ is the best known solution to the problem EOP and x̄ is the optimal solution to the linear programming problem (LP_ȳ), for each i = 1, 2, …, p, let ỹ_i = f_i(x̄), and define the set

(47)

Then, for any ν in the set (47), the EOP cannot get a better solution than ỹ.

Proof. Since x̄ is the optimal solution of the linear programming problem (LP_ȳ), the set defined in (47) contains at least one point and is therefore nonempty. For any ν in this set, obviously D(ν) ≠ ∅. According to the definition of the function g(y), for each such ν, the objective function value of the EOP satisfies

(48)

and this conclusion is proved.
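In code, Propositions 1 and 2 amount to cheap vertex filters applied before any further LP is solved. The sketch below is only the natural reading: the precise sets (43) and (47) are not reproduced in the source, so the two tests here, a box test against the shrunken upper corner u^{k+1} and an incumbent-value test against F, are our own interpretation.

    def prune_vertices(vertices, alpha, F, u_new):
        """Drop vertices outside the reduced rectangle [l, u^{k+1}]
        (Proposition 1) or whose value prod_i nu_i^alpha_i already
        reaches the incumbent F (Proposition 2)."""
        kept = []
        for nu in vertices:
            if np.any(nu > u_new):               # outside H^{k+1}
                continue
            if np.prod(nu ** alpha) >= F:        # cannot beat the incumbent
                continue
            kept.append(nu)
        return kept

Each test costs O(p) per vertex, which is negligible compared with one LP solve, so pruning can only reduce the total running time.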

Next, for a given ε ∈ (0,1) and δ = (1 + ε)^{1/ρ}, we make use of Proposition 2; let

(49)

Through the expression of (46), the set is defined as follows:

(50)

Therefore, for the convenience of narration, let Ξ^k denote the resulting set of vertices that remain to be considered. This means that, in order to obtain a global ε-approximation solution of the problem EOP, it is only necessary to solve at most |Ξ^k| linear programming subproblems (LP_ν) to determine whether D(ν) is empty, which determines the function value g(ν) at each vertex ν ∈ Ξ^k. Then, by using the set Ξ^k, the computational efficiency of Algorithm 1 will be improved, leading to the following algorithm (Algorithm 2).


Note that Algorithm 2 simply removes the vertices that cannot lead to a global optimal solution; therefore, similar to Theorem 6, Algorithm 2 will also return a global ε-approximation solution of the problems GLMP and EOP.

    Algorithm 2: Improved algorithm.

(1) Step 0 (initialization). Set ε ∈ (0,1) and δ = (1 + ε)^{1/ρ}. By using formulas (22) and (23), H^0 = H is subdivided into smaller rectangles, such that the ratio of two consecutive segments is δ in each dimension. Represent the vertices of the small rectangles as ν = (ν_1, ν_2, …, ν_p) and store them in the set B_δ. Let F = +∞, T = ∅, Ξ^0 = B_δ, and k = 0.

(2) Step 1. Select a point ν from Ξ^k, solve the linear programming problem (LP_ν), and let T = T ∪ {ν}.

(3) Step 2. If the problem (LP_ν) is solvable, then D(ν) ≠ ∅ and g(ν) = ∏_{i=1}^p ν_i^{α_i}; if g(ν) < F, let F = g(ν) and record x_ν and ν as the current best solutions. Use rules (45) and (46) to produce H^{k+1} and its vertex set, and use formulas (49) and (50) to obtain the set Ξ^{k+1}. If Ξ^k ≠ ∅, set k = k + 1 and go to Step 1; otherwise, the algorithm terminates; let

(51)

and then x*, y* are global ε-approximation solutions to the problems GLMP and EOP, respectively.

4. Analysis of Computational Complexity of the Algorithm

We first give Lemma 1 to discuss the computational complexity of the two algorithms.

Lemma 1 (see [22]). Let λ be the maximum of the absolute values of all the elements of A, b, c_i, and d_i in problem GLMP; then, each component of any extreme point x^0 of X can be expressed as x_j^0 = p_j/q, where 0 ≤ p_j ≤ (nλ)^n, 0 < q ≤ (nλ)^n, j = 1, 2, …, n.

Because, for each i = 1, 2, …, p, the optimal solution of the linear programming problem l_i = min_{x∈X} f_i(x) is attained at an extreme point of X, by Lemma 1, we have , where , 0 < q_i ≤ (nλ)^n, j = 1, 2, …, n. Thus, . Moreover, let

(52)
(53)

and, for the sake of a smooth description of Theorem 7 below, recall that u_i is defined in Theorem 1.

Theorem 7. For a given p ≥ 2, in order to obtain a global ε-approximation solution of the problem GLMP, the running time required by the proposed Algorithm 1 is at most

(54)

where , , and T(m + p, n) denotes an upper bound on the time needed to solve a linear programming problem with m + p linear constraints and n variables.

Proof. From formulas (22) and (23), we can see that the maximum number of points of the set B_δ is

(55)

Using the definition of in formula (52) and Lemma 1, we have

(56)

Furthermore, we also have

(57)
by using formula (53) and the above inequality (56). Of course, according to the definition of Mi and ui in Theorem 1, and in conjunction with , there will be
(58)

By means of above formulas (56) and (58), we can have

(59)
and thus
(60)

Using ε ∈ (0,1) and δ = (1 + ε)^{1/ρ} in Algorithm 1, together with (ε/2) < ln(1 + ε) < ε, we have

(61)

Then, by using the above formulas (55), (60), and (61), the upper bound on the number |B_δ| of points of B_δ is

(62)

From formula (62), we can see that the running time of Algorithm 1 is at most

(63)

when a global ε-approximation solution is obtained, and then the proof of the conclusion is completed.
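For intuition, the counting behind (55)–(62) can be summarized by the following compact chain (in LaTeX notation); this version is our own, using only the grid definition (23) and the elementary bound ln(1 + ε) > ε/2 for ε ∈ (0,1):

    |B_\delta| \le \prod_{i=1}^{p}\left(\left\lceil \log_{\delta}\frac{u_i}{l_i}\right\rceil + 1\right)
               = \prod_{i=1}^{p}\left(\left\lceil \frac{\rho\,\ln(u_i/l_i)}{\ln(1+\varepsilon)}\right\rceil + 1\right)
               \le \left(\frac{2\rho}{\varepsilon}\,\max_{1\le i\le p}\ln\frac{u_i}{l_i} + 2\right)^{p}.

Since each vertex costs one LP solve, the total running time is at most |B_δ| · T(m + p, n), which is polynomial in 1/ε for any fixed p.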

Remark 2. Propositions 1 and 2 show that we can accelerate Algorithm 1 by removing the vertices of the small rectangles that need not be considered, which leads to Algorithm 2, which is more resource-efficient than Algorithm 1; in other words, Algorithm 2 is an improvement on Algorithm 1. The upper bound on the CPU running time required by Algorithm 2 equals that of Algorithm 1 only in the most extreme case (where the acceleration techniques always fail). Therefore, Algorithm 2 is likewise a polynomial time approximation algorithm.

5. Numerical Experiments

This section tests the performance of the algorithms on several test problems. All testing procedures were performed in MATLAB (2012a) on a computer with an Intel(R) Core(TM) i5-2320 3.00 GHz processor, 4.00 GB of memory, and the Microsoft Windows 7 operating system.

Problem 1. (see [17, 18])

(64)

Problem 2. (see [17, 18])

(65)

Problem 3. (see [8, 17, 18])

(66)

Problem 4. (see [20])

(67)

Problem 5. (see [19])

(68)
where
(69)

Obviously, Problem 5 can be transformed into the following forms:

(70)

Problem 6. (see [8])

(71)

Problem 7.

(72)
where p ≥ 2, the components of c_i ∈ ℝ^n (i = 1, 2, …, p) are pseudorandom numbers in [0, 1], α_i (i = 1, 2, …, p) are pseudorandom numbers in [0.00001, 1], d_i = 1, the constraint matrix elements a_ij are generated in [−1, 1] via a_ij = 2ϖ − 1, in which ϖ are pseudorandom numbers in [0, 1], and the right-hand side values are generated via , in which β_i are pseudorandom numbers in [0, 1].
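For reproducibility, here is a sketch of a Problem 7 instance generator in the same Python setting as the earlier snippets. The right-hand-side rule is elided in the source, so the choice b_i = ∑_j a_ij + 2β_i below is our own assumption (it makes the all-ones vector feasible), not necessarily the authors' rule.

    def random_instance(p, m, n, seed=0):
        """Random Problem 7 data; the rhs rule is an assumption."""
        rng = np.random.default_rng(seed)
        C = rng.random((p, n))                    # c_i components in [0, 1]
        alpha = rng.uniform(1e-5, 1.0, size=p)    # alpha_i in [0.00001, 1]
        d = np.ones(p)                            # d_i = 1
        A = 2.0 * rng.random((m, n)) - 1.0        # a_ij = 2*w - 1, w in [0, 1]
        b = A.sum(axis=1) + 2.0 * rng.random(m)   # assumed rhs (beta_i in [0, 1])
        return A, b, C, d, alpha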

The numerical results in Tables 1 and 2 show that Algorithms 1 and 2 can effectively solve the test problems known from the literature and obtain approximate solutions, so both algorithms are feasible.

Further, we carry out random numerical experiments on Problem 7 to explore the performance of the two algorithms. We set the convergence tolerance of the algorithms to ε = 0.05. For each set of fixed parameters (p, m, n), we run the two algorithms 10 times for numerical comparison, and the numerical results are given in Table 3. In Table 3, Avg (Std) time and Avg (Std) Iter represent the average (standard deviation) of the CPU running time and of the number of iterations, respectively, over the 10 runs. Table 3 shows that the computational performance of Algorithm 2 is better than that of Algorithm 1, mainly because our acceleration technique plays a significant role by deleting the vertices of small rectangles that do not need to be considered. Hence, we believe that this acceleration technique may be generalized to other approximation algorithms such as those in [17, 18, 20].

Moreover, when the parameters (p, m) are fixed, the CPU running time of the two algorithms increases with the problem size n of Problem 7. When the parameters (m, n) are fixed, the CPU running time and the number of iterations of the two algorithms grow with the number p of linear functions in the objective function of Problem 7.

Table 1. Comparison of results for Problems 1–6.
Problem Reference Optimal solution Optimal value
1 Locatelli [17] (1.3148, 0.1396, 0.0000, 0.4233) 0.890190
Shen and Wang [18] (1.3148, 0.1396, 0.0000, 0.4233) 0.890190
Liu and Zhao [8] (1.3148, 0.13955, 2.6891 × 10^−14, 0.42329) 0.890190
Algorithms 1/2 (1.3148, 0.1396, 0.0000, 0.4233) 0.890190
  
2 Locatelli [17] (3.000, 2.000) 5.014514
Shen and Wang [18] (3.000, 2.000) 5.009309
Liu and Zhao [8] (3.000, 2.000) 5.009309
Algorithm 1 (3.000, 2.000) 5.009309
Algorithm 2 (3.000, 2.000) 5.009309
  
3 Liu and Zhao [8] (1, 1) 997.661265
Locatelli [17] (1, 1) 997.661265
Shen and Wang [18] (1, 1) 997.661265
Algorithm 1/2 (1, 1) 997.661265
  
4 Shen and Hang [20] (2, 8) 10
Algorithm 1/2 (2, 8) 10
  
5 Zhang et al. [19] (0.0, 8.0, 1.0, …) 0.91235
Algorithm 1 (0.0, 8.0, 1.0, …) 0.91235
Algorithm 2 (0.0, 8.0, 1.0, …) 0.91235
  
6 Liu and Zhao [8] (1.25, 1) 263.785989
Algorithm 1 (1.25, 1) 263.785989
Algorithm 2 (1.25, 1) 263.785989
Table 2. Comparison of results for Problems 1–6.
Problem Reference Iter Time ε
1 Locatelli [17] 404 9.606 0.05
Shen and Wang [18] 3 0.047 0.05
Algorithm 1/2 1 0.0149 0.05
  
2 Locatelli [17] 69 2.4960 0.15009
Shen and Wang [18] 4 0.0800 0.15009
Algorithm 1 6 0.1024 0.15009
Algorithm 2 4 0.0657 0.15009
  
3 Locatelli [17] 5 1.126 0.2
Shen and Wang [18] 4 0.085 0.2
Algorithm 1/2 1 0.0116 0.2
  
4 Algorithm 1/2 1 0.0241 0.01
  
5 Algorithm 1 797 47.5367 0.2
Algorithm 2 507 30.2111 0.2
  
6 Algorithm 1 63 59.4304 0.2
Algorithm 2 37 35.6072 0.2
Table 3. Comparison of numerical results by using Problem 7.
(p, m, n) Algorithm 1 Algorithm 2
Avg (Std) time Avg (Std) Iter Avg (Std) time Avg (Std) Iter
(2, 10, 20) 2.9068 (2.8062) 75.8 (84.7700) 1.9686 (1.9352) 22.5 (27.3395)
(2, 20, 20) 2.3784 (3.1017) 52.6 (76.8936) 1.7129 (2.1472) 23.4 (35.1801)
(2, 22, 20) 0.8663 (0.9232) 18.1 (25.0257) 0.6568 (0.6239) 8 (10.0199)
(2, 20, 30) 6.2414 (6.3274) 165.2 (164.0334) 3.4923 (3.7124) 49 (61.764)
(2, 35, 50) 3.9868 (4.4041) 66.4 (78.2102) 3.3046 (3.9017) 32.3 (38.6886)
(2, 45, 60) 5.8908 (5.4016) 129.1 (125.2481) 3.7409 (3.3526) 40.5 (38.1084)
(2, 45, 100) 6.6579 (5.9685) 125.3 (123.7061) 4.2665 (3.6485) 40.2 (40.1343)
(2, 60, 100) 7.8626 (6.3057) 96.6 (99.4818) 4.5517 (2.8324) 26 (19.8343)
(2, 70, 100) 9.1245 (8.1057) 96.3 (104.6633) 5.0942 (3.3528) 23.6 (18.9430)
(2, 70, 120) 11.2742 (13.2311) 148 (215.2185) 6.0341 (5.5968) 35 (37.3256)
(2, 100, 10) 0.1877 (0.1300) 2.4 (2.9732) 0.1542 (0.0663) 1.3 (0.6403)
(2, 100, 50) 0.9029 (0.5392) 8.9 (7.0632) 0.6542 (0.3654) 3.9 (2.7730)
(2, 100, 100) 9.8811 (8.0793) 68.6 (55.5287) 6.9462 (6.3403) 24.1 (26.6813)
(2, 100, 150) 15.4331 (10.2573) 97.4 (75.1720) 9.8838 (6.2545) 30.8 (22.1260)
(2, 100, 200) 27.1157 (25.3267) 124.4 (130.8076) 18.9561 (16.8612) 49.2 (47.3810)
(2, 100, 250) 64.8144 (72.0125) 285.1 (353.7955) 40.3711 (41.0487) 91 (103.9576)
(2, 100, 300) 87.5572 (100.4846) 331 (398.8197) 55.5067 (64.5147) 117.2 (153.2434)
(2, 100, 400) 132.4251 (176.2381) 363.9 (581.9823) 87.0321 (97.4482) 130.7 (169.6585)
(2, 100, 500) 158.4767 (183.7060) 338.3 (493.9785) 111.0958 (106.7086) 133.8 (145.3470)
(2, 100, 700) 331.3275 (351.8534) 414.2 (546.8741) 272.7311 (264.9189) 227.1 (257.8927)
(2, 100, 1000) 1020.9318 (880.7910) 1063.6 (1019.7921) 778.8913 (638.1782) 522 (460.2479)
(3, 100, 10) 4.4724 (7.4341) 59.7 (117.2502) 3.6522 (5.7934) 35.2 (55.7867)
(3, 100, 50) 90.8139 (74.9843) 1062.4 (982.8277) 57.8342 (55.5978) 473.3 (564.6533)
(4, 100, 10) 75.4301 (189.1250) 1509.3 (4122.7180) 52.9122 (122.0553) 868.1 (2203.3283)

6. Concluding Remarks

In this paper, we propose two polynomial time approximation algorithms that can be utilized to solve the problem GLMP globally, where Algorithm 2 is obtained by accelerating Algorithm 1 with the proposed acceleration technique. The numerical results show that both algorithms are effective and feasible, and the overall computational performance of Algorithm 2 is better than that of Algorithm 1, which shows that our acceleration technique is efficient and may be extended to other approximation algorithms such as those in [17, 18, 20].

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (grant no. 11961001), the Construction Project of First-Class Subjects in Ningxia Higher Education (NXYLXK2017B09), and the Major Proprietary Funded Project of North Minzu University (ZDZX201901).

Data Availability

All data and models generated or used during the study are described in Section 5 of this article.
