A Smoothing Method with Appropriate Parameter Control Based on Fischer-Burmeister Function for Second-Order Cone Complementarity Problems
Abstract
We deal with complementarity problems over second-order cones. The second-order cone complementarity problem (SOCCP) is an important class of problems arising in real-world applications and encompasses many optimization problems. The SOCCP can be reformulated as a nonsmooth system of equations. Based on the smoothed Fischer-Burmeister function, we construct a smoothing Newton method for solving such a nonsmooth system; the proposed method controls the smoothing parameter appropriately at each iteration. We show the global and quadratic convergence of the method. Finally, some numerical results are given.
1. Introduction
The SOCCP is a wide class of complementarity problems. For example, it includes the mixed complementarity problem (MCP) and the nonlinear complementarity problem (NCP) [1] as subclasses, since these correspond to the case where ni = 1 for each i = 1, …, m (= n). Moreover, the second-order cone programming (SOCP) problem can be reformulated as an SOCCP by means of its Karush-Kuhn-Tucker (KKT) conditions. Apart from these, some practical problems in game theory [2, 3] and architecture [4] can be reformulated as SOCCPs.
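As a concrete illustration of the KKT-based reformulation mentioned above, a standard linear SOCP and its optimality conditions can be sketched as follows (a textbook derivation with generic data c, A, b, not the paper's notation):

```latex
% Primal linear SOCP over the Cartesian product cone K
\min_{x \in \mathbb{R}^n} \; c^{\top} x
\quad \text{subject to} \quad A x = b, \;\; x \in \mathcal{K}.
% Its KKT conditions: there exist y \in \mathbb{R}^m and s \in \mathbb{R}^n with
A^{\top} y + s = c, \qquad A x = b, \qquad
\mathcal{K} \ni x \perp s \in \mathcal{K}.
% Taking (x, s) as the complementary pair and y as a free variable
% yields an SOCCP of the general form (1).
```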
Much theoretical and algorithmic research has been conducted on solving the SOCCP. Fukushima et al. [5] showed that the natural residual function, also called the min function, and the Fischer-Burmeister function for the NCP can be extended to the SOCCP by using Jordan algebra. They further constructed smoothing functions for those SOC complementarity (C-) functions and analyzed the properties of their Jacobian matrices. Hayashi et al. [6] proposed a smoothing method based on the natural residual and showed its global and quadratic convergence. On the other hand, Chen et al. [7] proposed another smoothing method based on the natural residual in which, in contrast to [6], the smoothing parameter is treated as a variable; they also showed the global and quadratic convergence of their method. Similarly to Chen et al., Narushima et al. [8] proposed a smoothing method that treats the smoothing parameter as a variable; they used the Fischer-Burmeister function instead of the natural residual function and also proved the global and quadratic convergence of their method.
- (i)
We do not assume any special structure on the function F in SOCCP (1). In [6, 7, 9, 10], the authors focused on the following type of SOCCP:
()
which is a special case of SOCCP (1) with
()
for some continuously differentiable function f : Rn → Rn. In [11–13], the authors studied the following type of SOCCP:
()
which is obtained by letting
()
where f and g : Rn → Rn are continuously differentiable functions. However, we assume neither (4) nor (6). Therefore, our method is applicable to a wider class of SOCCPs.
- (ii)
In contrast to [8], we do not incorporate the smoothing parameter into the decision variable. We control the smoothing parameter appropriately in each iteration.
This paper is organized as follows. In Section 2, we give some preliminaries, which will be used in the subsequent analysis. In Section 3, we review the SOC C-function. In particular, we recall the property of the (smoothed) Fischer-Burmeister function. In Section 4, we propose an algorithm for solving the SOCCP and discuss its global and local convergence properties. In Section 5, we report some preliminary numerical results.
Throughout the paper, we use the following notation. Let R+ and R++ denote the sets of nonnegative and positive reals, respectively. For a symmetric matrix A, we write A≽O (resp., A≻O) if A is positive semidefinite (resp., positive definite). For any x, y ∈ Rn, we write x≽y (resp., x≻y) if x − y ∈ 𝒦n (resp., x − y ∈ int 𝒦n), and we denote by 〈x, y〉 the Euclidean inner product, that is, 〈x, y〉: = x⊤y. The symbol ∥·∥ denotes the ℓ2-norm of a vector or the corresponding induced matrix norm. We often write x = (x1, x2) ∈ R × Rn−1 (possibly with x2 vacuous), instead of . In addition, we often identify Rp × Rq with Rp+q. We sometimes divide a vector x ∈ Rn according to the Cartesian structure of 𝒦, that is, with . For any Fréchet-differentiable function G : Rn → Rm, we denote its transposed Jacobian matrix at x ∈ Rn by ∇G(x) ∈ Rn×m. For a given set S ⊂ Rn, int S, bd S, and conv S denote the interior, the boundary, and the convex hull of S in Rn, respectively.
2. Some Preliminaries
In this section, we recall some background materials and preliminary results used in the subsequent sections.
Property 1. The following statements hold:
- (a)
Lxy = x∘y = y∘x = Lyx and Lx+y = Lx + Ly for any x, y ∈ Rn;
- (b)
x≽0⇔Lx≽O, and x≻0⇔Lx≻O;
- (c)
Lx is invertible whenever x ∈ int 𝒦n with
()
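As an illustration of Property 1, the arrow matrix Lx and the Jordan product x∘y = (〈x, y〉, x1y2 + y1x2) can be computed directly. The following is a small numerical sketch; the helper names `jordan_prod` and `arrow` are ours, not the paper's:

```python
import numpy as np

def jordan_prod(x, y):
    # Jordan product on R^n: x ∘ y = (<x, y>, x1*y2 + y1*x2)
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def arrow(x):
    # arrow matrix L_x = [[x1, x2^T], [x2, x1*I]]
    L = x[0] * np.eye(x.size)
    L[0, 1:] = x[1:]
    L[1:, 0] = x[1:]
    return L

x = np.array([2.0, 1.0, 0.5])
y = np.array([1.5, -0.3, 0.2])
assert np.allclose(arrow(x) @ y, jordan_prod(x, y))        # Property 1(a): L_x y = x ∘ y
assert np.allclose(jordan_prod(x, y), jordan_prod(y, x))   # x ∘ y = y ∘ x
assert np.allclose(arrow(x + y), arrow(x) + arrow(y))      # L_{x+y} = L_x + L_y
```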
Property 2. For any x = (x1, x2) ∈ R × Rn−1 (n ≥ 1), let λ1(x), λ2(x) and s{1}, s{2} be the spectral values and the associated spectral vectors at x. Then the following statements hold.
- (a)
x ∈ 𝒦n⇔λ1(x) ≥ 0, x ∈ int 𝒦n⇔λ1(x) > 0, x ∈ bd 𝒦n⇔λ1(x) = 0.
- (b)
x² = λ1(x)²s{1} + λ2(x)²s{2} ∈ 𝒦n.
- (c)
Let x ∈ 𝒦n. Then . Moreover, x ∈ int 𝒦n⇔x1/2 ∈ int 𝒦n, x ∈ bd 𝒦n⇔x1/2 ∈ bd 𝒦n.
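The spectral decomposition of Property 2 and the SOC square root of Property 2(c) can be sketched numerically as follows. The functions `spectral` and `soc_sqrt` are our own illustrative helpers, using λi(x) = x1 + (−1)^i∥x2∥ and s{i} = (1/2)(1, (−1)^i x2/∥x2∥):

```python
import numpy as np

def spectral(x):
    # spectral factorization x = lam1*s1 + lam2*s2 with respect to K^n
    x1, x2 = x[0], x[1:]
    norm2 = np.linalg.norm(x2)
    w = np.zeros_like(x2)
    if norm2 > 0:
        w = x2 / norm2
    elif w.size > 0:
        w[0] = 1.0            # any unit vector works when x2 = 0
    lam1, lam2 = x1 - norm2, x1 + norm2
    s1 = 0.5 * np.concatenate(([1.0], -w))
    s2 = 0.5 * np.concatenate(([1.0], w))
    return lam1, lam2, s1, s2

def soc_sqrt(x):
    # Property 2(c): x^{1/2} = sqrt(lam1)*s1 + sqrt(lam2)*s2 for x in K^n
    lam1, lam2, s1, s2 = spectral(x)
    assert lam1 >= 0, "x must lie in the second-order cone"
    return np.sqrt(lam1) * s1 + np.sqrt(lam2) * s2

def jordan_prod(x, y):
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

x = np.array([2.0, 1.0, 0.5])      # lam1(x) = 2 - ||(1, 0.5)|| > 0, so x in int K^3
r = soc_sqrt(x)
assert np.allclose(jordan_prod(r, r), x)   # (x^{1/2}) ∘ (x^{1/2}) = x
```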
In what follows, we recall some definitions for functions and matrices. Semismoothness is a generalized concept of smoothness; it was originally introduced by Mifflin [14] for functionals and extended to vector-valued functions by Qi and Sun [15]. For vector-valued functions associated with the SOC, see also the work of Chen et al. [16]. We now give the definition of the Clarke subdifferential [17].
Definition 1. Let H : Rn → Rm be a locally Lipschitzian function. The Clarke subdifferential of H at x is defined by
Note that if H is continuously differentiable at x, then ∂H(x) = {∇H(x)}. We next give the definitions of the semismoothness and the strong semismoothness.
Definition 2. A directionally differentiable and locally Lipschitzian function H : Rn → Rm is said to be semismooth at x if
The definitions below for a function can be found in [10, 13, 19].
Definition 3 (see [10], [13].)A function F = (F1, …, Fm) with is said to have
- (a)
the Cartesian P0-property, if for any x = (x1, …, xm), with x ≠ y, there exists an index ν ∈ {1, …, m} such that xν ≠ yν and 〈xν − yν, Fν(x) − Fν(y)〉≥0;
- (b)
the uniform Cartesian P-property, if there exists a constant ρ > 0 such that, for any x = (x1, …, xm), , there exists an index ν ∈ {1, …, m} such that 〈xν − yν, Fν(x) − Fν(y)〉≥ρ∥x−y∥2.
By the definitions, it is clear that the uniform Cartesian P-property implies the Cartesian P0-property. Definition 3 is associated with SOCCP (3), while the following definitions are associated with SOCCP (5).
Definition 4 (see [19].)Let F : Rn → Rn and G : Rn → Rn be functions such that F = (F1, …, Fm), G = (G1, …, Gm) with and . Then, F and G are said to have
- (a)
the joint uniform Jordan P-property, if there exists a constant ρ > 0 such that
()
- (b)
the joint Cartesian weak coerciveness, if there exists a vector such that
()
Next we recall the concept of linear growth of a function, which is weaker than the global Lipschitz continuity.
Definition 5 (see [19].)A function F : Rn → Rn is said to have linear growth, if there exists a constant C > 0 such that ∥F(x)∥ ≤ ∥F(0)∥ + C∥x∥ for any x ∈ Rn.
The following definition for a matrix was originally given in [8]; it generalizes the mixed P0-property [1].
Definition 6 (see [8].)Let M ∈ R(n+ℓ)×(2n+ℓ) be a matrix partitioned as M = [M1 M2 M3] with M1, M2 ∈ R(n+ℓ)×n and M3 ∈ R(n+ℓ)×ℓ. Then, M is said to have
- (a)
the Cartesian mixed P0-property, if the following statements hold:
- (1)
M3 has full column rank;
- (2)
for any x = (x1, …, xm), with (x, y) ≠ 0 and p ∈ Rℓ such that M1x + M2y + M3p = 0, there exists an index ν ∈ {1, …, m} such that (xν, yν) ≠ 0 and 〈xν, yν〉≥0;
- (b)
the Cartesian mixed P-property, if (1) of (a) and the following statement hold:
- (2′) for any x = (x1, …, xm), with (x, y) ≠ 0 and p ∈ Rℓ such that M1x + M2y + M3p = 0, there exists an index ν ∈ {1, …, m} such that 〈xν, yν〉>0.
In the case ℓ = 0 (i.e., M3 is vacuous) and M = [M1 − I], M has the Cartesian mixed P0 (P)-property if and only if M1 has the Cartesian P0 (P)-property (see [10, 13], e.g.). By the definitions, it is clear that the Cartesian mixed P-property implies the Cartesian mixed P0-property. Moreover, when n1 = ⋯ = nm = 1, the Cartesian mixed P0-property reduces to the mixed P0-property (see [1, page 1013]).
We now introduce the Cartesian mixed Jordan P0 (P)-property.
Definition 7. Let M ∈ R(n+ℓ)×(2n+ℓ) be a matrix partitioned as M = [M1 M2 M3] with M1, M2 ∈ R(n+ℓ)×n and M3 ∈ R(n+ℓ)×ℓ. Then, M is said to have
- (a)
the Cartesian mixed Jordan P0-property, if the following statements hold:
- (1)
M3 has full column rank;
- (2)
for any x = (x1, …, xm), with (x, y) ≠ 0 and p ∈ Rℓ such that M1x + M2y + M3p = 0, there exists an index ν ∈ {1, …, m} such that (xν, yν) ≠ 0 and xν∘yν≽0;
- (b)
the Cartesian mixed Jordan P-property, if (1) of (a) and the following statement hold:
- (2′) for any x = (x1, …, xm), with (x, y) ≠ 0 and p ∈ Rℓ such that M1x + M2y + M3p = 0, there exists an index ν ∈ {1, …, m} such that xν∘yν≻0.
Note that the relation xν∘yν≽0 (resp., xν∘yν≻0) can be rewritten as λ1(xν∘yν) ≥ 0 (resp., λ1(xν∘yν) > 0). By the definitions, it is clear that the Cartesian mixed Jordan P-property implies the Cartesian mixed Jordan P0-property, and that the Cartesian mixed Jordan P0 (P)-property implies the Cartesian mixed P0 (P)-property. Similar to the Cartesian mixed P0-property, in the case n1 = ⋯ = nm = 1, the Cartesian mixed Jordan P0-property also reduces to the mixed P0-property.
3. SOC C-Function and Its Smoothing Function
In this section, we introduce the SOC C-function and its smoothing function. In Section 3.1, we give the concept of the SOC C-function to transform the SOCCP into a system of equations. We focus on the Fischer-Burmeister function as an SOC C-function and review some properties of the smoothed Fischer-Burmeister function in Section 3.2.
3.1. SOC C-Function
First, we recall the concept of the SOC C-function.
Definition 8. A function ϕ : Rr × Rr → Rr is said to be an SOC complementarity (C-) function, if the following holds:
Let be defined as
There are many kinds of SOC C-functions. The natural residual function and the Fischer-Burmeister function are respectively defined by
In what follows, functions ϕFB, HFB, and ΨFB denote , , and with , respectively. Also, functions ϕNR, HNR, and ΨNR denote , , and with , respectively.
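The Fischer-Burmeister SOC C-function φFB(x, y) = x + y − (x∘x + y∘y)^{1/2} can be evaluated via the spectral square root. The sketch below also includes a smoothed variant with parameter t; the smoothing term 2t²e inside the root is one common form from the SOC literature and is an assumption here, not necessarily the exact smoothed function Ht of this paper:

```python
import numpy as np

def soc_sqrt(x):
    # square root in K^n via the spectral decomposition
    x1, x2 = x[0], x[1:]
    norm2 = np.linalg.norm(x2)
    w = np.zeros_like(x2)
    if norm2 > 0:
        w = x2 / norm2
    elif w.size > 0:
        w[0] = 1.0
    lam1, lam2 = x1 - norm2, x1 + norm2
    assert lam1 >= 0, "argument must lie in the second-order cone"
    s1 = 0.5 * np.concatenate(([1.0], -w))
    s2 = 0.5 * np.concatenate(([1.0], w))
    return np.sqrt(lam1) * s1 + np.sqrt(lam2) * s2

def jordan_sq(x):
    # x ∘ x = (||x||^2, 2*x1*x2)
    return np.concatenate(([x @ x], 2.0 * x[0] * x[1:]))

def phi_fb(x, y, t=0.0):
    # Fischer-Burmeister function: x + y - (x^2 + y^2)^{1/2}; for t != 0
    # the term 2*t^2*e is added inside the root as an (assumed) smoothing.
    e = np.zeros_like(x)
    e[0] = 1.0
    return x + y - soc_sqrt(jordan_sq(x) + jordan_sq(y) + 2.0 * t ** 2 * e)

# x, y on bd K^2 with <x, y> = 0: a complementary pair, so phi_fb vanishes
assert np.allclose(phi_fb(np.array([1.0, 1.0]), np.array([1.0, -1.0])), 0.0)
```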
Recently, Bi et al. [20] showed the following inequality:
3.2. Smoothed FB Function and Its Properties
In this section, we consider the smoothing function associated with the Fischer-Burmeister function and give its properties and Jacobian matrix.
Now we review some propositions needed to establish the convergence properties of the smoothing Newton method. The following proposition gives an explicit expression of the transposed Jacobian matrix for t ≠ 0.
Proposition 9 (see [5].)For any t ≠ 0, Ht is continuously differentiable on R2n+ℓ, and its transposed Jacobian matrix is given by
In order to obtain the Newton step, the nonsingularity of ∇Ht is important. The next proposition establishes the nonsingularity of ∇Ht.
Proposition 10 (see [8].)Let t be an arbitrary nonzero number and let (x, y, p) ∈ R2n+ℓ be an arbitrary triple such that the Jacobian matrix ∇F(x, y, p) ⊤ has the Cartesian mixed P0-property at (x, y, p), that is, ∇pF(x, y, p) satisfies
The local Lipschitz continuity and the (strong) semismoothness of HFB play a significant role in establishing locally rapid convergence.
Proposition 11. The function HFB is locally Lipschitzian on R2n+ℓ and, moreover, is semismooth on R2n+ℓ. In addition, if ∇F is locally Lipschitzian, then HFB is strongly semismooth on R2n+ℓ.
Proof. It follows from [23, Corollary 3.3] that is globally Lipschitzian and strongly semismooth. Since F is a continuously differentiable function, HFB is locally Lipschitzian on R2n+ℓ. Also, the (strong) semismoothness of HFB can be easily shown from the strong semismoothness of and the (local Lipschitz) continuity of ∇F.
Proposition 12 (see [22].)Let (x, y, p) be any point in R2n+ℓ. Let θi(xi, yi) be any function such that Θi(xi, yi) ≤ θi(xi, yi). Let δ > 0 be given. Let be defined by
4. Smoothing Newton Method and Its Convergence Properties
In this section, we first propose an algorithm of the smoothing Newton method based on the Fischer-Burmeister function and its smoothing function. We then prove its global and Q-superlinear (Q-quadratic) convergence.
4.1. Algorithm
We provide the smoothing Newton algorithm based on the Fischer-Burmeister function. In what follows, we write v(k) = (x(k), y(k), p(k)) and z(k) = (x(k), y(k)) for simplicity.
Algorithm 13.
Step 0. Choose η, ρ ∈ (0,1), , σ ∈ (0,1/2), r > 1, κ > 0, and .
Choose v(0) = (x(0), y(0), p(0)) ∈ R2n+ℓ and β0 ∈ (0, ∞). Let t0 : = ∥HFB(v(0))∥. Set k : = 0.
Step 1. If a stopping criterion, such as ∥HFB(v(k))∥ = 0, is satisfied, then stop.
Step 2.
Step 2.0. Set and j : = 0.
Step 2.1. Compute by solving
Step 2.2. If , then let and go to Step 3. Otherwise, go to Step 2.3.
Step 2.3. Let lj be the smallest nonnegative integer l satisfying
Let and .
Step 2.4. If
Step 3. Update the parameters as follows:
Step 4. Set k : = k + 1. Go back to Step 1.
Note that the proposed algorithm consists of outer and inner iteration steps. Step 2 constitutes the inner iterations, with its own iterate and the counter j, while the outer iterations use the variable v and the counter k.
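The overall structure of Algorithm 13 can be illustrated by the following simplified Python sketch. The acceptance test of Step 2.2, the exact line-search constants, and the update rule of Step 3 are replaced by generic stand-ins, so this is a schematic outline rather than the paper's exact method; the toy instance `H`, `jac`, and all parameter names are ours:

```python
import numpy as np

def smoothing_newton(H, jac, v0, t0, kappa=1.0, r=2.0, rho=0.5, sigma=1e-4,
                     tol=1e-10, max_iter=100):
    # Schematic smoothing Newton loop: take a damped Newton step on the
    # smoothed map H(., t) and drive the smoothing parameter t to zero
    # using the unsmoothed residual.
    v, t = v0.astype(float), float(t0)
    for _ in range(max_iter):
        res = np.linalg.norm(H(v, 0.0))
        if res <= tol:                               # Step 1: stopping criterion
            break
        d = np.linalg.solve(jac(v, t), -H(v, t))     # Step 2.1: Newton direction
        merit = 0.5 * np.linalg.norm(H(v, t)) ** 2
        tau = 1.0
        while 0.5 * np.linalg.norm(H(v + tau * d, t)) ** 2 > (1 - 2 * sigma * tau) * merit:
            tau *= rho                               # Step 2.3: Armijo-type backtracking
        v = v + tau * d
        t = min(t * rho, kappa * res ** r)           # Step 3: parameter control
    return v

# Toy 1-D instance: complementarity between x >= 0 and y >= 0 with y = x - 2;
# the scalar smoothed FB equation is x + y - sqrt(x^2 + y^2 + 2*t^2) = 0.
def H(v, t):
    x, y = v
    return np.array([y - (x - 2.0),
                     x + y - np.sqrt(x * x + y * y + 2.0 * t * t)])

def jac(v, t):
    x, y = v
    den = np.sqrt(x * x + y * y + 2.0 * t * t)
    return np.array([[-1.0, 1.0],
                     [1.0 - x / den, 1.0 - y / den]])

v = smoothing_newton(H, jac, np.array([1.0, 1.0]), t0=1.0)
assert np.allclose(v, [2.0, 0.0], atol=1e-8)   # solution: x = 2, y = 0
```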
From Step 3 of Algorithm 13 and (48), the following inequality holds:
In the rest of this section, we consider convergence properties of Algorithm 13. In Section 4.2, we prove the global convergence of the algorithm, and in Section 4.3, we investigate local behavior of the algorithm. For this purpose, we make the following assumptions.
Assumption 1.
- (A1)
The solution set 𝒮 of SOCCP (1) is nonempty and bounded.
- (A2)
The function ΨFB is level-bounded, that is, for any , the level set is bounded.
- (A3)
For any t > 0 and v ∈ R2n+ℓ, ∇Ht(v) is nonsingular.
From Proposition 10, Assumption (A3) holds if ∇F(v) ⊤ has the Cartesian mixed P0-property for any v ∈ R2n+ℓ. The following remarks correspond to SOCCPs (3) and (5).
Remark 14. The case of SOCCP (3). If ∇f(x) ⊤ has the Cartesian P0-property, then ∇F(x, y, p) ⊤ with F in (4) has the Cartesian mixed P0-property and vice versa. Note that ∇f(x) ⊤ has the Cartesian P0-property at any x ∈ Rn if f has the Cartesian P0-property (see [10, 13] for the definition of the Cartesian P0-property for a matrix).
Remark 15. The case of SOCCP (5). If ∇f(p) is nonsingular and (∇f(p) −1∇g(p)) ⊤ has the Cartesian P0-property, then ∇F(x, y, p) ⊤ with F in (6) has the Cartesian mixed P0-property.
Lemma 16. Consider SOCCP (5). Let be a function such that for any p ∈ Rn. Then is level-bounded if and only if is level-bounded.
Proof. Below we use condition (55), which is equivalent to level-boundedness. We first assume that is level-bounded and claim that ΨFB is level-bounded. Suppose the contrary. Then, we can find a sequence {v(k)} = {(x(k), y(k), p(k))} ⊂ R3n such that {ΨFB(v(k))} is bounded and ∥v(k)∥ → ∞. If {∥p(k)∥} is bounded, then we must have ∥(x(k), y(k))∥ → ∞. Thus, from the inequality
We next assume that ΨFB is level-bounded. Let {p(k)} ⊂ Rn be an arbitrary sequence such that ∥p(k)∥ → ∞ and let x(k) : = f(p(k)) and y(k) : = g(p(k)). Then we have
Remark 17. Consider SOCCP (3). Let be a function such that for any x ∈ Rn. Then, in the same way as in Lemma 16, we can show that is level-bounded if and only if is level-bounded (we have only to consider g(p) ≡ p).
We now provide some sufficient conditions under which Assumption (A2) holds.
Proposition 18. Consider SOCCP (5). Assume that f and g have linear growth. Assume further that f and g satisfy one of the following statements:
- (a)
f and g have the joint uniform Jordan P-property;
- (b)
f and g have the joint Cartesian weak coerciveness.
Proof. It follows from [19] that is level-bounded for each case. Thus from Lemma 16 and (29), we have desired results.
The following condition was given by Pan and Chen [13] to establish the level-boundedness property of the merit function defined in Remark 17.
Proposition 19. Consider SOCCP (3). Assume that f has the uniform Cartesian P-property and satisfies Condition A. Then ΨFB is level-bounded.
4.2. Global Convergence
In this section, we show the global convergence of Algorithm 13. We first give the well-definedness of the algorithm.
Lemma 20. Suppose that Assumption (A3) holds. Let t be any fixed positive number. Every stationary point of Ψt satisfies .
Proof. For each stationary point of Ψt, holds. Since, from t > 0 and Assumption (A3), is nonsingular, we have , and thus, .
We are now ready to show the well-definedness of the algorithm.
Proposition 21. Suppose that Assumptions (A2) and (A3) hold. Then Algorithm 13 is well-defined.
Proof. To establish the well-definedness of Algorithm 13, we only need to prove the well-definedness and the finite termination property of Step 2 at each iteration. Now we fix k and tk > 0. Since is nonsingular for any by tk > 0 and Assumption (A3), is uniquely determined for any j ≥ 0. In addition, we have
Next we prove the finite termination property of Step 2. Suppose, for contradiction, that Step 2 never stops; then
- (i)
The case where there exists a subsequence J such that lim j∈J,j→∞τj = 0. From the boundedness of the level set of at and the line search rule (50), is bounded. In addition, from the continuous differentiability of and Assumption (A3), is also bounded. Thus, there exists a subsequence J′ ⊂ J such that
()
Now τj ≠ 1 holds for all sufficiently large j ∈ J′, and hence, we have
()
Passing to the limit j → ∞ with j ∈ J′ in the above inequality and taking (62) into account, we have
()
On the other hand, it follows from (61) that , which contradicts (65).
- (ii)
The case where there exists such that for all j. It follows from (50) that
()
which implies holds for sufficiently large j. This contradicts (62). Therefore, the proof is complete.
In order to show the global convergence of the proposed method, we recall the mountain pass theorem (see [24], e.g.), which is as follows.
Lemma 22. Let φ : Rn → R be a continuously differentiable and level-bounded function. Let 𝒞 ⊂ Rn be a nonempty and compact set and let be the minimum value of φ on bd 𝒞, that is,
By using the mountain pass theorem, we can show the following global convergence property.
Theorem 23. Suppose that Assumptions (A1)–(A3) hold. Then, the sequence {v(k)} generated by Algorithm 13 is bounded, and hence at least one accumulation point exists; moreover, any such point is a solution of SOCCP (1).
Proof. From the choices of tk and βk in Step 3 of Algorithm 13, both tk and βk converge to zero. Since βk → 0, we have . Thus, it follows from tk → 0 and (35) that lim k→∞∥HFB(v(k))∥ = 0. Hence, by the continuity of HFB, any accumulation point of the sequence {v(k)} is a solution of HFB(v) = 0, and it suffices to prove the boundedness of {v(k)}. To the contrary, assume that {v(k)} is unbounded. Then there exist an index set K and a subsequence {v(k)} k∈K such that lim k∈K,k→∞ ∥v(k)∥ = ∞. Since, by Assumption (A1), the solution set 𝒮 is bounded, there exists a compact neighborhood 𝒞 of 𝒮 such that 𝒮 ⊂ int 𝒞. From the boundedness of 𝒞, we have v(k) ∉ 𝒞 for all sufficiently large k ∈ K. In addition, from 𝒮 ⊂ int 𝒞, we have
Now we choose sufficiently large satisfying all the above arguments with and apply Lemma 22 with
4.3. Local Q-Superlinear and Q-Quadratic Convergence
In Section 4.2, we have shown that the sequence {v(k)} is bounded, and any accumulation point of {v(k)} is a solution of SOCCP (1). In this section, we prove that {v(k)} is superlinearly convergent, or more strongly, quadratically convergent if ∇F is locally Lipschitzian. In order to establish the superlinear (quadratic) convergence of the algorithm, we need an assumption that every accumulation point of is nonsingular. We first consider a sufficient condition for this assumption to hold.
Let {v(k)} and {tk} be the sequences generated by Algorithm 13, and let v* = (x*, y*, p*) be any accumulation point of {v(k)}. Then, by Theorem 23, v* is a solution of SOCCP (1). We call the following condition nondegeneracy of a solution of the SOCCP (see also [13, 25]).
Definition 24. Let v* = (x*, y*, p*) ∈ R2n+ℓ be a solution of SOCCP (1) with x* = (x*1, …, x*m), . Then we say that v* is nondegenerate if x* + y*≻0, or equivalently, x*i + y*i≻0 for all i = 1, …, m.
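By Property 2(a), the nondegeneracy condition x*i + y*i ≻ 0 of Definition 24 is equivalent to λ1(x*i + y*i) > 0 for every block i, so it can be checked numerically. A small sketch (the helper names are ours):

```python
import numpy as np

def lam1(z):
    # smallest spectral value of z = (z1, z2): lam1(z) = z1 - ||z2||
    return z[0] - np.linalg.norm(z[1:])

def is_nondegenerate(x_blocks, y_blocks):
    # v* is nondegenerate iff x*_i + y*_i lies in int K^{n_i} for every i,
    # i.e. lam1(x*_i + y*_i) > 0 (Property 2(a))
    return all(lam1(xi + yi) > 0 for xi, yi in zip(x_blocks, y_blocks))

# x on bd K^2 and y on bd K^2, yet x + y = (2, 0) in int K^2: nondegenerate
x = [np.array([1.0, 1.0])]
y = [np.array([1.0, -1.0])]
assert is_nondegenerate(x, y)
```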
For a nondegenerate solution, we have the next lemma.
Lemma 25. Let v* = (x*, y*, p*) ∈ R2n+ℓ be a nondegenerate solution of SOCCP (1), and put zi = (xi, yi), for i = 1, …, m. Let t ∈ R be a nonzero number. Then, for each i, the following holds:
Proof. Since v* is a solution of SOCCP (1), it follows from [5, Proposition 4.2] that x*i + y*i − ((x*i)² + (y*i)²)1/2 = 0 for all i = 1, …, m. Hence, from the nondegeneracy of v*, we have
The following proposition gives a sufficient condition for the nonsingularity of accumulation points of .
Proposition 26. Suppose that Assumptions (A1)–(A3) hold. Let {v(k)} be a sequence generated by Algorithm 13 and let v* be any accumulation point of it. Moreover, assume that v* is nondegenerate and the Jacobian matrix ∇F(v*) ⊤ has the Cartesian mixed Jordan P-property, that is, rank ∇pF(v*) = ℓ and
Proof. By Theorem 23, the sequence {v(k)} is bounded and has at least one accumulation point v*. Hence, we may assume that {v(k)} converges to v* without loss of generality. It follows from Lemma 25 and tk → 0 that any accumulation point of , say J0, is given in the following form:
Next, we show the local convergence properties of the sequence {v(k)} generated by Algorithm 13. The following lemma plays a key role in proving such properties.
Lemma 27. Suppose that Assumptions (A1)–(A3) hold. Let {v(k)} be a sequence generated by Algorithm 13 and let v* be any accumulation point of it. In addition, assume that every accumulation point of is nonsingular. Then, for v(k) sufficiently close to v*,
Proof. From Theorem 23, {v(k)} is bounded, and hence, there exists at least one accumulation point, and any such point v* satisfies HFB(v*) = 0. Since every accumulation point of is nonsingular, we have, from Assumption (A3), that there exists a constant c1 > 0 such that
Using Lemma 27, we obtain the local Q-superlinear (Q-quadratic) convergence result.
Theorem 28. Suppose that Assumptions (A1)–(A3) hold. Let {v(k)} be a sequence generated by Algorithm 13. If every accumulation point of is nonsingular, then the following statements hold.
- (a)
For all k sufficiently large, is satisfied at j = 0 in Step 2.2 of Algorithm 13. Moreover, for all k sufficiently large, v(k+1) = v(k) + d(k) holds, where .
- (b)
The whole sequence {v(k)} converges Q-superlinearly to a solution v* of SOCCP (1). Moreover, if ∇F is locally Lipschitz continuous and r ≥ 2, then the sequence {v(k)} converges Q-quadratically.
Proof. Since (b) is directly obtained from (a) and Lemma 27, it suffices to prove (a). Namely, we prove that for all k sufficiently large. We have from (93) that
5. Numerical Experiments
| 𝒦 | n | ℓ | Our method (Iter) | Our method (CPU) | SDPT3 (Iter) | SDPT3 (CPU) |
|---|---|---|---|---|---|---|
|  | 20 | 5 | 8.99 | 0.063 | 8.84 | 0.070 |
|  | 50 | 10 | 8.28 | 0.058 | 9.82 | 0.078 |
|  | 400 | 100 | 7.02 | 1.521 | 10.19 | 0.291 |
|  | 1000 | 200 | 7.01 | 8.253 | 14.85 | 2.512 |
In order to confirm the local behavior of the sequence generated by Algorithm 13, we list the value of at each outer iteration k in Table 2. In addition, to investigate how the parameter r affects the rate of convergence, we ran the algorithm with r = 1, 1.5, 2. We also investigated the relation between the choice of r and the behavior of {tk}, so we list the behavior of {tk} as well. We chose one of the above test problems in the case 𝒦 = 𝒦500 × 𝒦200 × (𝒦100)3. Note that, for example, 2.66e−09 means 2.66 × 10−9. We see from Table 2 that the sequence generated by Algorithm 13 seems to converge Q-quadratically and that the choice of r does not affect the convergence of the sequence. On the other hand, the choice of r does affect the behavior of {tk}.
| k (r = 1) | j | tk |  | k (r = 1.5) | j | tk |  | k (r = 2) | j | tk |  |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 1.00e+00 | 1.62e+06 | 1 | 0 | 1.00e+00 | 1.62e+06 | 1 | 0 | 1.00e+00 | 1.62e+06 |
| 1 | 1 | 1.00e+00 | 4.35e+03 | 1 | 1 | 1.00e+00 | 3.80e+03 | 1 | 1 | 1.00e+00 | 3.77e+03 |
| 1 | 2 | 1.00e+00 | 8.03e+02 | 1 | 2 | 1.00e+00 | 7.38e+02 | 1 | 2 | 1.00e+00 | 7.32e+02 |
| 1 | 3 | 1.00e+00 | 1.37e+02 | 1 | 3 | 1.00e+00 | 1.27e+02 | 1 | 3 | 1.00e+00 | 1.27e+02 |
| 1 | 4 | 1.00e+00 | 1.38e+01 | 1 | 4 | 1.00e+00 | 1.30e+01 | 1 | 4 | 1.00e+00 | 1.30e+01 |
| 1 | 5 | 1.00e+00 | 1.38e+01 | 1 | 5 | 1.00e+00 | 1.30e+01 | 1 | 5 | 1.00e+00 | 1.30e+01 |
| 2 | 1 | 1.00e−01 | 2.85e−01 | 2 | 1 | 1.00e−01 | 2.54e−01 | 2 | 1 | 6.57e−02 | 2.56e−01 |
| 3 | 1 | 1.17e−04 | 1.17e−04 | 3 | 1 | 8.80e−07 | 9.19e−05 | 3 | 1 | 9.71e−09 | 9.86e−05 |
| 4 | 1 | 2.66e−09 | 2.66e−09 | 4 | 1 | 1.20e−13 | 2.43e−09 | 4 | 1 | 5.39e−18 | 2.32e−09 |
| Iter | CPU | ρ1 | ρ2 | x1 | x2 |
|---|---|---|---|---|---|
| 8 | 0.078 | 0.2 | 0.2 | (0.3916, 0.6083, 0) | (0.3247, 0.3625, 0.3128) |
| 8 | 0.094 | 0.4 | 0.4 | (0.4168, 0.5832, 0) | (0.3354, 0.3152, 0.3495) |
| 7 | 0.078 | 0.6 | 0.6 | (0.4292, 0.5708, 0) | (0.3477, 0.2821, 0.3702) |
| 7 | 0.078 | 0.8 | 0.8 | (0.4017, 0.5065, 0.0918) | (0.3686, 0.2503, 0.3812) |
| 9 | 0.078 | 1.0 | 1.0 | (0.3627, 0.4589, 0.1784) | (0.3891, 0.2258, 0.3851) |
6. Conclusion
In this paper, we have proposed a smoothing Newton method with appropriate parameter control based on the Fischer-Burmeister function for solving the SOCCP. We have shown its global and Q-quadratic convergence properties under some assumptions and have considered sufficient conditions for those assumptions to hold. The numerical experiments confirmed the effectiveness of the proposed method.
Acknowledgments
The authors would like to thank the academic editor Prof. Jein-Shan Chen and the anonymous reviewers who helped them improve the original paper. The first author is supported in part by the Grant-in-Aid for Scientific Research (C) 25330030 and Young Scientists (B) 25870239 of Japan Society for the Promotion of Science. The third author is supported in part by the Grant-in-Aid for Young Scientists (B) 22760064 of Japan Society for the Promotion of Science.