Volume 2013, Article ID 250943
Research Article
Open Access

Properties and Iterative Methods for the Q-Lasso

Maryam A. Alghamdi
Department of Mathematics, Sciences Faculty for Girls, King Abdulaziz University, P.O. Box 4087, Jeddah 21491, Saudi Arabia

Mohammad Ali Alghamdi
Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia

Naseer Shahzad
Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia

Hong-Kun Xu (Corresponding Author)
Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
First published: 31 December 2013
Academic Editor: Chi-Keung Ng

Abstract

We introduce the Q-lasso, which generalizes the well-known lasso of Tibshirani (1996), with Q a closed convex subset of a Euclidean m-space for some integer m ≥ 1. The set Q can be interpreted as the set of errors within a given tolerance level when linear measurements are taken to recover a signal/image via the lasso. Solutions of the Q-lasso depend on a tuning parameter γ. In this paper, we obtain basic properties of the solutions as a function of γ. Because of ill-posedness, we also apply an ℓ₁-ℓ₂ regularization to the Q-lasso. In addition, we discuss iterative methods for solving the Q-lasso, which include the proximal-gradient algorithm and the projection-gradient algorithm.

1. Introduction

The lasso of Tibshirani [1] is the minimization problem
min_{x∈ℝⁿ} (1/2)∥Ax − b∥₂² + γ∥x∥₁,    (1)
where A is an m × n (real) matrix, b ∈ ℝᵐ, and γ > 0 is a tuning parameter. It is equivalent to the basis pursuit (BP) of Chen et al. [2]:
min_{x∈ℝⁿ} ∥x∥₁  subject to  Ax = b.    (2)
It is well known that both the lasso and BP model a number of applied problems arising from machine learning, signal/image processing, and statistics, due to the fact that they promote the sparsity of a signal x ∈ ℝⁿ. Sparsity is a common phenomenon in practical problems, since a solution often has a sparse representation in terms of an appropriate basis, and it has therefore received much attention.
Observe that both the lasso (1) and BP (2) can be viewed as the ℓ₁ regularization applied to the inverse linear system in ℝⁿ:
Ax = b.    (3)
In sparse recovery, the system (3) is underdetermined (i.e., m < n and indeed often m ≪ n). The theory of compressed sensing of Donoho [3] and Candès et al. [4, 5] makes the breakthrough that, under certain conditions, the underdetermined system (3) can determine a unique k-sparse solution. (Recall that a signal x ∈ ℝⁿ is said to be k-sparse if the number of nonzero entries of x is no larger than k.)
However, due to measurement errors, the system (3) is actually inexact: Ax ≈ b. It turns out that BP (2) is reformulated as
min_{x∈ℝⁿ} ∥x∥₁  subject to  ∥Ax − b∥ ≤ ε,    (4)
where ε > 0 is the tolerance level of errors and ∥·∥ is a norm on ℝᵐ (often it is the ℓ_p norm ∥·∥_p for p = 1, 2, ∞; a solution to (4) when the tolerance is measured by the norm ∥·∥_∞ is known as the Dantzig selector of Candès and Tao [6]; see also [7]).
Note that if we let Q := B_ε(b) be the closed ball in ℝᵐ centered at b with radius ε, then (4) is rewritten as
min_{x∈ℝⁿ} ∥x∥₁  subject to  Ax ∈ Q.    (5)
Now let Q be a nonempty closed convex subset of ℝᵐ and let P_Q be the projection from ℝᵐ onto Q. Then, noticing that the condition Ax ∈ Q is equivalent to the condition Ax − P_Q(Ax) = 0, we see that problem (5) is solved via
min_{x∈ℝⁿ} ∥x∥₁  subject to  (1/2)∥(I − P_Q)Ax∥₂² = 0.    (6)
Applying the Lagrangian method, we arrive at the following equivalent minimization:
min_{x∈ℝⁿ} (1/2)∥(I − P_Q)Ax∥₂² + γ∥x∥₁,    (7)
where γ > 0 is a Lagrangian multiplier (also interpreted as a regularization parameter).
Alternatively, we may view (7) as the ℓ₁ regularization of the inclusion
Ax ∈ Q,    (8)
which extends the linear system (3) in an obvious way. We refer to problem (7) as the Q-lasso since it is the ℓ₁ regularization of inclusion (8), just as the lasso (1) is the ℓ₁ regularization of the linear system (3). Throughout the rest of this paper, we always assume that (8) is consistent (i.e., solvable).
The Q-lasso (7) is also connected with the so-called split feasibility problem (SFP) of Censor and Elfving [8] (see also [9]), which is stated as finding a point x with the property
x ∈ C,  Ax ∈ Q,    (9)
where C and Q are closed convex subsets of ℝⁿ and ℝᵐ, respectively. An equivalent minimization formulation of the SFP (9) is given as
min_{x∈C} (1/2)∥(I − P_Q)Ax∥₂².    (10)
Its ℓ₁ regularization is given as the minimization
min_{x∈C} (1/2)∥(I − P_Q)Ax∥₂² + γ∥x∥₁,    (11)
where γ > 0 is a regularization parameter. Problem (7) is a special case of (11) when the set of constraints, C, is taken to be the entire space ℝⁿ.

The purpose of this paper is to study the behavior, in terms of γ > 0, of solutions to the regularized problem (7). (We leave the more general problem (11) to further work, since the involvement of another closed convex set C brings technical difficulties which are not easy to overcome.) We discuss iterative methods for solving the Q-lasso, including the proximal-gradient method and the projection-gradient method, the latter being derived via a duality technique. Due to ill-posedness, we also apply an ℓ₁-ℓ₂ regularization to the Q-lasso.

2. Preliminaries

Let n ≥ 1 be an integer and let ℝⁿ be the Euclidean n-space. If p ≥ 1, we use ∥·∥_p to denote the ℓ_p norm on ℝⁿ. Namely, for x = (x_j)ᵗ ∈ ℝⁿ,
∥x∥_p = (Σ_{j=1}^{n} |x_j|^p)^{1/p}.    (12)
Let K be a closed convex subset of ℝⁿ. Recall that the projection from ℝⁿ onto K is defined as the operator
P_K x = argmin_{y∈K} ∥x − y∥₂,  x ∈ ℝⁿ.    (13)
The projection P_K is characterized as follows:
z = P_K x  if and only if  ⟨x − z, y − z⟩ ≤ 0 for all y ∈ K.    (14)
Projections are nonexpansive. Namely, we have the following.

Proposition 1. One has that P_K is firmly nonexpansive in the sense that

⟨P_K x − P_K y, x − y⟩ ≥ ∥P_K x − P_K y∥₂²  for all x, y ∈ ℝⁿ.    (15)
In particular, P_K is nonexpansive; that is, ∥P_K x − P_K y∥ ≤ ∥x − y∥ for all x, y ∈ ℝⁿ.
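For a concrete closed convex set such as a Euclidean ball, P_K has a simple closed form and inequality (15) can be checked numerically. The following Python sketch is our own illustration (not part of the paper); the function name project_ball and the test data are assumptions made for the example.

    import numpy as np

    def project_ball(x, center, radius):
        # Projection of x onto the closed ball {y : ||y - center||_2 <= radius}.
        d = x - center
        nd = np.linalg.norm(d)
        return x.copy() if nd <= radius else center + (radius / nd) * d

    rng = np.random.default_rng(0)
    center, radius = np.zeros(5), 1.0
    x, y = 3.0 * rng.standard_normal(5), 3.0 * rng.standard_normal(5)
    Px, Py = project_ball(x, center, radius), project_ball(y, center, radius)

    # Firm nonexpansiveness (15): <P_K x - P_K y, x - y> >= ||P_K x - P_K y||^2.
    print(np.dot(Px - Py, x - y) >= np.linalg.norm(Px - Py) ** 2 - 1e-12)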

Recall that a function f : ℝⁿ → ℝ is convex if
f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y)    (16)
for all λ ∈ (0,1) and x, y ∈ ℝⁿ. (Note that we only consider finite-valued functions.)
The subdifferential of a convex function f is defined as the operator ∂f given by
∂f(x) = {ξ ∈ ℝⁿ : f(y) ≥ f(x) + ⟨ξ, y − x⟩ for all y ∈ ℝⁿ}.    (17)
The inequality in (17) is referred to as the subdifferential inequality of f at x. We say that f is subdifferentiable at x if ∂f(x) is nonempty. It is well known that, for an everywhere finite-valued convex function f on ℝⁿ, f is everywhere subdifferentiable.
Examples. (i) If f(x) = |x| for x ∈ ℝ, then ∂f(0) = [−1, 1]; (ii) if f(x) = ∥x∥₁ for x ∈ ℝⁿ, then ∂f(x) is given componentwise by
(∂f(x))_j = sgn(x_j) if x_j ≠ 0,  and  (∂f(x))_j = [−1, 1] if x_j = 0,  j = 1, …, n.    (18)
Here sgn is the sign function; that is, for a ∈ ℝ,
sgn(a) = 1 if a > 0,  sgn(a) = 0 if a = 0,  sgn(a) = −1 if a < 0.    (19)
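As a small illustration (ours, not from the paper), one particular element of the subdifferential (18) can be computed by choosing the value 0 from the interval [−1, 1] at zero components; the function name l1_subgradient is hypothetical.

    import numpy as np

    def l1_subgradient(x):
        # One subgradient of ||.||_1 at x via (18)-(19): sgn(x_j) where x_j != 0,
        # and the particular choice 0 from [-1, 1] where x_j = 0.
        return np.sign(x)

    print(l1_subgradient(np.array([2.0, 0.0, -3.5])))   # -> [ 1.  0. -1.]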
Consider the unconstrained minimization problem
min_{x∈ℝⁿ} f(x).    (20)
The following are well known.

Proposition 2. Let f be everywhere finite-valued on ℝⁿ.

  • (i)

    If f is strictly convex, then (20) admits at most one solution.

  • (ii)

    If f is convex and satisfies the coercivity condition

    lim_{∥x∥→∞} f(x) = ∞,    (21)

  • then there exists at least one solution to (20). Therefore, if f is both strictly convex and coercive, there exists one and only one solution to (20).

Proposition 3. Let f be everywhere finite-valued convex on ℝⁿ and z ∈ ℝⁿ. Suppose f is bounded below (i.e., inf{f(x) : x ∈ ℝⁿ} > −∞). Then z is a solution to minimization (20) if and only if it satisfies the first-order optimality condition:

0 ∈ ∂f(z).    (22)

3. Properties of the Q-Lasso

We study some basic properties of the Q-lasso, which is repeated below:
min_{x∈ℝⁿ} φ_γ(x) := (1/2)∥(I − P_Q)Ax∥₂² + γ∥x∥₁,    (23)
where γ > 0 is a regularization parameter. We also consider the following minimization (we call it the Q-least squares problem):
min_{x∈ℝⁿ} (1/2)∥(I − P_Q)Ax∥₂².    (24)
Denote by S and S_γ the solution sets of (24) and (23), respectively. Since φ_γ is continuous, convex, and coercive (i.e., φ_γ(x) → ∞ as ∥x∥₂ → ∞), S_γ is closed, convex, and nonempty. Notice also that since we assume the consistency of (8), we have S ≠ ∅; moreover, the solution sets of (8) and (24) coincide.

Observe that the assumption S ≠ ∅ actually implies that S_γ is uniformly bounded in γ > 0, as shown by the lemma below.

Lemma 4. Assume that (24) is consistent (i.e., S ≠ ∅). Then, for γ > 0 and x_γ ∈ S_γ, one has ∥x_γ∥₁ ≤ min_{x∈S} ∥x∥₁.

Proof. Let x_γ ∈ S_γ. In the relation

φ_γ(x_γ) ≤ φ_γ(x),  x ∈ ℝⁿ,    (25)
taking x ∈ S yields (since Ax ∈ Q, so that (I − P_Q)Ax = 0)
(1/2)∥(I − P_Q)Ax_γ∥₂² + γ∥x_γ∥₁ ≤ γ∥x∥₁.    (26)
It follows that
∥x_γ∥₁ ≤ ∥x∥₁,  x ∈ S.    (27)
This proves the conclusion of the lemma.

Proposition 5. One has the following.

  • (i)

    The functions

    ρ(γ) := ∥x_γ∥₁,   η(γ) := ∥(I − P_Q)Ax_γ∥₂,   x_γ ∈ S_γ,    (28)

  • are well defined for γ > 0. That is, they do not depend upon the particular choice of x_γ ∈ S_γ.

  • (ii)

    The function ρ(γ) is decreasing in γ > 0.

  • (iii)

    The function η(γ) is increasing in γ > 0.

  • (iv)

    (I − P_Q)Ax_γ is continuous in γ > 0.

Proof. For x_γ ∈ S_γ, we have the optimality condition

0 ∈ Aᵗ(I − P_Q)Ax_γ + γ∂∥x_γ∥₁.    (29)
Here Aᵗ is the transpose of A and ∂ stands for the subdifferential in the sense of convex analysis. Equivalently,
−(1/γ)Aᵗ(I − P_Q)Ax_γ ∈ ∂∥x_γ∥₁.    (30)
It follows by the subdifferential inequality that
γ∥x∥₁ ≥ γ∥x_γ∥₁ − ⟨Aᵗ(I − P_Q)Ax_γ, x − x_γ⟩,  x ∈ ℝⁿ.    (31)
In particular, for another solution x̃_γ ∈ S_γ,
γ∥x̃_γ∥₁ ≥ γ∥x_γ∥₁ − ⟨Aᵗ(I − P_Q)Ax_γ, x̃_γ − x_γ⟩.    (32)
Interchanging x_γ and x̃_γ, we get
γ∥x_γ∥₁ ≥ γ∥x̃_γ∥₁ − ⟨Aᵗ(I − P_Q)Ax̃_γ, x_γ − x̃_γ⟩.    (33)
Adding up (32) and (33) yields
⟨(I − P_Q)Ax_γ − (I − P_Q)Ax̃_γ, Ax_γ − Ax̃_γ⟩ ≤ 0.    (34)
Since I − P_Q is firmly nonexpansive, (34) forces (I − P_Q)Ax_γ = (I − P_Q)Ax̃_γ. Moreover, (32) and (33) then imply that ∥x̃_γ∥₁ ≥ ∥x_γ∥₁ and ∥x_γ∥₁ ≥ ∥x̃_γ∥₁, respectively. Hence ∥x_γ∥₁ = ∥x̃_γ∥₁, and it follows that the functions
ρ(γ) = ∥x_γ∥₁,   η(γ) = ∥(I − P_Q)Ax_γ∥₂    (35)
are well defined for γ > 0.

Now substituting x_β ∈ S_β for x in (31), we get

γ∥x_β∥₁ ≥ γ∥x_γ∥₁ − ⟨Aᵗ(I − P_Q)Ax_γ, x_β − x_γ⟩.    (36)
Interchanging γ and β (and x_γ and x_β), we find
β∥x_γ∥₁ ≥ β∥x_β∥₁ − ⟨Aᵗ(I − P_Q)Ax_β, x_γ − x_β⟩.    (37)
Adding up (36) and (37) and using the fact that I − P_Q is firmly nonexpansive, we deduce that
(γ − β)(∥x_β∥₁ − ∥x_γ∥₁) ≥ ∥(I − P_Q)Ax_γ − (I − P_Q)Ax_β∥₂².    (38)
We therefore find that if γ > β, then ∥x_β∥₁ ≥ ∥x_γ∥₁; that is, ρ(γ) ≤ ρ(β). This proves that ρ(γ) is nonincreasing in γ > 0. From (38) it also follows that (I − P_Q)Ax_γ is continuous for γ > 0, which implies the continuity of η(γ) for γ > 0.

To see that η(γ) is increasing, we use the inequality (as x_γ ∈ S_γ)

(1/2)∥(I − P_Q)Ax_γ∥₂² + γ∥x_γ∥₁ ≤ (1/2)∥(I − P_Q)Ax_β∥₂² + γ∥x_β∥₁,    (39)
which implies that
(1/2)[η(γ)² − η(β)²] ≤ γ[ρ(β) − ρ(γ)].    (40)
Now if β > γ > 0, then, as ρ(β) ≤ ρ(γ), we immediately get from (40) that η(γ) ≤ η(β), and the monotonicity of η is proven.

Proposition 6. One has the following.

  • (i)

    lim_{γ→0} η(γ) = 0.

  • (ii)

    lim_{γ→0} ρ(γ) = min_{x∈S} ∥x∥₁.

Proof. (i) Taking the limit as γ → 0 in the inequality (and using the boundedness of (x_γ))

(1/2)∥(I − P_Q)Ax_γ∥₂² ≤ γ(∥x∥₁ − ∥x_γ∥₁),  x ∈ S,    (41)
yields
lim_{γ→0} ∥(I − P_Q)Ax_γ∥₂ = 0.    (42)
The result in (i) then follows.

As for (ii), we have, by (27), ∥x_γ∥₁ ≤ ∥x∥₁ for any x ∈ S. In particular, ∥x_γ∥₁ ≤ ∥x†∥₁, where x† is an ℓ₁ minimum-norm element of S; that is, x† ∈ argmin_{x∈S} ∥x∥₁.

Assume γ_k → 0 is such that x_{γ_k} → x̂. Then, for every x ∈ ℝⁿ,

(1/2)∥(I − P_Q)Ax̂∥₂² = lim_{k→∞} [(1/2)∥(I − P_Q)Ax_{γ_k}∥₂² + γ_k∥x_{γ_k}∥₁] ≤ lim_{k→∞} [(1/2)∥(I − P_Q)Ax∥₂² + γ_k∥x∥₁] = (1/2)∥(I − P_Q)Ax∥₂².    (43)
It follows that x̂ solves the minimization problem (24); that is, x̂ ∈ S. Consequently,
∥x̂∥₁ = lim_{k→∞} ∥x_{γ_k}∥₁ ≤ ∥x†∥₁ = min_{x∈S} ∥x∥₁,  hence  ∥x̂∥₁ = min_{x∈S} ∥x∥₁.    (44)
This suffices to ensure that the conclusion of (ii) holds.

Selecting the tuning (i.e., regularization) parameter γ in the lasso (1) and the Q-lasso (7) is a challenging problem. There is no universal rule for choosing γ; it should instead be selected on a case-by-case basis. The following result, however, points out that γ cannot be too large.

Proposition 7. Let Q be a nonempty closed convex subset of m and assume that Q-lasso (7) is consistent (i.e., solvable). If (note that this condition is reduced to for lasso (1) for which Q = {b}), then xγ = 0. (Here S is, as before, the solution set of the Q-least squares problem (24).)

Proof. Let x_γ ∈ S_γ. The optimality condition

0 ∈ Aᵗ(I − P_Q)Ax_γ + γ∂∥x_γ∥₁    (45)
implies that
∥Aᵗ(I − P_Q)Ax_γ∥_∞ ≤ γ.    (46)
Taking x = 2xγ in subdifferential inequality (31) yields
()
It follows that
()
()

Now, by Lemma 4, we have ∥x_γ∥₁ ≤ min_{x∈S} ∥x∥₁. Hence, from (49) it follows that if x_γ ≠ 0, then γ must violate the assumption of the proposition. This completes the proof.

Notice that (48) shows that ∥x_γ∥₁ can be determined by Ax_γ. Hence, we arrive at the following characterization of the solutions of Q-lasso (23).

Proposition 8. Let Q be a nonempty closed convex subset of ℝᵐ, let γ > 0, and let x_γ ∈ S_γ. Then x̃ ∈ ℝⁿ is a solution of the Q-lasso (23) if and only if (I − P_Q)Ax̃ = (I − P_Q)Ax_γ and ∥x̃∥₁ = ∥x_γ∥₁. It turns out that

S_γ = (x_γ + N(A)) ∩ B_{∥x_γ∥₁},    (50)
where N(A) = {x ∈ ℝⁿ : Ax = 0} is the null space of A and where B_r denotes the closed ball centered at the origin and with radius r > 0. This shows that if one can find one solution to Q-lasso (23), then all solutions are found by (50).

Proof. If , then from the relations

()
we obtain . This together with the assumption of yields that which in turn implies that and hence .

4. Iterative Methods

In this section we discuss proximal iterative methods for solving Q-lasso (7). The basic tools are Moreau's concept of proximal operators and their fundamental properties, which are briefly recalled below. (For our purposes, we confine ourselves to the finite-dimensional setting.)

4.1. Proximal Operators

Let Γ₀(ℝⁿ) be the space of functions on ℝⁿ that are proper, lower semicontinuous, and convex.

Definition 9 (see [10, 11]). The proximal operator of φ ∈ Γ₀(ℝⁿ) is defined by

prox_φ(x) = argmin_{v∈ℝⁿ} { φ(v) + (1/2)∥v − x∥₂² },  x ∈ ℝⁿ.    (52)
The proximal operator of φ of order λ > 0 is defined as the proximal operator of λφ; that is,
prox_{λφ}(x) = argmin_{v∈ℝⁿ} { λφ(v) + (1/2)∥v − x∥₂² },  x ∈ ℝⁿ.    (53)

For fundamental properties of proximal operators, the reader is referred to [12, 13] for details. Here we only mention the fact that the proximal operator prox_{λφ} has a closed-form expression in some important cases, as shown in the examples below [12].

  • (a)

    If we take φ to be any norm ∥·∥ of ℝⁿ, then

    prox_{λφ}(x) = x − P_{λB}(x),  x ∈ ℝⁿ,    (54)

  • where B := ∂∥·∥(0) is the unit ball of the dual norm of ∥·∥. In particular, if we take φ to be the absolute value function of the real line ℝ, we get

    prox_{λφ}(x) = sgn(x) max{|x| − λ, 0},  x ∈ ℝ,    (55)

  • which is also known as the scalar soft-thresholding operator.

  • (b)

    Let {e_k}_{k=1}^{n} be an orthonormal basis of ℝⁿ and let {ω_k}_{k=1}^{n} be real positive numbers. Define φ by

    φ(x) = Σ_{k=1}^{n} ω_k|⟨x, e_k⟩|,  x ∈ ℝⁿ.    (56)

  • Then prox_{λφ}(x) = Σ_{k=1}^{n} α_k e_k, where

    α_k = sgn(⟨x, e_k⟩) max{|⟨x, e_k⟩| − λω_k, 0},  k = 1, …, n.    (57)

  • In particular, if φ(x) = ∥x∥₁ for x ∈ ℝⁿ, then

    prox_{λφ}(x) = (α_1, …, α_n)ᵗ,    (58)

  • where α_k = sgn(x_k) max{|x_k| − λ, 0} for k = 1, …, n; a small numerical sketch is given after this list.
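The following Python sketch is our own illustration (not the authors' code): it implements the componentwise soft-thresholding operator (58) and verifies the prox property through the optimality condition x − p ∈ λ∂∥p∥₁, using the componentwise description (18); the function name soft_threshold is an assumption for the example.

    import numpy as np

    def soft_threshold(x, lam):
        # prox of lam * ||.||_1, componentwise: sgn(x_k) * max(|x_k| - lam, 0), cf. (58).
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    x = np.array([3.0, -0.2, 0.0, 1.5, -4.0])
    lam = 1.0
    p = soft_threshold(x, lam)
    print(p)   # -> [ 2.  -0.   0.   0.5 -3. ]

    # Optimality check: x - p must equal lam * sgn(p_k) where p_k != 0 and must
    # lie in [-lam, lam] where p_k = 0, cf. (18).
    r = x - p
    ok = np.where(p != 0, np.abs(r - lam * np.sign(p)) < 1e-12, np.abs(r) <= lam + 1e-12)
    print(bool(np.all(ok)))   # -> True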

4.2. Proximal-Gradient Algorithm

The proximal operators can be used to minimize the sum of two convex functions:
min_{x∈ℝⁿ} f(x) + g(x),    (59)
where f, g ∈ Γ₀(ℝⁿ). It is often the case that one of them is differentiable. The following is an equivalent fixed point formulation of (59).

Proposition 10 (see [12, 14]). Let f, g ∈ Γ₀(ℝⁿ). Let x* ∈ ℝⁿ and λ > 0. Assume f is finite valued and differentiable on ℝⁿ. Then x* is a solution to (59) if and only if x* solves the fixed point equation

x* = prox_{λg}(x* − λ∇f(x*)).    (60)

Fixed point equation (60) immediately yields the following fixed point algorithm, which is also known as the proximal-gradient algorithm for solving (59).

Initialize x₀ ∈ ℝⁿ and iterate
x_{k+1} = prox_{λ_k g}(x_k − λ_k∇f(x_k)),  k ≥ 0,    (61)
where {λ_k} is a sequence of positive real numbers.
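As a minimal sketch (ours, not from the paper), iteration (61) can be coded generically in Python for any gradient ∇f and proximal map supplied as functions; the names proximal_gradient, grad_f, prox_g, and step_size are hypothetical.

    def proximal_gradient(x0, grad_f, prox_g, step_size, num_iter=500):
        # Iteration (61): x_{k+1} = prox_{lambda_k g}(x_k - lambda_k * grad_f(x_k)).
        # prox_g(v, lam) is expected to return prox_{lam * g}(v); step_size(k) returns lambda_k.
        x = x0
        for k in range(num_iter):
            lam = step_size(k)
            x = prox_g(x - lam * grad_f(x), lam)
        return x

Theorem 11 below indicates that the step sizes λ_k should stay within (0, 2/L), where L is the Lipschitz constant of ∇f.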

Theorem 11 (see [12, 14]). Let f, g ∈ Γ₀(ℝⁿ) and assume (59) is consistent. Assume in addition the following.

  • (i)

    ∇f is Lipschitz continuous on ℝⁿ:

    ∥∇f(x) − ∇f(y)∥ ≤ L∥x − y∥  for all x, y ∈ ℝⁿ.    (62)

  • (ii)

    0 < lim inf_{n→∞} λ_n ≤ lim sup_{n→∞} λ_n < 2/L.

Then the sequence (x_k) generated by the proximal-gradient algorithm (61) converges to a solution of (59).

4.3. The Relaxed Proximal-Gradient Algorithm

The relaxed proximal-gradient algorithm generates a sequence (x_k) by the following iteration process.

Initialize x₀ ∈ ℝⁿ and iterate
x_{k+1} = (1 − α_k)x_k + α_k prox_{λ_k g}(x_k − λ_k∇f(x_k)),  k ≥ 0,    (63)
where {α_k} is a sequence of relaxation parameters and {λ_k} is a sequence of positive real numbers.
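A corresponding sketch of the relaxed iteration (63) (again ours, with the hypothetical names relaxed_proximal_gradient, step_size, and relaxation) only changes the update to a convex combination of x_k and the proximal-gradient step:

    def relaxed_proximal_gradient(x0, grad_f, prox_g, step_size, relaxation, num_iter=500):
        # Iteration (63): x_{k+1} = (1 - alpha_k) x_k
        #                           + alpha_k prox_{lambda_k g}(x_k - lambda_k grad_f(x_k)).
        x = x0
        for k in range(num_iter):
            lam, alpha = step_size(k), relaxation(k)
            x = (1.0 - alpha) * x + alpha * prox_g(x - lam * grad_f(x), lam)
        return x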

Theorem 12 (see [14]). Let f, g ∈ Γ₀(ℝⁿ) and assume (59) is consistent. Assume in addition the following.

  • (i)

    ∇f is Lipschitz continuous on ℝⁿ:

    ∥∇f(x) − ∇f(y)∥ ≤ L∥x − y∥  for all x, y ∈ ℝⁿ.    (64)

  • (ii)

    0 < lim inf_{n→∞} λ_n ≤ lim sup_{n→∞} λ_n < 2/L.

  • (iii)

    0 < lim inf_{n→∞} α_n ≤ lim sup_{n→∞} α_n < 4/(2 + L · lim sup_{n→∞} λ_n).

Then the sequence (x_k) generated by the relaxed proximal-gradient algorithm (63) converges to a solution of (59).

If we take λ_n ≡ λ ∈ (0, 2/L), then the relaxation parameters α_k can be chosen from a larger pool; in particular, they are allowed to approach zero. More precisely, we have the following theorem.

Theorem 13 (see [14]). Let f, g ∈ Γ₀(ℝⁿ) and assume (59) is consistent. Define the sequence (x_k) by the following relaxed proximal algorithm:

x_{k+1} = (1 − α_k)x_k + α_k prox_{λg}(x_k − λ∇f(x_k)),  k ≥ 0.    (65)
Suppose that
  • (a)

    ∇f satisfies the Lipschitz continuity condition (i) in Theorem 12;

  • (b)

    0 < λ < 2/L and 0 ≤ α_k ≤ (2 + λL)/4 for all k;

  • (c)

    Σ_{k=0}^{∞} α_k = ∞.

Then (x_k) converges to a solution of (59).

4.4. Proximal-Gradient Algorithms Applied to Lasso

For Q-lasso (7), we take f(x) = (1/2)∥(I − P_Q)Ax∥₂² and g(x) = γ∥x∥₁. Noticing that ∇f(x) = Aᵗ(I − P_Q)Ax, which is Lipschitz continuous with constant L = ∥A∥² since I − P_Q is nonexpansive, we find that proximal-gradient algorithm (61) reduces to the following algorithm for solving Q-lasso (7):
x_{k+1} = prox_{λ_kγ∥·∥₁}(x_k − λ_k Aᵗ(I − P_Q)Ax_k),  k ≥ 0.    (66)
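To make (66) concrete, the sketch below (our illustration, not the authors' code) takes Q to be the closed ball of radius eps around b, so that P_Q has a closed form, and uses soft-thresholding for prox_{λ_kγ∥·∥₁}; the constant step λ = 1/∥A∥² is an assumed choice satisfying the condition of Theorem 14 below.

    import numpy as np

    def q_lasso_proximal_gradient(A, b, eps, gamma, num_iter=1000):
        # Iteration (66) with Q = {q : ||q - b||_2 <= eps} and constant step
        # lambda = 1 / ||A||^2 (so that lambda < 2 / ||A||^2).
        def project_Q(u):
            d = u - b
            nd = np.linalg.norm(d)
            return u if nd <= eps else b + (eps / nd) * d

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        lam = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(num_iter):
            residual = A @ x - project_Q(A @ x)      # (I - P_Q) A x
            x = soft_threshold(x - lam * (A.T @ residual), lam * gamma)
        return x

For eps = 0, i.e., Q = {b}, this reduces to the classical iterative soft-thresholding iteration for the lasso (1).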

The convergence theorem for the general proximal-gradient algorithm (61) reads as follows for Q-lasso (7).

Theorem 14. Assume 0 < lim inf_{k→∞} λ_k ≤ lim sup_{k→∞} λ_k < 2/∥A∥². Then the sequence (x_k) generated by proximal-gradient algorithm (66) converges to a solution of Q-lasso (7).

Remark 15. Relaxed proximal-gradient algorithms (63) and (65) also apply to Q-lasso (7). We however do not elaborate on them in detail.

Remark 16. Proximal-gradient algorithm (61) can be reduced to a projection-gradient algorithm in the case where the convex function g is homogeneous (i.e., g(tx) = tg(x) for t ≥ 0 and x ∈ ℝⁿ), because the homogeneity of g implies that the proximal operator of g is the complement of a projection; more precisely, we have

prox_{λg}(x) = x − P_{λK}(x) = (I − P_{λK})x,  x ∈ ℝⁿ,    (67)
where K = ∂g(0). As a result, proximal-gradient algorithm (61) reduces to the following projection-gradient algorithm:
x_{k+1} = (I − P_{λ_k K})(x_k − λ_k∇f(x_k)),  k ≥ 0.    (68)

Now we apply projection-gradient algorithm (68) to Q-lasso (7). In this case, we have f(x) = (1/2)∥(I − P_Q)Ax∥₂² and g(x) = γ∥x∥₁ (homogeneous). Thus, ∇f(x) = Aᵗ(I − P_Q)Ax, and the convex set K = ∂g(0) is given as K = γ∂(∥z∥₁)|_{z=0} = γ[−1, 1]ⁿ. We find that, for each λ > 0, P_{λK} is the projection of the Euclidean space ℝⁿ onto the ℓ_∞ ball with radius λγ, that is, onto {x ∈ ℝⁿ : ∥x∥_∞ ≤ λγ}. It turns out that proximal-gradient algorithm (66) can be rewritten as the projection algorithm below:

x_{k+1} = (I − P_{λ_kγ[−1,1]ⁿ})(x_k − λ_k Aᵗ(I − P_Q)Ax_k),  k ≥ 0.    (69)
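Numerically, the equivalence between (66) and (69) rests on the identity prox_{λγ∥·∥₁}(v) = v − P_{λγ[−1,1]ⁿ}(v), where the projection onto the ℓ_∞ ball is a componentwise clip; the short sketch below (ours, not from the paper) checks this identity on random data.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def project_linf_ball(v, t):
        # Projection onto {u : ||u||_inf <= t}, i.e., componentwise clipping.
        return np.clip(v, -t, t)

    rng = np.random.default_rng(1)
    v, t = rng.standard_normal(8), 0.3
    print(np.allclose(soft_threshold(v, t), v - project_linf_ball(v, t)))   # -> True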

5. An ℓ₁-ℓ₂ Regularization for the Q-Lasso

Q-lasso (7) may be ill-posed and therefore needs to be regularized. Inspired by the elastic net [15], which regularizes lasso (1), we introduce an ℓ₁-ℓ₂ regularization for the Q-lasso as the minimization
min_{x∈ℝⁿ} φ_{γ,δ}(x) := (1/2)∥(I − P_Q)Ax∥₂² + γ∥x∥₁ + (δ/2)∥x∥₂²,    (70)
where γ > 0 and δ > 0 are regularization parameters. This is indeed the traditional Tikhonov regularization applied to Q-lasso (7).
Let x_{γ,δ} be the unique solution of (70) and set
φ_{γ,0}(x) := (1/2)∥(I − P_Q)Ax∥₂² + γ∥x∥₁,   φ_{0,δ}(x) := (1/2)∥(I − P_Q)Ax∥₂² + (δ/2)∥x∥₂²,    (71)
which are the limits of φ_{γ,δ}(x) as δ → 0 and γ → 0, respectively. Let
()

Proposition 17. Assume the Q-least-squares problem

min_{x∈ℝⁿ} (1/2)∥(I − P_Q)Ax∥₂²    (73)
is consistent (i.e., solvable) and let S be its nonempty set of solutions.
  • (i)

    As δ → 0 (for each fixed γ > 0), x_{γ,δ} converges to the (ℓ₂) minimum-norm solution of Q-lasso (7), which we denote by x_γ. Moreover, as γ → 0, every cluster point of (x_γ) is an (ℓ₁) minimum-norm solution of the Q-least squares problem (73), that is, a point in the set argmin_{x∈S} ∥x∥₁.

  • (ii)

    As γ → 0 (for each fixed δ > 0), x_{γ,δ} converges to the unique solution, denoted by x^δ, of the ℓ₂ regularized problem:

    min_{x∈ℝⁿ} (1/2)∥(I − P_Q)Ax∥₂² + (δ/2)∥x∥₂².    (74)

  • Moreover, as δ → 0, x^δ converges to the ℓ₂ minimal norm solution of (73); that is, x^δ → x†, where x† is the unique element of argmin_{x∈S} ∥x∥₂.

Proof. We have that x_{γ,δ} satisfies the optimality condition

0 ∈ ∂φ_{γ,δ}(x_{γ,δ}),    (75)
where the subdifferential of φ_{γ,δ} is given by
∂φ_{γ,δ}(x) = Aᵗ(I − P_Q)Ax + γ∂∥x∥₁ + δx.    (76)
It turns out that the above optimality condition reduces to
−(Aᵗ(I − P_Q)Ax_{γ,δ} + δx_{γ,δ}) ∈ γ∂∥x_{γ,δ}∥₁.    (77)
Using the subdifferential inequality, we obtain
γ∥x∥₁ ≥ γ∥x_{γ,δ}∥₁ − ⟨Aᵗ(I − P_Q)Ax_{γ,δ} + δx_{γ,δ}, x − x_{γ,δ}⟩    (78)
for x ∈ ℝⁿ. Replacing x with x_{γ′,δ′} for γ′ > 0 and δ′ > 0 yields
γ∥x_{γ′,δ′}∥₁ ≥ γ∥x_{γ,δ}∥₁ − ⟨Aᵗ(I − P_Q)Ax_{γ,δ} + δx_{γ,δ}, x_{γ′,δ′} − x_{γ,δ}⟩.    (79)
Interchanging γ and γ′ and δ and δ′, we get
γ′∥x_{γ,δ}∥₁ ≥ γ′∥x_{γ′,δ′}∥₁ − ⟨Aᵗ(I − P_Q)Ax_{γ′,δ′} + δ′x_{γ′,δ′}, x_{γ,δ} − x_{γ′,δ′}⟩.    (80)
Adding up (79) and (80) results in
()
Since ℓ₁-ℓ₂ regularization (70) is the Tikhonov regularization of Q-lasso (7), we get
()
Here c > 0 is a constant. It follows that {x_{γ,δ}} is bounded.
  • (i)

    For fixed γ > 0, we can use the theory of Tikhonov regularization to conclude that x_{γ,δ} is continuous in δ > 0 and converges, as δ → 0, to x_γ, which is the (ℓ₂) minimum-norm solution of Q-lasso (7), that is, the unique element of argmin_{x∈S_γ} ∥x∥₂. By Proposition 6, we also find that every cluster point of (x_γ), as γ → 0, lies in the set argmin_{x∈S} ∥x∥₁.

  • (ii)

    Fix δ > 0 and use Proposition 6 to see that x_{γ,δ} → x^δ as γ → 0. Now the standard property of Tikhonov's regularization ensures that x^δ → x† as δ → 0.

ℓ₁-ℓ₂ regularization (70) can be solved by proximal-gradient algorithm (61). Take f(x) = (1/2)∥(I − P_Q)Ax∥₂² + (δ/2)∥x∥₂² and g(x) = γ∥x∥₁; then algorithm (61) reduces to
x_{k+1} = prox_{λ_kγ∥·∥₁}((1 − λ_kδ)x_k − λ_k Aᵗ(I − P_Q)Ax_k),  k ≥ 0.    (83)
The convergence of this algorithm is given as follows.

Theorem 18. Assume

0 < lim inf_{k→∞} λ_k ≤ lim sup_{k→∞} λ_k < 2/(∥A∥² + δ).    (84)
Then the sequence (x_k) generated by algorithm (83) converges to the solution of ℓ₁-ℓ₂ regularization (70).

We can also take f(x) = (1/2)∥(I − P_Q)Ax∥₂² and g(x) = γ∥x∥₁ + (δ/2)∥x∥₂². Then prox_{μg}(x) = prox_{ν∥·∥₁}(x/(1 + μδ)) with ν = μγ/(1 + μδ), and the proximal algorithm (61) reduces to
x_{k+1} = prox_{ν_k∥·∥₁}((x_k − λ_k Aᵗ(I − P_Q)Ax_k)/(1 + λ_kδ)),  k ≥ 0.    (85)
Here ν_k = γλ_k/(1 + δλ_k). The convergence of this algorithm is given below.

Theorem 19. Assume

0 < lim inf_{k→∞} λ_k ≤ lim sup_{k→∞} λ_k < 2/∥A∥².    (86)
Then the sequence (x_k) generated by algorithm (85) converges to the solution of ℓ₁-ℓ₂ regularization (70).
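A sketch of iteration (85), under the same illustrative choice of Q as a closed ball around b (our own setup, not the authors' code), differs from the plain Q-lasso iteration only in the extra scaling by 1/(1 + λδ) and the modified threshold ν_k; the function name l1_l2_q_lasso and the constant step λ = 1/∥A∥² are assumptions for the example.

    import numpy as np

    def l1_l2_q_lasso(A, b, eps, gamma, delta, num_iter=1000):
        # Iteration (85) with constant step lam = 1 / ||A||^2 < 2 / ||A||^2 and
        # nu = gamma * lam / (1 + delta * lam), cf. Theorem 19.
        def project_Q(u):
            d = u - b
            nd = np.linalg.norm(d)
            return u if nd <= eps else b + (eps / nd) * d

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        lam = 1.0 / np.linalg.norm(A, 2) ** 2
        nu = gamma * lam / (1.0 + delta * lam)
        x = np.zeros(A.shape[1])
        for _ in range(num_iter):
            residual = A @ x - project_Q(A @ x)                  # (I - P_Q) A x
            x = soft_threshold((x - lam * (A.T @ residual)) / (1.0 + lam * delta), nu)
        return x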

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to the anonymous referees for their helpful comments and suggestions, which improved the presentation of this paper. This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, under Grant no. 2-363-1433-HiCi. The authors, therefore, acknowledge the technical and financial support of KAU.
