Volume 2013, Issue 1 148490
Research Article
Open Access

ERM Scheme for Quantile Regression

Dao-Hong Xiang

Corresponding Author: Dao-Hong Xiang

Department of Mathematics, Zhejiang Normal University, Jinhua, Zhejiang 321004, China

First published: 28 March 2013
Academic Editor: Ding-Xuan Zhou

Abstract

This paper considers the ERM scheme for quantile regression. We conduct an error analysis for this learning algorithm by means of a variance-expectation bound which holds when a noise condition is satisfied by the underlying probability measure. The learning rates are derived by applying concentration techniques involving the ℓ²-empirical covering numbers.

1. Introduction

In this paper, we study the empirical risk minimization (ERM) scheme for quantile regression. Let X be a compact metric space (input space) and Y = ℝ. Let ρ be a fixed but unknown probability distribution on Z := X × Y which describes the noise of sampling. Conditional quantile regression aims at producing functions that estimate quantile regression functions. With a prespecified quantile parameter τ ∈ (0,1), a quantile regression function fτ,ρ is defined by taking its value fτ,ρ(x) to be a τ-quantile of ρ(·∣x), that is, a value t ∈ Y satisfying
ρ({y ∈ Y : y ≤ t} ∣ x) ≥ τ and ρ({y ∈ Y : y ≥ t} ∣ x) ≥ 1 − τ, (1)
where ρ(·∣x) is the conditional distribution of ρ at x ∈ X.
We consider a learning algorithm generated by the ERM scheme associated with the pinball loss and a hypothesis space ℋ. The pinball loss Lτ : ℝ → [0, ∞) is defined by
L_τ(u) = τu if u ≥ 0, and L_τ(u) = (τ − 1)u if u < 0. (2)
The hypothesis space ℋ is a compact subset of C(X), so there exists some M > 0 such that ∥f∥C(X) ≤ M for any f ∈ ℋ. We assume without loss of generality that ∥f∥C(X) ≤ 1 for any f ∈ ℋ.
The ERM scheme for quantile regression is defined, for a sample z = {(x_i, y_i)}_{i=1}^{m} ∈ Z^m drawn independently from ρ, as
f_z = arg min_{f ∈ ℋ} (1/m) ∑_{i=1}^{m} L_τ(y_i − f(x_i)). (3)
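For readers who want to experiment with scheme (3), the following toy sketch (our own illustration, not code from the paper; the hypothesis class, the sampling distribution, and all parameter values are arbitrary choices) minimizes the empirical pinball risk over a small grid of clipped linear functions with ∥f∥C(X) ≤ 1.

```python
import numpy as np

# Toy illustration of the ERM scheme (3): minimize the empirical pinball risk
# over a finite hypothesis class H of clipped linear functions on [-1, 1].
# The class H, the sampling distribution, and all parameters are illustrative.

def pinball_loss(u, tau):
    """Pinball loss: L_tau(u) = tau*u for u >= 0 and (tau - 1)*u for u < 0."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def erm_quantile(x, y, tau, hypotheses):
    """Return the hypothesis with the smallest empirical pinball risk."""
    risks = [np.mean(pinball_loss(y - f(x), tau)) for f in hypotheses]
    return hypotheses[int(np.argmin(risks))]

# Sample z = {(x_i, y_i)}_{i=1}^m with Y restricted to [-1, 1].
rng = np.random.default_rng(1)
m = 500
x = rng.uniform(-1.0, 1.0, size=m)
y = np.clip(0.5 * x + 0.3 * rng.standard_normal(m), -1.0, 1.0)

# Hypothesis class: f_{w,b}(x) = clip(w*x + b, -1, 1) on a coarse parameter grid,
# so that ||f||_{C(X)} <= 1 as assumed in the paper.
hypotheses = [
    (lambda w=w, b=b: (lambda t: np.clip(w * t + b, -1.0, 1.0)))()
    for w in np.linspace(-1.0, 1.0, 21)
    for b in np.linspace(-0.5, 0.5, 21)
]

tau = 0.75
f_z = erm_quantile(x, y, tau, hypotheses)
print("empirical pinball risk of f_z:", np.mean(pinball_loss(y - f_z(x), tau)))
```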
A family of kernel based learning algorithms for quantile regression has been widely studied in a large literature (see [1–4] and references therein). These algorithms take the form of a regularized scheme in a reproducing kernel Hilbert space ℋ_K (RKHS, see [5] for details) associated with a Mercer kernel K. Given a sample z, the kernel based regularized scheme for quantile regression is defined by
f_{z,λ} = arg min_{f ∈ ℋ_K} { (1/m) ∑_{i=1}^{m} L_τ(y_i − f(x_i)) + λ∥f∥_K² }, λ > 0. (4)
Error analysis for general kernels K has been carried out in [1, 3, 4]. Learning with varying Gaussian kernels was studied in [2].
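For contrast with (3), here is an equally informal sketch of the kernel based regularized scheme (4) (again our own illustration; the Gaussian kernel width, step size, iteration count, and data are arbitrary), using the representer form f = ∑_j α_j K(x_j, ·) and plain subgradient descent on the regularized empirical pinball risk.

```python
import numpy as np

# Informal sketch of the kernel based regularized scheme (4):
#   f_{z,lambda} = argmin_{f in H_K} (1/m) sum_i L_tau(y_i - f(x_i)) + lam * ||f||_K^2,
# parametrized as f = sum_j alpha_j K(x_j, .) and trained by subgradient descent.
# Kernel width, learning rate, iteration count, and data are illustrative only.

def gaussian_gram(x1, x2, sigma=0.5):
    """Gram matrix of the Gaussian kernel K(s, t) = exp(-(s - t)^2 / (2 sigma^2))."""
    return np.exp(-(x1[:, None] - x2[None, :]) ** 2 / (2.0 * sigma ** 2))

def fit_kernel_quantile(x, y, tau, lam, sigma=0.5, steps=2000, lr=0.05):
    m = len(x)
    K = gaussian_gram(x, x, sigma)
    alpha = np.zeros(m)
    for _ in range(steps):
        u = y - K @ alpha                      # residuals u_i = y_i - f(x_i)
        # A subgradient of L_tau(y_i - f(x_i)) with respect to f(x_i):
        # -tau if u_i > 0, and (1 - tau) if u_i <= 0.
        g = np.where(u > 0, -tau, 1.0 - tau)
        grad_alpha = K @ g / m + 2.0 * lam * (K @ alpha)
        alpha -= lr * grad_alpha
    return alpha, K

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, size=200)
y = np.clip(0.5 * x + 0.3 * rng.standard_normal(200), -1.0, 1.0)

alpha, K = fit_kernel_quantile(x, y, tau=0.75, lam=1e-3)
print("fraction of y_i not above f_{z,lambda}(x_i):", np.mean(y <= K @ alpha))
```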

The ERM scheme (3) is very different from the kernel based regularized scheme (4). The output function fz produced by the ERM scheme has a uniform bound under our assumption, ∥fz∥C(X) ≤ 1. However, we cannot expect such a bound for fz,λ: comparing fz,λ with f = 0 in (4) only yields λ∥fz,λ∥K² ≤ (1/m) ∑_{i=1}^{m} Lτ(yi), and it happens often that ∥fz,λ∥K → ∞ as λ → 0. The lack of a uniform bound for fz,λ has a serious negative impact on the learning rates. So in the literature on kernel based regularized schemes for quantile regression, the values of the output function fz,λ are always projected onto the interval [−1,1], and the error analysis is conducted for the projected function, not for fz,λ itself.

In this paper, we aim at establishing convergence and learning rates for the error ∥fz − fτ,ρ∥ in the space L^r_{ρX}. Here r > 0 depends on the pair (ρ, τ) and will be specified in Section 2, and ρX is the marginal distribution of ρ on X. In the rest of this paper, we assume Y = [−1,1], which in turn implies that the values of the target function fτ,ρ lie in the same interval.

2. Noise Condition and Main Results

There has been a large literature in learning theory (see [6] and references therein) devoted to least squares regression, which aims at learning the regression function fρ(x) = ∫Y y dρ(y∣x). The identity ℰ^ls(f) − ℰ^ls(fρ) = ∥f − fρ∥²_{L²_{ρX}} for the generalization error ℰ^ls(f) = ∫Z (y − f(x))² dρ leads to a variance-expectation bound of the form 𝔼ξ² ≤ 4𝔼ξ, where ξ = (y − f(x))² − (y − fρ(x))² on (Z, ρ). It plays an essential role in the error analysis of kernel based regularized schemes.

However, this identity and the resulting variance-expectation bound fail in the setting of quantile regression, because the pinball loss lacks strong convexity. If we impose a noise condition on the distribution ρ, called a τ-quantile of p-average type q (see Definition 1), we can recover a similar relation, which in turn yields the variance-expectation bound stated below; it was proved by Steinwart and Christmann [1].

Definition 1. Let p ∈ (0, ∞] and q ∈ [1, ∞). A distribution ρ on X × [−1,1] is said to have a τ-quantile of p-average type q if for ρX-almost every x ∈ X, there exist a τ-quantile t* and constants αρ(·∣x) ∈ (0,2], bρ(·∣x) > 0 such that for each s ∈ [0, αρ(·∣x)],

ρ((t* − s, t*) ∣ x) ≥ b_{ρ(·∣x)} s^{q−1} and ρ((t*, t* + s) ∣ x) ≥ b_{ρ(·∣x)} s^{q−1}, (5)
and that the function γ on X defined by γ(x) = b_{ρ(·∣x)} (α_{ρ(·∣x)})^{q−1} satisfies γ^{−1} ∈ L^p_{ρX}.
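A simple illustration of this condition (our own example, not taken from the paper): when ρ(·∣x) has a density bounded away from zero near its τ-quantile, the two one-sided bounds hold with q = 2.

```latex
% Illustration (our own): suppose rho(.|x) has a density h(.|x) >= c_x > 0
% on the interval (t^* - a_x, t^* + a_x). Then for every s in [0, a_x],
\[
\rho\bigl((t^*-s,\,t^*)\mid x\bigr)=\int_{t^*-s}^{t^*}h(y\mid x)\,dy \;\ge\; c_x\,s,
\qquad
\rho\bigl((t^*,\,t^*+s)\mid x\bigr)\;\ge\; c_x\,s,
\]
% so (5) holds with q = 2, b_{\rho(\cdot|x)} = c_x, \alpha_{\rho(\cdot|x)} = \min\{a_x, 2\},
% and rho has a tau-quantile of p-average type 2 whenever gamma^{-1} lies in L^p_{rho_X}.
```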

We also need a capacity condition on the hypothesis space ℋ for our learning rates. In this paper, we measure the capacity by empirical covering numbers.

Definition 2. Let (𝔐, d) be a pseudometric space and S be a subset of 𝔐. For every ε > 0, the covering number 𝒩(S, ε, d) of S with respect to ε and d is defined as the minimal number of balls of radius ε whose union covers S, that is,

𝒩(S, ε, d) = min{ℓ ∈ ℕ : ∃ {s_j}_{j=1}^{ℓ} ⊂ 𝔐 such that S ⊂ ⋃_{j=1}^{ℓ} B(s_j, ε)}, (6)
where B(s_j, ε) = {s ∈ 𝔐 : d(s, s_j) ≤ ε} is a ball in 𝔐.

Definition 3. Let ℋ be a set of functions on X, k ∈ ℕ, and x = (x_i)_{i=1}^{k} ∈ X^k. Set 𝒩_{2,x}(ℋ, ε) = 𝒩(ℋ|_x, ε, d_2), where ℋ|_x = {(f(x_1), …, f(x_k)) : f ∈ ℋ}. The ℓ²-empirical covering number of ℋ is defined by

𝒩_2(ℋ, ε) = sup_{k ∈ ℕ} sup_{x ∈ X^k} 𝒩_{2,x}(ℋ, ε), ε > 0. (7)
Here d_2 is the normalized ℓ²-metric on the Euclidean space ℝ^k given by
d_2(a, b) = ((1/k) ∑_{i=1}^{k} |a_i − b_i|²)^{1/2} for a = (a_i)_{i=1}^{k}, b = (b_i)_{i=1}^{k} ∈ ℝ^k. (8)
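As a small numerical aside (our own sketch; the toy hypothesis class and the greedy covering routine are ours), one can estimate 𝒩_{2,x}(ℋ, ε) on a fixed sample x by building a greedy ε-cover of ℋ|_x under the metric d_2:

```python
import numpy as np

# Numerical sketch (ours): estimate the empirical covering number N_{2,x}(H, eps)
# of a toy hypothesis class H on a fixed sample x, via a greedy eps-cover under d2.

def d2(a, b):
    """Normalized l2-metric on R^k: d2(a, b) = sqrt((1/k) * sum_i |a_i - b_i|^2)."""
    return np.sqrt(np.mean((a - b) ** 2))

def greedy_cover_size(points, eps):
    """Size of a greedy eps-cover of `points`; an upper bound on the covering number."""
    centers = []
    for p in points:
        if all(d2(p, c) > eps for c in centers):
            centers.append(p)
    return len(centers)

# Toy class H = {x -> clip(w*x, -1, 1)} for a grid of slopes w, restricted to a sample x.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=50)                  # fixed sample x in X^k, k = 50
H_on_x = np.array([np.clip(w * x, -1.0, 1.0)         # H|_x = {(f(x_1), ..., f(x_k)) : f in H}
                   for w in np.linspace(-3.0, 3.0, 200)])

for eps in (0.5, 0.2, 0.1, 0.05):
    print("eps =", eps, " greedy cover size =", greedy_cover_size(H_on_x, eps))
```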

Assumption. Assume that the ℓ²-empirical covering number of the hypothesis space ℋ is bounded, for some a > 0 and ι ∈ (0,2), by
log 𝒩_2(ℋ, ε) ≤ a ε^{−ι}, ∀ε > 0. (9)

Theorem 4. Assume that ρ satisfies (5) with some p ∈ (0, ∞] and q ∈ [1, ∞). Denote r = pq/(p + 1). One further assumes that fτ,ρ is uniquely defined. If fτ,ρ ∈ ℋ and ℋ satisfies (9) with ι ∈ (0,2), then for any 0 < δ < 1, with confidence 1 − δ, one has

()
where
()
and the constant factor is independent of m and δ.

Remark 5. In the ERM scheme, we can choose ℋ such that fτ,ρ ∈ ℋ, which in turn makes the approximation error described by (23) equal to zero. However, this is impossible for the kernel based regularized scheme because of the presence of the penalty term λ∥f∥_K².

If q ≤ 2, all conditional distributions around the quantile behave similarly to the uniform distribution. In this case r = pq/(p + 1) ≤ 2 and θ = min{2/q, p/(p + 1)} = p/(p + 1) for all p > 0. Hence ϑ = 2(p + 1)/((2 + ι)pq + 4q). Furthermore, when p is large enough, the parameter r tends to q and the power index for the above learning rate becomes arbitrarily close to 2/((2 + ι)q), which shows that the learning rate power index for ∥fz − fτ,ρ∥^q_{L^r_{ρX}} is arbitrarily close to 2/(2 + ι), independent of q. In particular, ι can be arbitrarily small when ℋ is smooth enough. In this case, the power index 2/(2 + ι) of the learning rates can be arbitrarily close to 1, which is the optimal power index for least squares regression.
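To make the limiting exponents above explicit, here is the elementary arithmetic (our own check, using the expression for ϑ given in this remark) in the case q = 2:

```latex
% Arithmetic check of Remark 5 for q = 2 (our own computation):
\[
\vartheta \;=\; \frac{2(p+1)}{(2+\iota)pq+4q}
          \;=\; \frac{2(p+1)}{2(2+\iota)p+8}
          \;\xrightarrow{\;p\to\infty\;}\; \frac{1}{2+\iota},
\qquad
q\vartheta \;\longrightarrow\; \frac{2}{2+\iota}
           \;\xrightarrow{\;\iota\to 0\;}\; 1 .
\]
```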

Let us take some examples to demonstrate the above main result.

Example 6. Let ℋ be the unit ball of the Sobolev space H^s with s > 0. Observe that the ℓ²-empirical covering number is bounded above by the uniform covering number defined in Definition 2. Hence we have (see [6, 7])

log 𝒩_2(ℋ, ε) ≤ C_s (1/ε)^{n/s}, ∀ε > 0, (12)
where n is the dimension of the input space X and Cs > 0.

Under the same assumptions as in Theorem 4, replacing ι by n/s, we get that for any 0 < δ < 1, with confidence 1 − δ,

()
where
()
and the constant factor is independent of m and δ.

Carrying out the same discussion as in Remark 5 for the case q ≤ 2 and p large enough, the power index of the learning rates for ∥fz − fτ,ρ∥^q_{L^r_{ρX}} is arbitrarily close to 2/(2 + n/s), independent of q. Furthermore, s can be taken arbitrarily large when the Sobolev space is smooth enough. In this special case, the learning rate power index approaches 1 arbitrarily closely.

Example 7. Let ℋ be the unit ball of the reproducing kernel Hilbert space ℋ_σ generated by a Gaussian kernel (see [5]). Reference [7] tells us that

log 𝒩_2(ℋ, ε) ≤ C_{n,σ} (log(1/ε))^{n+1}, ∀ε ∈ (0, 1), (15)
where Cn,σ > 0 depends only on n and σ > 0. Obviously, the right-hand side of (15) is bounded by Cn,σ(1/ε) n+1.

So from Theorem 4, we can get different learning rates with power index

()
If q ≤ 2 and p is large enough, the power index of the learning rates for ∥fz − fτ,ρ∥^q_{L^r_{ρX}} is arbitrarily close to 2/(3 + n), which is very slow when n is large. However, in most data sets the data are concentrated on a much lower dimensional manifold embedded in the high dimensional space. In this setting an analysis that replaces n by the intrinsic dimension of the manifold would be of great interest (see [8] and references therein).

3. Error Analysis

Define the noise-free error, called the generalization error, associated with the pinball loss Lτ as
ℰ_τ(f) = ∫_Z L_τ(y − f(x)) dρ. (17)
Then the measurable function fτ,ρ is a minimizer of ℰτ. Obviously, fτ,ρ(x) ∈ [−1,1].
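A quick numerical sanity check of this fact (our own script; the distribution of Y is an arbitrary choice): the map t ↦ 𝔼[Lτ(Y − t)] is minimized at the τ-quantile of the distribution of Y.

```python
import numpy as np

# Sanity check (ours): for Y with distribution rho(.|x), the expected pinball risk
# t -> E[L_tau(Y - t)] is minimized at the tau-quantile of that distribution.

def pinball_loss(u, tau):
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(3)
tau = 0.3
y = np.clip(0.4 * rng.standard_normal(100_000), -1.0, 1.0)   # an arbitrary rho(.|x)

grid = np.linspace(-1.0, 1.0, 401)
risks = [np.mean(pinball_loss(y - t, tau)) for t in grid]

print("argmin of the empirical pinball risk:", grid[int(np.argmin(risks))])
print("empirical tau-quantile of Y:         ", np.quantile(y, tau))
```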

We need the following results from [1] for our error analysis.

Proposition 8. Let Lτ be the pinball loss. Assume that ρ satisfies (5) with some p ∈ (0, ∞] and q ∈ [1, ∞). Then for all f : X → [−1,1] one has

∥f − fτ,ρ∥_{L^r_{ρX}} ≤ c_{p,q} (ℰ_τ(f) − ℰ_τ(fτ,ρ))^{1/q} with r = pq/(p + 1), (18)
where c_{p,q} is a constant independent of f.
Furthermore, with
θ = min{2/q, p/(p + 1)}, (19)
one has
𝔼ξ² ≤ C_θ (𝔼ξ)^θ for ξ = L_τ(y − f(x)) − L_τ(y − fτ,ρ(x)) on (Z, ρ), (20)
where C_θ is a constant independent of f.

The above result implies that we can get convergence rates for fz in the space L^r_{ρX} by bounding the excess generalization error ℰτ(fz) − ℰτ(fτ,ρ).

To bound ℰτ(fz) − ℰτ(fτ,ρ), we need a standard error decomposition procedure [6] and a concentration inequality.

3.1. Error Decomposition

Define the empirical error associated with the pinball loss Lτ as
ℰ_{z,τ}(f) = (1/m) ∑_{i=1}^{m} L_τ(y_i − f(x_i)). (21)
Define
f_ℋ = arg min_{f ∈ ℋ} ℰ_τ(f). (22)

Lemma 9. Let Lτ be the pinball loss, fz be defined by (3), and fℋ by (22). Then

ℰ_τ(f_z) − ℰ_τ(fτ,ρ) ≤ {[ℰ_τ(f_z) − ℰ_τ(fτ,ρ)] − [ℰ_{z,τ}(f_z) − ℰ_{z,τ}(fτ,ρ)]} + {[ℰ_{z,τ}(f_ℋ) − ℰ_{z,τ}(fτ,ρ)] − [ℰ_τ(f_ℋ) − ℰ_τ(fτ,ρ)]} + {ℰ_τ(f_ℋ) − ℰ_τ(fτ,ρ)}. (23)

Proof. The excess generalization error can be written as

ℰ_τ(f_z) − ℰ_τ(fτ,ρ) = [ℰ_τ(f_z) − ℰ_{z,τ}(f_z)] + [ℰ_{z,τ}(f_z) − ℰ_{z,τ}(f_ℋ)] + [ℰ_{z,τ}(f_ℋ) − ℰ_τ(f_ℋ)] + [ℰ_τ(f_ℋ) − ℰ_τ(fτ,ρ)]. (24)
The definition of fz implies that ℰ_{z,τ}(fz) − ℰ_{z,τ}(fℋ) ≤ 0. Furthermore, by subtracting and adding ℰτ(fτ,ρ) and ℰ_{z,τ}(fτ,ρ) in the first and third terms, we see that Lemma 9 holds true.

We call the last term in (23), ℰτ(fℋ) − ℰτ(fτ,ρ), the approximation error. It has been studied in [9].

3.2. Concentration Inequality and Sample Error

Let us recall the one-sided Bernstein inequality as follows.

Lemma 10. Let ξ be a random variable on a probability space Z with variance σ² satisfying |ξ − 𝔼(ξ)| ≤ M_ξ for some constant M_ξ. Then for any 0 < δ < 1, with confidence 1 − δ, one has

(1/m) ∑_{i=1}^{m} ξ(z_i) − 𝔼ξ ≤ (2M_ξ log(1/δ))/(3m) + √(2σ² log(1/δ)/m). (25)

Proposition 11. Let f ∈ ℋ. Assume that ρ on X × [−1,1] satisfies the variance bound (20) with index θ indicated in (19). For any 0 < δ < 1, with confidence 1 − δ/2, the sample error term {ℰ_{z,τ}(f) − ℰ_{z,τ}(fτ,ρ)} − {ℰ_τ(f) − ℰ_τ(fτ,ρ)} can be bounded as

()

Proof. Let ξ(z) = Lτ(y − f(x)) − Lτ(y − fτ,ρ(x)), which satisfies |ξ| ≤ 2 and in turn |ξ − 𝔼(ξ)| ≤ 2. The variance bound (20) implies that

σ² ≤ 𝔼ξ² ≤ C_θ (𝔼ξ)^θ.
Using (25) on the random variable ξ(z) = Lτ(yf(x)) − Lτ(yfτ,ρ(x)), we can get the desired bound (28) with the help of Young’s inequality.
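For completeness, the Young's inequality step can be spelled out as follows (a standard computation; the particular constants are ours and may differ from those in the original display):

```latex
% Young's inequality with exponents 2/theta and 2/(2-theta), theta in (0,1):
%   a^{theta/2} b <= (theta/2) a + (1 - theta/2) b^{2/(2-theta)}.
% Applied with a = E[xi] and b = (2 C_theta log(2/delta) / m)^{1/2}:
\[
\sqrt{\frac{2\sigma^{2}\log(2/\delta)}{m}}
  \;\le\; \sqrt{\frac{2C_{\theta}(\mathbb{E}\xi)^{\theta}\log(2/\delta)}{m}}
  \;\le\; \frac{1}{2}\,\mathbb{E}\xi
        + \Bigl(\frac{2C_{\theta}\log(2/\delta)}{m}\Bigr)^{\frac{1}{2-\theta}} .
\]
```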

Let us now turn to estimating the sample error term involving the function fz, which runs over a set of functions since z is itself a random sample. To estimate it, we use a concentration inequality involving empirical covering numbers [10–12].

Lemma 12. Let ℱ be a class of measurable functions on Z. Assume that there are constants B, c > 0 and α ∈ [0,1] such that ∥f∥_∞ ≤ B and 𝔼f² ≤ c(𝔼f)^α for every f ∈ ℱ. If ℱ satisfies a capacity condition of the form (9) with ι ∈ (0,2), then there exists a constant depending only on ι such that for any t > 0, with probability at least 1 − e^{−t}, there holds

()
where
()

We apply Lemma 12 to the function set 𝒢, where
𝒢 = {L_τ(y − f(x)) − L_τ(y − fτ,ρ(x)) : f ∈ ℋ}.

Proposition 13. Assume ρ on X × [−1,1] satisfies the variance bound (20) with index θ indicated in (19). If ℋ satisfies (9) with ι ∈ (0,2), then for any 0 < δ < 1, with confidence 1 − δ/2, one has

()
where
()

Proof. Take g ∈ 𝒢 with the form g(z) = Lτ(y − f(x)) − Lτ(y − fτ,ρ(x)), where f ∈ ℋ. Hence 𝔼g = ℰτ(f) − ℰτ(fτ,ρ) and (1/m) ∑_{i=1}^{m} g(z_i) = ℰ_{z,τ}(f) − ℰ_{z,τ}(fτ,ρ).

The Lipschitz property of the pinball loss Lτ implies that

|L_τ(y − f_1(x)) − L_τ(y − f_2(x))| ≤ |f_1(x) − f_2(x)|, ∀(x, y) ∈ Z, f_1, f_2 ∈ ℋ.
For g_1, g_2 ∈ 𝒢, we have
|g_1(z) − g_2(z)| = |L_τ(y − f_1(x)) − L_τ(y − f_2(x))| ≤ |f_1(x) − f_2(x)|,
where f_1, f_2 ∈ ℋ are the functions defining g_1 and g_2, respectively. It follows that
𝒩_{2,z}(𝒢, ε) ≤ 𝒩_{2,x}(ℋ, ε) for every ε > 0 and every z = ((x_i, y_i))_{i=1}^{k} ∈ Z^k with x = (x_i)_{i=1}^{k}.
Hence
log 𝒩_2(𝒢, ε) ≤ log 𝒩_2(ℋ, ε) ≤ a ε^{−ι}, ∀ε > 0.

Applying Lemma 12 with B = 2, α = θ, and c = Cθ, we know that for any 0 < δ < 1, with confidence 1 − δ/2, there holds

()
Here
()
Note that
()
where Cι,θ is indicated in (32). Then our desired bound holds true.

Proposition 14. Assume ρ on X × [−1,1] satisfies the variance bound (20) with index θ indicated in (19). If ℋ satisfies (9) with ι ∈ (0,2), then for any 0 < δ < 1, with confidence 1 − δ, there holds

()

The above bound follows directly from Propositions 11 and 13, together with the fact that fz ∈ ℋ.

3.3. Bounding the Total Error

Now we are in a position to present our general result on error analysis for algorithm (3).

Theorem 15. Assume that ρ satisfies (5) with some p ∈ (0, ∞] and q ∈ [1, ∞). Denote r = pq/(p + 1). Further assume that ℋ satisfies (9) with ι ∈ (0,2) and that fτ,ρ is uniquely defined. Then for any 0 < δ < 1, with confidence 1 − δ, one has

()
where
()
and the constants are independent of m and δ.

Proof. Combining (18), (19), and (40), with confidence 1 − δ, we have

()
where
()

Proof of Theorem 4. The assumption fτ,ρ ∈ ℋ implies that

ℰ_τ(f_ℋ) − ℰ_τ(fτ,ρ) = min_{f ∈ ℋ} ℰ_τ(f) − ℰ_τ(fτ,ρ) = 0.

Therefore, our desired result comes directly from Theorem 15.

4. Further Discussions

In this paper, we studied the ERM algorithm (3) for quantile regression and provided convergence and learning rates. We showed some essential differences between the ERM scheme and kernel based regularized schemes for quantile regression. We also pointed out the main difficulty in dealing with quantile regression: the lack of strong convexity of the pinball loss. To overcome this difficulty, a noise condition on ρ is imposed which enables us to obtain a variance-expectation bound similar to the one for least squares regression.

In our analysis we only consider f ∈ ℋ with ∥f∥C(X) ≤ 1. The case ∥f∥C(X) ≤ R for R ≥ 1 would be interesting for future work. The approximation error involving R can be estimated by means of interpolation spaces.

In our setting, the sample is drawn independently from the distribution ρ. However, in many practical problems the i.i.d. condition is rather demanding, so it would be interesting to investigate the ERM scheme for quantile regression with nonidentical distributions [13, 14] or with dependent sampling [15].

Acknowledgment

The work described in this paper is supported by the NSF of China under Grants 11001247 and 61170109.
