# Nonasymptotic bounds for vector quantization in Hilbert spaces

###### Abstract

Recent results in quantization theory show that the mean-squared expected distortion can reach a rate of convergence of order $1/n$, where $n$ is the sample size [see, e.g., IEEE Trans. Inform. Theory 60 (2014) 7279–7292 or Electron. J. Stat. 7 (2013) 1716–1746]. This rate is attained by the empirical risk minimizer strategy, provided the source distribution satisfies some regularity conditions. However, the dependency of the average distortion on other parameters is not known, and these results are only valid for distributions over finite-dimensional Euclidean spaces.

This paper deals with the general case of distributions over separable, possibly infinite dimensional, Hilbert spaces. A condition is proposed, which may be thought of as a margin condition [see, e.g., Ann. Statist. 27 (1999) 1808–1829], under which a nonasymptotic upper bound on the expected distortion rate of the empirically optimal quantizer is derived. The dependency of the distortion on other parameters of distributions is then discussed, in particular through a minimax lower bound.

Ann. Statist. Volume 43, Issue 2 (2015), 592–619. DOI: 10.1214/14-AOS1293.


Clément Levrard

AMS subject classification: 62H30. Keywords: quantization, localization, fast rates, margin conditions.

## 1 Introduction

Quantization, also called lossy data compression in information theory, is the problem of replacing a probability distribution with an efficient and compact representation, that is, a finite set of points. To be more precise, let $\mathcal{H}$ denote a separable Hilbert space, and let $P$ denote a probability distribution over $\mathcal{H}$. For a positive integer $k$, a so-called $k$-points quantizer is a map $Q$ from $\mathcal{H}$ to $\mathcal{H}$ whose image set is made of exactly $k$ points, that is, $|Q(\mathcal{H})| = k$. For such a quantizer, every image point $c_i$ is called a code point, and the vector composed of the code points is called a codebook, denoted by $\mathbf{c} = (c_1, \ldots, c_k)$. By considering the pre-images of its code points, a quantizer partitions the separable Hilbert space $\mathcal{H}$ into $k$ groups and assigns each group a representative. General references on the subject are to be found in GL00, Gersho91 and Linder02, among others.

Quantization theory was originally developed as a way to answer signal compression issues in the late 1940s (see, e.g., Gersho91). However, unsupervised classification is also within the scope of its applications. Isolating meaningful groups from a cloud of data is a topic of interest in many fields, from social science to biology. Classifying points into dissimilar groups of similar items becomes all the more relevant as the amount of accessible data grows. In many cases, data need to be preprocessed through a quantization algorithm in order to be exploited.

If the distribution $P$ has a finite second moment, the performance of a quantizer $Q$ is measured by the risk, or distortion,

\[ R(Q) = P \|x - Q(x)\|^2, \]

where $Pf$ means integration of the function $f$ with respect to $P$. The choice of the squared norm is convenient, since it takes advantage of the Hilbert space structure of $\mathcal{H}$. Nevertheless, it is worth pointing out that several authors deal with more general distortion functions. For further information on this topic, the interested reader is referred to GL00 or Fischer10.

In order to minimize the distortion introduced above, it is clear that only quantizers mapping every point to its nearest code point, that is, quantizers satisfying $\|x - Q(x)\| = \min_{j=1,\ldots,k} \|x - c_j\|$, are to be considered. Such quantizers are called nearest-neighbor quantizers. With a slight abuse of notation, $R(\mathbf{c})$ will denote the risk of the nearest-neighbor quantizer associated with a codebook $\mathbf{c}$.
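As an illustration, a nearest-neighbor quantizer can be sketched in a few lines of Python (a minimal sketch with finite-dimensional points represented as coordinate tuples, and a function name of our choosing; the paper's setting of a general separable Hilbert space is of course broader):

```python
def nearest_neighbor_quantizer(codebook):
    """Build the nearest-neighbor quantizer Q associated with a codebook.

    `codebook` is a list of k code points (coordinate tuples); Q maps any
    point to its closest code point, ties broken by the first match, so
    the image of Q is contained in the k code points."""
    def Q(x):
        return min(codebook,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(x, c)))
    return Q


Q = nearest_neighbor_quantizer([(0.0, 0.0), (1.0, 1.0)])
```

The pre-images of the two code points under `Q` are exactly the two half-planes separated by the bisecting line, which is the Voronoi partition discussed in Section 2.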

Provided that $P$ has a bounded support, there exist optimal codebooks minimizing the risk (see, e.g., Corollary 3.1 in Fischer10 or Theorem 1 in Graf07). The aim is to design a codebook $\hat{\mathbf{c}}_n$, according to an $n$-sample drawn from $P$, whose distortion is as close as possible to the optimal distortion $R^* = R(\mathbf{c}^*)$, where $\mathbf{c}^*$ denotes an optimal codebook.

To solve this problem, most approaches to date attempt to implement the principle of empirical risk minimization in the vector quantization context. Let $X_1, \ldots, X_n$ denote an independent and identically distributed sample with distribution $P$. According to this principle, good code points can be found by searching for ones that minimize the empirical distortion over the training data, defined by

\[ \hat{R}_n(\mathbf{c}) = \frac{1}{n} \sum_{i=1}^{n} \min_{j=1,\ldots,k} \|X_i - c_j\|^2. \]

If the training data represents the source well, then a minimizer $\hat{\mathbf{c}}_n$ of the empirical distortion will hopefully also perform near optimally on the real source, that is, $R(\hat{\mathbf{c}}_n) \approx R^*$. The problem of quantifying how good empirically designed codebooks are, compared to the truly optimal ones, has been extensively studied, as, for instance, in Linder02 in the finite-dimensional case.
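Concretely, the empirical distortion of a candidate codebook can be computed as follows (a Python sketch with points as coordinate tuples; the function name is ours). An empirical risk minimizer is then any codebook minimizing this quantity over all candidate codebooks:

```python
def empirical_distortion(sample, codebook):
    """Empirical distortion (1/n) * sum_i min_j ||X_i - c_j||^2 of a
    codebook over a finite training sample."""
    def sqdist(x, c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return sum(min(sqdist(x, c) for c in codebook)
               for x in sample) / len(sample)
```

In practice, exact minimization is combinatorially hard, and heuristics such as Lloyd's algorithm are used to approximate the minimizer.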

If $\mathcal{H} = \mathbb{R}^d$, for some $d \ge 1$, it has been proved in Linder94 that $\mathbb{E}\, R(\hat{\mathbf{c}}_n) - R^* = O(1/\sqrt{n})$, provided that $P$ has a bounded support. This result has been extended to the case where $\mathcal{H}$ is a separable Hilbert space in Biau08. However, this upper bound has been tightened whenever the source distribution satisfies additional assumptions, in the finite-dimensional case only.

When $\mathcal{H} = \mathbb{R}^d$, for the special case of finitely supported distributions, it is shown in Antos04 that $\mathbb{E}\, R(\hat{\mathbf{c}}_n) - R^* = O(1/n)$. Many more results are available in the case where $P$ is not assumed to have a finite support.

In fact, different sets of assumptions have been introduced in Antos04, Pollard82 or Levrard12 to derive fast convergence rates for the distortion in the finite-dimensional case. To be more precise, it is proved in Antos04 that, if $P$ has a support bounded by $M$ and satisfies a technical inequality, namely that for some fixed $C_1 > 0$ and every codebook $\mathbf{c}$, there is an optimal codebook $\mathbf{c}^*$ such that

\[ \|\mathbf{c} - \mathbf{c}^*\|^2 \le C_1 \bigl( R(\mathbf{c}) - R(\mathbf{c}^*) \bigr), \tag{1} \]

then $\mathbb{E}\, R(\hat{\mathbf{c}}_n) - R^* = O(1/n)$, where the constant involved depends on the natural parameters $k$ and $M$, and also on $P$, but only through the technical parameter $C_1$. However, in the continuous density and unique minimum case, it has been proved in Chou94, following the approach of Pollard82, that, provided the Hessian matrix of the risk is positive definite at the optimal codebook, $\sqrt{n}\,(\hat{\mathbf{c}}_n - \mathbf{c}^*)$ converges in distribution to a Gaussian law depending on that Hessian matrix. As proved in Levrard12, the technique used in Pollard82 can be slightly modified to derive a nonasymptotic bound of the type $\mathbb{E}\, R(\hat{\mathbf{c}}_n) - R^* \le C/n$ in this case, for some unknown constant $C$.

As shown in Levrard12, these different sets of assumptions turn out to be equivalent, in the continuous density case, to a technical condition similar to that used in Massart06 to derive fast rates of convergence in the statistical learning framework.

Thus, a question of interest is to know whether some margin-type conditions can be derived for the source distribution to satisfy the technical condition mentioned above, as has been done in the statistical learning framework in Tsybakov99. This paper provides a condition, which can clearly be thought of as a margin condition in the quantization framework, under which condition (1) is satisfied. The technical constant then has an explicit expression in terms of natural parameters of $P$ from the quantization point of view. This margin condition requires neither that $\mathcal{H}$ have a finite dimension nor that $P$ have a continuous density. In the finite-dimensional case, it does not demand either that there exist a unique optimal codebook, as required in Pollard82; hence it seems easier to check.

Moreover, a nonasymptotic bound of the type $\mathbb{E}\, R(\hat{\mathbf{c}}_n) - R^* \le C/n$ is derived for distributions satisfying this margin condition, where $C$ is explicitly given in terms of parameters of $P$. This bound remains valid if $\mathcal{H}$ is infinite-dimensional. This point may be of interest for curve quantization, as done in Fischer12.

In addition, a minimax lower bound is given, which allows one to discuss the influence of the different parameters mentioned in the upper bound. It is worth pointing out that this lower bound is valid over a set of probability distributions with uniformly bounded continuous densities and unique optimal codebooks, such that the minimum eigenvalues of the second derivative matrices of the distortion, at the optimal codebooks, are uniformly lower bounded. This result generalizes the previous minimax bound obtained in Theorem 4 of Antos05 for particular values of $k$ and $d$.

This paper is organized as follows. In Section 2, some notation and definitions are introduced, along with some basic results for quantization in a Hilbert space. The so-called margin condition is then introduced, and the main results are exposed in Section 3: first an oracle inequality on the loss is stated, along with a minimax result. Then it is shown that Gaussian mixtures are in the scope of the margin condition. Finally, the main results are proved in Section 4 and the proofs of several supporting lemmas are deferred to the supplementary material supple .

## 2 Notation and definitions

Throughout this paper, for $x$ in $\mathcal{H}$ and $r > 0$, $\mathcal{B}(x, r)$ and $\mathcal{B}^{o}(x, r)$ will denote, respectively, the closed and open ball with center $x$ and radius $r$. With a slight abuse of notation, $P$ is said to be $M$-bounded if its support is included in $\mathcal{B}(0, M)$. Furthermore, it will also be assumed that the support of $P$ contains more than $k$ points.

To frame quantization as an empirical risk minimization issue, the following contrast function $\gamma$ is introduced:

\[ \gamma(\mathbf{c}, x) = \min_{j=1,\ldots,k} \|x - c_j\|^2, \]

where $\mathbf{c} = (c_1, \ldots, c_k)$ denotes a codebook, that is, a $k$-tuple of elements of $\mathcal{H}$. The risk then takes the form $R(\mathbf{c}) = P \gamma(\mathbf{c}, \cdot)$, where we recall that $Pf$ denotes the integration of the function $f$ with respect to $P$. Similarly, the empirical risk can be defined as $\hat{R}_n(\mathbf{c}) = P_n \gamma(\mathbf{c}, \cdot)$, where $P_n$ denotes the empirical distribution associated with $X_1, \ldots, X_n$, in other words $P_n(A) = \frac{1}{n}\,\mathrm{card}\{\, i \mid X_i \in A \,\}$, for any measurable subset $A \subset \mathcal{H}$.

It is worth pointing out that, if $P$ is $M$-bounded for some $M > 0$, then such minimizers exist, for both the true and the empirical risk (see, e.g., Corollary 3.1 in Fischer10). In the sequel, the set of minimizers of the risk will be denoted by $\mathcal{M}$. Since every permutation of the labels of an optimal codebook provides an optimal codebook, $\mathcal{M}$ contains at least $k!$ elements. To address the issue of a large number of optimal codebooks, a subset $\bar{\mathcal{M}}$ of $\mathcal{M}$ is introduced, chosen so that it contains exactly one representative of every optimal codebook up to relabeling.

In other words, $\bar{\mathcal{M}}$ is a subset of the set of optimal codebooks which contains every element of $\mathcal{M}$, up to a permutation of the labels, and in which two different codebooks have different sets of code points. It may be noticed that $\bar{\mathcal{M}}$ is not uniquely defined. However, when $\bar{\mathcal{M}}$ is finite, all the possible choices of $\bar{\mathcal{M}}$ have the same cardinality.
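Identifying codebooks up to a permutation of the labels, as above, amounts to comparing their sets of code points. In Python (finite-dimensional code points as tuples; the helper name is ours):

```python
def same_up_to_relabeling(c1, c2):
    """True when two codebooks are equal up to a permutation of the labels,
    i.e. when they contain exactly the same code points."""
    return sorted(c1) == sorted(c2)
```

This is the equivalence under which the set of optimal codebooks is quotiented in the text.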

Let $\mathbf{c} = (c_1, c_2, \ldots)$ be a sequence of code points. A central role is played by the set of points which are closer to $c_i$ than to any other code point. To be more precise, the Voronoi cell, or quantization cell, associated with $c_i$ is the closed set defined by

\[ V_i(\mathbf{c}) = \bigl\{ x \in \mathcal{H} \mid \forall j \ne i,\ \|x - c_i\| \le \|x - c_j\| \bigr\}. \]

Note that the cells $V_i(\mathbf{c})$ do not form a partition of $\mathcal{H}$, since the intersection $V_i(\mathbf{c}) \cap V_j(\mathbf{c})$ may be nonempty for $i \ne j$. To address this issue, a Voronoi partition associated with $\mathbf{c}$ is defined as a sequence $(W_i(\mathbf{c}))_i$ of subsets which forms a partition of $\mathcal{H}$, and such that, for every $i$,

\[ \overline{W_i(\mathbf{c})} = V_i(\mathbf{c}), \]

where $\overline{A}$ denotes the closure of the subset $A$. The open Voronoi cell is defined the same way by

\[ \mathring{V}_i(\mathbf{c}) = \bigl\{ x \in \mathcal{H} \mid \forall j \ne i,\ \|x - c_i\| < \|x - c_j\| \bigr\}. \]

Given a Voronoi partition $(W_i(\mathbf{c}))_i$, the following inclusion holds, for every $i$,

\[ \mathring{V}_i(\mathbf{c}) \subset W_i(\mathbf{c}) \subset V_i(\mathbf{c}), \]

and the risk takes the form

\[ R(\mathbf{c}) = \sum_{i} P\, \|x - c_i\|^2\, \mathbf{1}_{W_i(\mathbf{c})}(x), \]

where $\mathbf{1}_A$ denotes the indicator function associated with a subset $A$. In the case where the $W_i$'s are fixed subsets such that $P(W_i) > 0$, for every $i$, it is clear that

\[ \sum_{i} P\, \|x - c_i\|^2\, \mathbf{1}_{W_i}(x) \ \ge\ \sum_{i} P\, \bigl\|x - \mathbb{E}[X \mid X \in W_i]\bigr\|^2\, \mathbf{1}_{W_i}(x), \]

with equality only if $c_i = \mathbb{E}[X \mid X \in W_i]$, where $\mathbb{E}[X \mid X \in W_i]$ denotes the conditional expectation of $X$ over the subset $W_i$, that is,

\[ \mathbb{E}[X \mid X \in W_i] = \frac{P\, x\, \mathbf{1}_{W_i}(x)}{P(W_i)}. \]

Moreover, it is proved in Proposition 1 of Graf07 that, for every Voronoi partition $(W_i(\mathbf{c}^*))_i$ associated with an optimal codebook $\mathbf{c}^*$, and every $i$, $P(W_i(\mathbf{c}^*)) > 0$. Consequently, any optimal codebook satisfies the so-called centroid condition (see, e.g., Section 6.2 of Gersho91), that is,

\[ c_i^* = \mathbb{E}\bigl[ X \mid X \in W_i(\mathbf{c}^*) \bigr]. \]

As a remark, the centroid condition ensures that, for every optimal codebook $\mathbf{c}^*$ and every $i \ne j$, the common boundary of two optimal cells carries no mass, that is, $P\bigl( V_i(\mathbf{c}^*) \cap V_j(\mathbf{c}^*) \bigr) = 0$.

A proof of this statement can be found in Proposition 1 of Graf07. According to this remark, it is clear that, for every optimal Voronoi partition $(W_i(\mathbf{c}^*))_i$,

\[ P\bigl( W_i(\mathbf{c}^*) \bigr) = P\bigl( V_i(\mathbf{c}^*) \bigr) = P\bigl( \mathring{V}_i(\mathbf{c}^*) \bigr). \tag{2} \]
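Empirically, the centroid condition is the fixed-point equation targeted by Lloyd-type iterations: assign each sample point to its nearest code point (an empirical Voronoi partition), then replace each code point by the mean of its cell. A minimal Python sketch, with points as coordinate tuples and a function name of our choosing (empty cells simply keep their code point):

```python
def lloyd_step(points, codebook):
    """One Lloyd iteration on a finite sample: build empirical Voronoi
    cells, then move each code point to the mean of its cell (the
    empirical counterpart of the centroid condition)."""
    cells = [[] for _ in codebook]
    for x in points:
        j = min(range(len(codebook)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(x, codebook[j])))
        cells[j].append(x)
    new_codebook = []
    for c, cell in zip(codebook, cells):
        if not cell:
            new_codebook.append(c)  # empty cell: keep the old code point
        else:
            d = len(cell[0])
            new_codebook.append(tuple(sum(x[i] for x in cell) / len(cell)
                                      for i in range(d)))
    return new_codebook
```

One such step never increases the empirical distortion, and, up to tie-breaking, empirical risk minimizers are fixed points of this map.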

The following quantities are of importance in the bounds exposed in Section 3.1:

\[ B = \inf_{\mathbf{c}^* \in \mathcal{M}}\ \min_{i \ne j}\ \|c_i^* - c_j^*\|, \qquad p_{\min} = \inf_{\mathbf{c}^* \in \mathcal{M}}\ \min_{i}\ P\bigl( V_i(\mathbf{c}^*) \bigr), \]

that is, the smallest distance between two code points of an optimal codebook, and the smallest mass of an optimal quantization cell. It is worth noting here that, whenever $P$ is $M$-bounded, $B \le 2M$ and $p_{\min} \le 1/k$. If $\bar{\mathcal{M}}$ is finite, it is clear that $B$ and $p_{\min}$ are positive. The following proposition ensures that this statement remains true when $\bar{\mathcal{M}}$ is not assumed to be finite.

###### Proposition 2.1

Suppose that $P$ is $M$-bounded. Then both quantities introduced above are positive.

A proof of Proposition 2.1 is given in Section 4. The role of the boundaries between optimal Voronoi cells may be compared to the role played by the critical value $1/2$ for the regression function in the statistical learning framework (for a comprehensive explanation of this statistical learning point of view, see, e.g., Massart06). To draw this comparison, the following set is introduced, for any optimal codebook $\mathbf{c}^*$:

\[ N(\mathbf{c}^*) = \bigcup_{i \ne j} \bigl( V_i(\mathbf{c}^*) \cap V_j(\mathbf{c}^*) \bigr). \]

The region $N(\mathbf{c}^*)$ is of importance when considering the conditions under which the empirical risk minimization strategy for quantization achieves faster rates of convergence, as exposed in Levrard12. However, to completely translate the margin conditions given in Tsybakov99 to the quantization framework, the neighborhood of this region has to be introduced. For this purpose, the $t$-neighborhood of the region $N(\mathbf{c}^*)$ is defined by $N_t(\mathbf{c}^*) = \{ x \in \mathcal{H} \mid d(x, N(\mathbf{c}^*)) \le t \}$. The quantity of interest is the maximal weight of these $t$-neighborhoods over the set of optimal codebooks, defined by

\[ p(t) = \sup_{\mathbf{c}^* \in \mathcal{M}} P\bigl( N_t(\mathbf{c}^*) \bigr). \]

It is straightforward that $p$ is nondecreasing. Intuitively, if $p(t)$ is small enough, then the source distribution is concentrated around the code points of its optimal codebooks, and $P$ may be thought of as a slight modification of a probability distribution with finite support made of the code points of an optimal codebook $\mathbf{c}^*$. To be more precise, let us introduce the following key assumption.
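As a crude numerical check, the mass of the $t$-neighborhood of the Voronoi boundaries of a single given codebook can be estimated by counting sample points close to a bisecting hyperplane; the distance from $x$ to the hyperplane bisecting $c_i$ and $c_j$ is $\bigl| \|x - c_i\|^2 - \|x - c_j\|^2 \bigr| / (2\|c_i - c_j\|)$. A Python sketch (only a proxy for the weight function, with finite-dimensional points as tuples; the function name is ours):

```python
def margin_weight_estimate(points, codebook, t):
    """Fraction of `points` lying within distance t of some bisecting
    hyperplane of two code points of `codebook` (an empirical proxy for
    the weight of the t-neighborhood of the Voronoi boundaries)."""
    def sqdist(x, c):
        return sum((a - b) ** 2 for a, b in zip(x, c))

    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    k = len(codebook)
    hits = 0
    for x in points:
        near = False
        for i in range(k):
            for j in range(i + 1, k):
                gap = abs(sqdist(x, codebook[i]) - sqdist(x, codebook[j]))
                if gap <= 2 * t * dist(codebook[i], codebook[j]):
                    near = True
        if near:
            hits += 1
    return hits / len(points)
```

For a distribution well concentrated around the code points, this fraction stays small for small $t$, which is the intuition behind the margin condition below.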

[(Margin condition)] A distribution $P$ satisfies a margin condition with radius $r_0 > 0$ if and only if: {longlist}[(ii)]

$P$ is $M$-bounded,

for all $0 \le t \le r_0$,

(3)

Note that, since , , and , (3) implies that . It is worth pointing out that Definition 2.1 does not require to have a density or a unique optimal codebook, up to relabeling, contrary to the conditions introduced in Pollard82 .

Moreover, the margin condition introduced here only requires a local control of the weight function $p$. The parameter $r_0$ may be thought of as a gap size around every optimal codebook, as illustrated by the following example:

Assume that there exists $r > 0$ such that $p(t) = 0$ if $t \le r$ (e.g., if $P$ is supported on $k$ points). Then $P$ satisfies (3), with radius $r$.

Note also that the condition mentioned in Tsybakov99 requires a control of the weight of the neighborhood of the critical value $1/2$ by a polynomial function. In the quantization framework, the special role played by the exponent $1$ leads us to consider only linear controls of the weight function. This point is explained by the following example:

Assume that is -bounded, and that there exists and such that . Then satisfies (3), with

In the case where $P$ has a density and $\mathcal{H} = \mathbb{R}^d$, condition (3) may be considered as a generalization of the condition stated in Theorem 3.2 of Levrard12, which requires the density of the distribution to be small enough over the neighborhood of every optimal Voronoi boundary. In fact, provided that $P$ has a continuous density, a uniform bound on the density over every such region provides a local control of $p$ with a polynomial function of degree 1. This idea is developed in the following example:

[(Continuous densities, $\mathcal{H} = \mathbb{R}^d$)] Assume that $P$ has a continuous density $f$ and is $M$-bounded, and that the set of optimal codebooks, up to relabeling, is finite. In this case, for every optimal codebook $\mathbf{c}^*$, the map $t \mapsto P(N_t(\mathbf{c}^*))$ is differentiable at $t = 0$, with derivative

\[ \frac{d}{dt}\, P\bigl( N_t(\mathbf{c}^*) \bigr)\Big|_{t=0} \ =\ 2 \int_{N(\mathbf{c}^*)} f(x)\, d\lambda_{d-1}(x), \]

where $\lambda_{d-1}$ denotes the $(d-1)$-dimensional Lebesgue measure, considered over the $(d-1)$-dimensional set $N(\mathbf{c}^*)$. Therefore, if $f$ satisfies

(4)

for every optimal codebook, then there exists $r_0 > 0$ such that $P$ satisfies (3). It can easily be deduced from (4) that a uniform bound on the density located at the Voronoi boundaries can provide a sufficient condition for a distribution to satisfy a margin condition. Such a result has to be compared to Theorem 3.2 of Levrard12, where it was required that, for every optimal codebook,

where denotes the Gamma function, and denotes the restriction of to the set . Note however that the uniform bound mentioned above ensures that the Hessian matrices of the risk function , at optimal codebooks, are positive definite. This does not necessarily imply that (4) is satisfied.

Another interesting parameter of $P$, from the quantization viewpoint, is the following separation factor. It quantifies the difference in distortion between optimal codebooks and local minimizers of the risk.

Denote by $\tilde{\mathcal{M}}$ the set of local minimizers of the distortion map $\mathbf{c} \mapsto R(\mathbf{c})$. Let $\varepsilon > 0$; then $P$ is said to be $\varepsilon$-separated if

\[ \inf_{\mathbf{c} \in \tilde{\mathcal{M}} \setminus \mathcal{M}} R(\mathbf{c}) - R^* \ \ge\ \varepsilon. \tag{5} \]

It may be noticed that local minimizers of the risk function satisfy the centroid condition or have empty cells. Whenever $\mathcal{H} = \mathbb{R}^d$ and $P$ has a density, it can be proved that the set of local minimizers of the risk coincides with the set of codebooks satisfying the centroid condition, also called stationary points (see, e.g., Lemma A of Pollard82). However, this result cannot be extended to noncontinuous distributions, as proved in Example 4.11 of GL00.

The main results of this paper are based on the following proposition, which connects the margin condition stated in Definition 2.1 to the previous conditions in Pollard82 or Antos04.

###### Proposition 2.2

Assume that $P$ satisfies a margin condition with radius $r_0$. Then the following properties hold. {longlist}[(iii)]

For every optimal codebook $\mathbf{c}^*$ and every codebook $\mathbf{c}$, if $\|\mathbf{c} - \mathbf{c}^*\| \le r_0$, then

(6)

The set of optimal codebooks, up to relabeling, is finite.

There exists $\varepsilon > 0$ such that $P$ is $\varepsilon$-separated.

For all in ,

(7)

where . , and

As a consequence, (7) ensures that (1) is satisfied with a known constant, which is the condition required in Theorem 2 of Antos04. Moreover, if $\mathcal{H} = \mathbb{R}^d$, $P$ has a unique optimal codebook up to relabeling and a continuous density, then (6) ensures that the second derivative matrix of the risk at the optimal codebook is positive definite, with an explicit lower bound on its minimum eigenvalue. This is the condition required in Chou94 for $\sqrt{n}\,(\hat{\mathbf{c}}_n - \mathbf{c}^*)$ to converge in distribution.

It is worth pointing out that the dependency of the constants on the different parameters of $P$ is known. This fact allows us to roughly discuss how the bound should scale with the parameters $k$, $M$ and $d$, in the finite-dimensional case. According to Theorem 6.2 of GL00, the optimal distortion scales like $M^2 k^{-2/d}$ when $P$ has a density. Furthermore, it is likely that $p_{\min}$ scales like $1/k$ (see, e.g., the distributions exposed in Section 3.2). Combining these scalings leads to

At first sight, the resulting bound does not scale with $M$, and seems to decrease with the dimension, at least in the finite-dimensional case. However, there is no result on how these quantities should scale in the infinite-dimensional case. Proposition 2.2 allows us to derive explicit upper bounds on the excess risk in the following section.

## 3 Results

### 3.1 Risk bound

The main result of this paper is the following.

###### Theorem 3.1

Assume that , and that satisfies a margin condition with radius . Let be defined as

If is an empirical risk minimizer, then, with probability larger than ,

(8)

where the constant involved is absolute.

This result is in line with Theorem 3.1 in Levrard12 or Theorem 1 in Chichi13, concerning the dependency of the loss on the sample size $n$. The main advance lies in the detailed dependency of the bound on the other parameters of $P$. This provides a nonasymptotic bound for the excess risk.

To be more precise, Theorem 3.1 in Levrard12 states that

in the finite-dimensional case, for some unknown constant. In fact, this result relies on the application of Dudley's entropy bound. This technique was already the main argument in Pollard82 or Chichi13, and makes use of covering numbers of the $d$-dimensional Euclidean unit ball. Consequently, the resulting constant strongly depends on the dimension of the underlying Euclidean space in these previous results. As suggested in Biau08 or Canas12, the use of metric entropy techniques to derive bounds on the convergence rate of the distortion may be suboptimal, as it does not take advantage of the Hilbert space structure of the squared distance based quantization. This issue can be addressed by using a technique based on comparison with Gaussian vectors, as done in Canas12. Theorem 3.1 is derived that way, providing a dimension-free upper bound which is valid over separable Hilbert spaces.

It may be noticed that most results providing slow convergence rates, such as Theorem 2.1 in Biau08 or Corollary 1 in Linder94, give bounds on the distortion which do not depend on the number of optimal codebooks. Theorem 3.1 confirms that this number is also likely to play a minor role in the convergence rate of the distortion in the fast rate case.

Another interesting point is that Theorem 3.1 does not require that $P$ have a density or be supported on finitely many points, contrary to the requirements of the previous bounds in Pollard82, Antos04 or Chichi13, which achieved the optimal rate of $1/n$. To the best of our knowledge, the most general previous result is to be found in Theorem 2 of Antos04, which derives a fast convergence rate without the requirement that $P$ have a density. It may also be noted that Theorem 3.1 does not require that the set of optimal codebooks contain a single element, contrary to the results stated in Pollard82. According to Proposition 2.2, only (3) has to be proved for $P$ to satisfy the assumptions of Theorem 3.1. Since proving the uniqueness of the optimal codebook may be difficult, even for simple distributions, it seems easier to check the assumptions of Theorem 3.1 than the assumptions required in Pollard82. An illustration of this point is given in Section 3.3.

As will be shown in Proposition 3.1, the dependency on the separation factor turns out to be sharp in some cases. In fact, tuning this separation factor is at the core of the proofs of the minimax results in Bartlett98 or Antos05.

### 3.2 Minimax lower bound

This subsection is devoted to obtaining a minimax lower bound on the excess risk over a set of distributions with continuous densities and unique optimal codebooks, satisfying a margin condition in which some parameters are fixed or uniformly lower-bounded. It has already been proved in Theorem 4 of Antos05 that the minimax distortion over distributions with uniformly bounded continuous densities, unique optimal codebooks (up to relabeling), and such that the minimum eigenvalues of the second derivative matrices at the optimal codebooks are uniformly lower-bounded, is of order $1/\sqrt{n}$, in the special case considered there. Extending the distributions used in Theorem 4 of Antos05, Proposition 3.1 below generalizes this result to arbitrary dimension $d$, and provides a lower bound over a set of distributions satisfying a uniform margin condition.

Throughout this subsection, only the case is considered, and will denote an empirically designed codebook, that is a map from to . Let be an integer such that , and . For simplicity, is assumed to be divisible by . Let us introduce the following quantities:

To focus on the dependency on the separation factor , the quantities involved in Definition 2.1 are fixed as

(9)

Denote by the set of probability distributions which are -separated, have continuous densities and unique optimal codebooks, and which satisfy a margin condition with parameters defined in (9). The minimax result is the following.

###### Proposition 3.1

Assume that and . Then, for any empirically designed codebook,

where is an absolute constant, and

Proposition 3.1 is in line with the previous minimax lower bounds obtained in Theorem 1 of Bartlett98 or Theorem 4 of Antos05 . Proposition 3.1, as well as these two previous results, emphasizes the fact that fixing the parameters of the margin condition uniformly over a class of distributions does not guarantee an optimal uniform convergence rate. This shows that a uniform separation assumption is needed to derive a sharp uniform convergence rate over a set of distributions.

Furthermore, as mentioned above, Proposition 3.1 also confirms that the minimax distortion rate over the set of distributions with continuous densities, unique optimal codebooks, and such that the minimum eigenvalues of the Hessian matrices are uniformly lower bounded, remains of order $1/\sqrt{n}$ in this case.

This minimax lower bound has to be compared to the upper risk bound obtained in Theorem 3.1 for the empirical risk minimizer $\hat{\mathbf{c}}_n$, over the same set of distributions. To be more precise, Theorem 3.1 ensures that, provided that $n$ is large enough,

where the constant depends only on the fixed parameters of the class. In other words, the dependency of the upper bound stated in Theorem 3.1 on the separation factor turns out to be sharp in this regime. Unfortunately, Proposition 3.1 cannot easily be extended to larger separation factors. Consequently, an open question is whether the upper bounds stated in Theorem 3.1 remain accurate with respect to the separation factor in this case.

### 3.3 Quasi-Gaussian mixture example

The aim of this subsection is to illustrate the results exposed in Section 3 with Gaussian mixtures in dimension $d$. The Gaussian mixture model is a typical and well-defined clustering example.

In general, a Gaussian mixture distribution is defined by its density

\[ \tilde{f}(x) = \sum_{i=1}^{\tilde{k}} \frac{\theta_i}{(2\pi)^{d/2} \sqrt{|\Sigma_i|}} \exp\Bigl( -\tfrac{1}{2} (x - m_i)^{T} \Sigma_i^{-1} (x - m_i) \Bigr), \]

where $\tilde{k}$ denotes the number of components of the mixture, and the $\theta_i$'s denote the weights of the mixture, which satisfy $\sum_{i=1}^{\tilde{k}} \theta_i = 1$. Moreover, the $m_i$'s denote the means of the mixture, so that $m_i \in \mathbb{R}^d$, and the $\Sigma_i$'s are the covariance matrices of the components.

We restrict ourselves to the case where the number of components is known and matches the size $k$ of the codebooks. To ease the calculations, we make the additional assumption that every component has the same diagonal covariance matrix $\sigma^2 I_d$. Note that a result similar to Proposition 3.2 can be derived for distributions with different covariance matrices $\Sigma_i$, at the cost of more computation.

Since the support of a Gaussian random variable is not bounded, we define the "quasi-Gaussian" mixture model as follows, truncating each Gaussian component. Let the density $f$ of the distribution $P$ be defined by

\[ f(x) = \sum_{i=1}^{k} \frac{\theta_i}{N_i\, (2\pi\sigma^2)^{d/2}} \exp\Bigl( -\frac{\|x - m_i\|^2}{2\sigma^2} \Bigr)\, \mathbf{1}_{\mathcal{B}(0,M)}(x), \]

where $N_i$ denotes a normalization constant for each Gaussian component.

Roughly, the model proposed above will be close to the Gaussian mixture model when the mass removed by the truncation is small. Denote by $\tilde{B}$ the smallest possible distance between two different means of the mixture. To avoid boundary issues, we assume that all the means lie far enough from the boundary of the ball $\mathcal{B}(0, M)$.

Note that this assumption can easily be satisfied if $M$ is chosen large enough. For such a model, Proposition 3.2 offers a sufficient condition for $P$ to satisfy a margin condition.
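For simulation purposes, one can draw from such a truncated model by rejection sampling (a Python sketch; the function name and argument conventions are ours, and each component is truncated to the ball $\mathcal{B}(0, M)$ with its normalization handled implicitly by the rejection step):

```python
import random


def sample_quasi_gaussian_mixture(weights, means, sigma, M, rng=random):
    """Draw one point from a mixture of spherical Gaussians N(m_i, sigma^2 I)
    in which each component is truncated to the closed ball B(0, M).

    Pick a component with probability theta_i, then redraw from that
    Gaussian until the point falls inside B(0, M): this samples the
    component's truncated (renormalized) density exactly.
    """
    d = len(means[0])
    i = rng.choices(range(len(weights)), weights=weights)[0]
    while True:
        x = tuple(rng.gauss(means[i][j], sigma) for j in range(d))
        if sum(v * v for v in x) <= M * M:
            return x
```

When $\sigma$ is small compared to the distance from the means to the boundary of the ball, the rejection step is almost never triggered, which reflects the fact that the model is then close to the untruncated Gaussian mixture.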

###### Proposition 3.2

Let , and . Assume that

(10)

Then satisfies a margin condition with radius .

It is worth mentioning that $P$ has a continuous density, and that, according to (i) in Proposition 2.2, the second derivative matrices of the risk function, at the optimal codebooks, must be positive definite. Thus, $P$ might be in the scope of the result in Pollard82. However, there is no elementary proof of the uniqueness of the optimal codebook, whereas the finiteness of the set of optimal codebooks, up to relabeling, is guaranteed by Proposition 2.2. This shows that the margin condition given in Definition 2.1 may be easier to check than the condition presented in Pollard82. The condition (10) can be decomposed as follows. If

then every optimal codebook must be close to the vector of means of the mixture $(m_1, \ldots, m_k)$. Therefore, it is possible to approximately locate the optimal code points, and to derive an upper bound on the weight function defined above Definition 2.1. This leads to the second term of the maximum in (10).

This condition can be interpreted as a condition on the polarization of the mixture. A favorable case for vector quantization seems to be when the poles of the mixture are well separated which, for Gaussian mixtures, amounts to requiring that $\sigma$ be small compared to the minimum distance between two distinct means. Proposition 3.2 details how small $\sigma$ has to be, compared to this minimum distance, in order to satisfy the requirements of Definition 2.1.

It may be noticed that Proposition 3.2 offers almost the same condition as Proposition 4.2 in Levrard12. In fact, since Gaussian mixture distributions have a continuous density, making use of (4) in Example 2.1 ensures that the margin condition for Gaussian mixtures is equivalent to a bound on the density over the neighborhoods of the optimal Voronoi boundaries.

It is important to note that this result is valid only when $k$ is known and matches exactly the number of components of the mixture. When the number of code points differs from the number of components of the mixture, we have no general idea of where the optimal code points are located.

Moreover, suppose that there exists only one optimal codebook, up to relabeling, and that we are able to locate it. As stated in Proposition 2.2, the key quantity is in fact the margin radius. In the case where the number of code points differs from the number of components, there is no simple relation between this quantity and the parameters of the mixture. Consequently, a condition like the one in Proposition 3.2 could not involve the natural parameters of the mixture.

## 4 Proofs

### 4.1 Proof of Proposition 2.1

The lower bound follows from a compactness argument for the weak topology on $\mathcal{H}$, exposed in the following lemma. For the sake of completeness, it is recalled that a sequence $(x_n)_{n \ge 1}$ of elements of $\mathcal{H}$ weakly converges to $x$, denoted by $x_n \rightharpoonup x$, if, for every continuous linear real-valued function $f$, $f(x_n) \to f(x)$. Moreover, a function $g$ from $\mathcal{H}$ to $\mathbb{R}$ is weakly lower semi-continuous if, for all $\alpha \in \mathbb{R}$, the level sets $\{ x \mid g(x) \le \alpha \}$ are closed for the weak topology.

###### Lemma 4.1

Let be a separable Hilbert space, and assume that is -bounded. Then: {longlist}[(iii)]

is weakly compact, for every ,

is weakly lower semi-continuous,

is weakly compact.

A more general statement of Lemma 4.1 can be found in Section 5.2 of Fischer10 , for quantization with Bregman divergences. However, since the proof is much simpler in the special case of the squared-norm based quantization in a Hilbert space, it is briefly recalled in Section A.1 (supplementary material supple ).

Let be a sequence of optimal codebooks such that , as . Then, according to Lemma 4.1, there exists a subsequence and an optimal codebook , such that , for the weak topology. Then it is clear that .

Since is weakly lower semi-continuous on (see, e.g., Proposition 3.13 in Brezis11 ), it follows that

Noting that is an optimal codebook, and the support of has more than points, Proposition 1 of Graf07 ensures that .

The uniform lower bound on follows from the argument that, since the support of contains more than points, then , where denotes the minimum distortion achievable for -points quantizers (see, e.g., Proposition 1 in Graf07 ). Denote by the quantity , and suppose that . Then there exists an optimal codebook of size , , such that . Let denote an optimal codebook of size , and define the following -points quantizer:

Since , for , Q is defined almost surely. Then it is easy to see that

Hence the contradiction, which proves the announced positivity.

### 4.2 Proof of Proposition 2.2

The proof of (i) in Proposition 2.2 is based on the following lemma.

###### Lemma 4.2

Let and be in , and , for . Then

(11)

(12)

The two statements of Lemma 4.2 emphasize the fact that, provided that two codebooks are close enough to each other, the areas on which the labels induced by the two codebooks may differ should be close to the boundaries of the Voronoi diagrams. This idea is mentioned in the proof of Corollary 1 in Antos04. Nevertheless, we provide a simpler proof in Section A.2 (supplementary material supple).

Equipped with Lemma 4.2, we are in a position to prove (6). Let $\mathbf{c}^*$ be an optimal codebook, and let $(W_i(\mathbf{c}^*))_i$ be a Voronoi partition associated with $\mathbf{c}^*$, as defined in Section 2. Let $\mathbf{c}$ be a codebook; then the excess risk can be decomposed as follows:

Since, for all $i$, $c_i^* = \mathbb{E}[X \mid X \in W_i(\mathbf{c}^*)]$ (centroid condition), we may write

from which we deduce that

which leads to

Since , . Thus, it remains to bound from above

Noticing that

and using Lemma 4.2, we get

Hence,

Consequently, if $P$ satisfies (3) and $\|\mathbf{c} - \mathbf{c}^*\| \le r_0$, it follows that

which proves (i).

Suppose that is not finite. According to Lemma 4.1, there exists a sequence of optimal codebooks and an optimal codebook such that for all , and . Assume that there exists in such that . Then , for every in . Let be in , and , then