Volume 48, Issue 2, e70002
ORIGINAL ARTICLE
Open Access

Sparse graph signals – uncertainty principles and recovery

Tarek Emmrich
Osnabrück University, Institute of Mathematics, Osnabrück, Germany

Martina Juhnke
Osnabrück University, Institute of Mathematics, Osnabrück, Germany

Stefan Kunis (Corresponding Author)
Osnabrück University, Institute of Mathematics, Osnabrück, Germany
Correspondence: Stefan Kunis, Osnabrück University, Institute of Mathematics, Osnabrück, Germany.
Email: [email protected]

First published: 07 April 2025

ABSTRACT

We study signals that are sparse either on the vertices of a graph or in the graph spectral domain. Recent results on the algebraic properties of random integer matrices as well as on the boundedness of eigenvectors of random matrices imply two types of support size uncertainty principles for graph signals. Indeed, the algebraic properties imply uniqueness results if a sparse signal is sampled at any set of minimal size in the other domain. The boundedness properties of eigenvectors imply stable reconstruction by basis pursuit if a sparse signal is sampled at a slightly larger randomly selected set in the other domain.

1 INTRODUCTION

In recent years, both graph signal processing [21, 24, 31] and sparse recovery by random subsampling (or, more specifically, compressive sensing) [14] have been widely studied. Graph signal processing is motivated by applications where real-life data comes with an additional graph structure. Sparse recovery has received a lot of attention over the last two decades, with several theoretical results and algorithms available. It is a natural question whether these methods can be strengthened under the additional assumption that the signals carry some graph structure [5, 7, 22, 26, 37, 38]. In this manuscript, we consider signals on graphs as maps from the vertices to the complex numbers. The graph Fourier transform is then defined by the dot product of this signal with the eigenvectors of a graph-related matrix, for example, its adjacency or (normalized) Laplacian matrix.

We study two closely related instances of recovery problems: firstly, the reconstruction of a frequency sparse signal from few signal samples at the vertices of the graph and secondly, the reconstruction of a vertex sparse signal from few Fourier coefficients. For concreteness, both cases are illustrated in Example 1.1.

Example 1.1. In Figures 1 and 2, two graph signals f = U f̂ $$ f=U\hat{f} $$ and their unitary Fourier transforms f̂ $$ \hat{f} $$ , for the eigenvector matrix U $$ U $$ of the symmetric adjacency matrix, are depicted. The size of each vertex indicates the absolute value of the graph signal, while red and blue indicate a positive and negative sign, respectively. The Fourier transform f̂ $$ \hat{f} $$ is plotted against the eigenvalues of the adjacency matrix. The first signal has a 3-sparse Fourier transform f̂ $$ \hat{f} $$ and we ask how many samples of f $$ f $$ are required for recovery of f̂ $$ \hat{f} $$ , see (2.1) and (2.2). The second signal f $$ f $$ is 4-sparse in the vertex domain and we ask how many coefficients of f̂ $$ \hat{f} $$ are required for full recovery of f $$ f $$ , see (2.3) and (2.4).

Figure 1: A 3-frequency sparse signal f = U f̂ $$ f=U\hat{f} $$ on a graph (left) and its synthesis Fourier coefficients f̂ $$ \hat{f} $$ on the eigenvalues (right).
Figure 2: A 4-vertex sparse signal f $$ f $$ (left) and its analysis Fourier coefficients ĝ = U∗f $$ \hat{g}={U}^{\ast }f $$ on the eigenvalues (right).

The analysis of these sparse recovery problems is closely related to two classical uncertainty principles: the first is equivalent to the property that the eigenvector matrix of a graph-related matrix has no vanishing minors; the second applies if this eigenvector matrix is unitary and motivates considering the maximal absolute value over all eigenfunctions as a measure of localization. Both uncertainty principles are well-known for the ordinary discrete Fourier transform [6, 35] and marked the onset of compressive sensing some years ago. A recent survey connecting sparse recovery and uncertainty principles can be found in [28].

In this manuscript, we consider the generalization of the algebraic uncertainty principle [35] from the circle graph of prime length to certain classes of random graphs. Recent results for the Galois group of random integer matrices [10, 13] imply the algebraic uncertainty principle [11] and thus give rise to uniqueness results for any sampling set of minimal size. Moreover, we combine compressive sensing results for randomly subsampled bounded orthonormal systems [14, Ch. 12] with recent findings on the delocalization of eigenfunctions for certain random graphs [1, 16, 18, 27]. Section 2 sets up all necessary notation and formulates the two main results in Theorems 2.1 and 2.3. Sections 3 and 4 then prove the algebraic and the compressive sensing results, respectively.

2 SETUP AND MAIN RESULTS

For a finite set V $$ V $$ and E V 2 $$ E\subseteq \left(\genfrac{}{}{0ex}{}{V}{2}\right) $$ , where V s $$ \left(\genfrac{}{}{0ex}{}{V}{s}\right) $$ denotes the set of all subsets of V $$ V $$ of size s $$ s $$ , we call G = ( V , E ) $$ G=\left(V,E\right) $$ an undirected graph. We often omit ‘undirected’ and assume V = [ n ] = { 1 , , n } $$ V=\left[n\right]=\left\{1,\dots, n\right\} $$ . For E V 2 $$ E\subseteq {V}^2 $$ , we call G = ( V , E ) $$ G=\left(V,E\right) $$ a directed graph. The elements of V $$ V $$ are called the vertices. We say two vertices v , w $$ v,w $$ are connected, if { v , w } E $$ \left\{v,w\right\}\in E $$ or ( v , w ) E $$ \left(v,w\right)\in E $$ , and write v w $$ v\sim w $$ . For two vertices v , w $$ v,w $$ , the distance dist ( v , w ) $$ \mathrm{dist}\left(v,w\right) $$ is the smallest integer r $$ r $$ such that there exist vertices v = v 0 , v 1 , , v r = w $$ v={v}_0,{v}_1,\dots, {v}_r=w $$ with { v k , v k + 1 } E $$ \left\{{v}_k,{v}_{k+1}\right\}\in E $$ for 0 k r 1 $$ 0\le k\le r-1 $$ . The adjacency matrix A { 0 , 1 } n × n $$ A\in {\left\{0,1\right\}}^{n\times n} $$ of G $$ G $$ is defined by
A v , w = 1 , if v w , 0 , otherwise . $$ {A}_{v,w}=\left\{\begin{array}{ll}1,\kern1em & \mathrm{if}\kern0.3em v\sim w,\\ {}0,\kern1em & \mathrm{otherwise}.\end{array}\right. $$
The adjacency matrix of an undirected graph is symmetric and we define the combinatorial Laplacian matrix L = D A n × n $$ L=D-A\in {\mathbb{R}}^{n\times n} $$ , where D $$ D $$ is the diagonal matrix, whose diagonal entries D v , v = deg ( v ) = w v A v , w $$ {D}_{v,v}=\deg (v)={\sum}_{w\ne v}{A}_{v,w} $$ equal the degrees for all v V $$ v\in V $$ .
For undirected graphs, the Laplacian matrix and the adjacency matrix are symmetric and obey the eigendecomposition
L = U L Λ L U L , A = U A Λ A U A , $$ L={U}_L{\Lambda}_L{U}_L^{\ast },\kern2em A={U}_A{\Lambda}_A{U}_A^{\ast }, $$
where both U L $$ {U}_L $$ , U A $$ {U}_A $$ have real orthogonal columns and Λ L $$ {\Lambda}_L $$ , Λ A $$ {\Lambda}_A $$ are real diagonal matrices. In case of a directed graph, we assume that the adjacency matrix A $$ A $$ is diagonalizable as
A = U A Λ A U A 1 , $$ A={U}_A{\Lambda}_A{U}_A^{-1}, $$
where U A $$ {U}_A $$ has complex normalized linearly independent columns and Λ A $$ {\Lambda}_A $$ is a complex diagonal matrix. If no confusion might occur, we omit subscripts in all three cases and write Λ = diag ( λ 1 , …, λ n ) $$ \Lambda =\operatorname{diag}\left({\lambda}_1,\dots, {\lambda}_n\right) $$ and U = [ u 1 … u n ] $$ U=\left[{u}_1\dots {u}_n\right] $$ with column vectors u k ∈ ℂⁿ $$ {u}_k\in {\mathbb{C}}^n $$ which we interpret as functions on the vertices of the graph u k : V → ℂ $$ {u}_k:V\to \mathbb{C} $$ , V ∋ v ↦ u k ( v ) $$ V\ni v\mapsto {u}_k(v) $$ . Additionally, for sets K , W ⊆ V = [ n ] $$ K,W\subseteq V=\left[n\right] $$ we let U W , K $$ {U}_{W,K} $$ denote the submatrix of U $$ U $$ when restricting to the row indices in W $$ W $$ and column indices in K $$ K $$ . Please note that we have ( U ∗ ) W , K = ( U K , W ) ∗ $$ {\left({U}^{\ast}\right)}_{W,K}={\left({U}_{K,W}\right)}^{\ast } $$ and U W , { k } = u k , W $$ {U}_{W,\left\{k\right\}}={u}_{k,W} $$ , where, similarly to before, u k , W $$ {u}_{k,W} $$ denotes the restriction of the vector u k $$ {u}_k $$ to the row indices in W $$ W $$ .
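To make the notation concrete, the following minimal NumPy sketch builds the adjacency and Laplacian matrices of a small, hypothetical undirected graph, computes their eigendecompositions, and extracts a submatrix U_{W,K}; the edge list and the index sets W, K are chosen purely for illustration.

```python
import numpy as np

# minimal sketch (illustrative graph): adjacency, Laplacian, eigendecomposition,
# and the submatrix U_{W,K} restricted to rows W and columns K
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
A = np.zeros((n, n))
for v, w in edges:
    A[v, w] = A[w, v] = 1.0

D = np.diag(A.sum(axis=1))        # degree matrix
L = D - A                         # combinatorial Laplacian L = D - A

# symmetric matrices: real eigenvalues and orthonormal eigenvectors
lam_A, U_A = np.linalg.eigh(A)    # A = U_A diag(lam_A) U_A^*
lam_L, U_L = np.linalg.eigh(L)    # L = U_L diag(lam_L) U_L^*

W, K = [0, 2], [1, 3]             # row and column index sets
U_WK = U_A[np.ix_(W, K)]          # the submatrix U_{W,K}
print(U_WK.shape)                 # (2, 2)
```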
We are interested in the analysis of signals defined on the vertices of the graph, that is, vectors f n $$ f\in {\mathbb{C}}^n $$ are identified with functions f : V $$ f:V\to \mathbb{C} $$ , and we consider two scenarios:
  1. We ask under which conditions an s $$ s $$ -frequency-sparse signal
    f = ∑ k ∈ K f̂ k u k , f̂ k ≠ 0 , K ∈ ( [ n ] choose s ) , s ≪ n , $$ f=\sum \limits_{k\in K}{\hat{f}}_k{u}_k,\kern2em {\hat{f}}_k\ne 0,K\in \left(\genfrac{}{}{0ex}{}{\left[n\right]}{s}\right),s\ll n, $$ (2.1)
    can be recovered from its vertex-samples
    f ( w ) , w ∈ W , W ∈ ( [ n ] choose m ) , m ≪ n , $$ f(w),\kern2em w\in W,W\in \left(\genfrac{}{}{0ex}{}{\left[n\right]}{m}\right),m\ll n, $$ (2.2)
    without knowing the frequency-support K $$ K $$ .
  2. Analogously, we ask under which conditions an s $$ s $$ -vertex-sparse signal
    f = ∑ w ∈ W a w δ w , a w ≠ 0 , W ∈ ( V choose s ) , s ≪ n , δ w ( v ) := 1 if v = w and 0 if v ≠ w , $$ f=\sum \limits_{w\in W}{a}_w{\delta}_w,\kern2em {a}_w\ne 0,W\in \left(\genfrac{}{}{0ex}{}{V}{s}\right),s\ll n,{\delta}_w(v):= \left\{\begin{array}{ll}1,\kern1em & v=w,\\ {}0,\kern1em & v\ne w,\end{array}\right. $$ (2.3)
    can be recovered from its analysis-coefficients
    ĝ k = ⟨ f , u k ⟩ = ∑ w ∈ W a w u k ( w ) ‾ , k ∈ K , K ∈ ( [ n ] choose m ) , m ≪ n , $$ {\hat{g}}_k=\left\langle f,{u}_k\right\rangle =\sum \limits_{w\in W}{a}_w\overline{u_k(w)},\kern2em k\in K,K\in \left(\genfrac{}{}{0ex}{}{\left[n\right]}{m}\right),m\ll n, $$ (2.4)
    without knowing the vertex-support W $$ W $$ . We note in passing that f = k [ n ] ĝ k u k $$ f={\sum}_{k\in \left[n\right]}{\hat{g}}_k{u}_k $$ if and only if { u k : k [ n ] } $$ \left\{{u}_k:k\in \left[n\right]\right\} $$ are orthonormal as for the undirected graph situation.
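Both scenarios are linear inverse problems involving the restricted eigenvector matrix. The following short sketch sets up the two types of measurements; it is an assumption-laden illustration in which the orthonormal basis is generated randomly rather than from an actual graph, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, m = 8, 2, 4
# stand-in for a unitary eigenvector matrix (orthonormal basis from a QR factorization)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))

# scenario 1: s-frequency-sparse signal, sampled at m vertices, cf. (2.1)-(2.2)
K = rng.choice(n, size=s, replace=False)        # unknown frequency support
fhat = np.zeros(n); fhat[K] = rng.standard_normal(s)
f = U @ fhat                                    # f = sum_{k in K} fhat_k u_k
W = rng.choice(n, size=m, replace=False)        # sampled vertices
samples = f[W]                                  # known data

# scenario 2: s-vertex-sparse signal, observed via m analysis coefficients, cf. (2.3)-(2.4)
Wv = rng.choice(n, size=s, replace=False)       # unknown vertex support
a = np.zeros(n); a[Wv] = rng.standard_normal(s)
ghat = U.T @ a                                  # analysis coefficients <f, u_k> (real U here)
Kc = rng.choice(n, size=m, replace=False)       # observed coefficient indices
coeffs = ghat[Kc]                               # known data
```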

Theorem 2.1. (deterministic sampling)The s $$ s $$ -frequency-sparse reconstruction problem (2.2) as well as the s $$ s $$ -vertex-sparse reconstruction problem (2.4) are well posed for any sampling set of minimal size m = 2 s $$ m=2s $$ in the following situations:

  (i) deterministically if G $$ G $$ is a circle graph of prime length, see Section 3.3,

  (ii) with very large probability if G $$ G $$ is a dense directed Erdős-Rényi graph with loops, see Section 3.4.

Beyond combinatorial algorithms, the s $$ s $$ -frequency-sparse reconstruction problem (2.2) allows for a so-called subspace reconstruction method:

  (iii) deterministically from 2 s $$ 2s $$ consecutive samples if G $$ G $$ is a circle graph, see Section 3.3,

  (iv) with very large probability from sampling either the s $$ s $$ -neighborhood of s $$ s $$ arbitrary vertices or the ( 2 s − 1 ) $$ \left(2s-1\right) $$ -neighborhood of an arbitrary vertex if G $$ G $$ is a dense directed Erdős-Rényi graph with loops, see Section 3.4,

  (v) with very large probability from sampling the 2 s $$ 2s $$ -neighborhood of an arbitrary vertex if G $$ G $$ is a dense Erdős-Rényi graph with loops, see Section 3.5.

All probabilistic results are conditional on the extended Riemann hypothesis (due to a reduction via primes and an error term in the prime ideal theorem, see [10, Sect. 1.1] for an introduction).

Example 2.2.We consider Erdős-Rényi graphs with loops as defined in Section 3.4, with n $$ n $$ vertices and edge probability p n $$ {p}_n $$ . In the left picture of Figure 3 we display the proportion of adjacency matrices whose characteristic polynomials have full Galois group. In the sparse regime we expect isolated vertices and hence no full Galois group, while in the dense regime we expect full Galois group [10]. The critical regime has not been studied theoretically. According to [11, Theorem 2], a full Galois group is a sufficient condition for nonvanishing minors of any size, which yields reconstruction in Theorem 2.1 (ii) and (iv), see Section 3.

In the right picture of Figure 3, we consider Erdős-Rényi graphs with loops. Here, we display the proportion of adjacency matrices whose characteristic polynomial is irreducible. Again, in the sparse regime we expect isolated vertices and hence a reducible polynomial. In the dense regime we expect irreducibility [13] and the critical regime has not been studied. Eigenvectors of graphs with an irreducible characteristic polynomial do not contain a zero, see [11, Theorem 6], which yields the reconstruction in Theorem 2.1 (v).

Figure 3: Left: Proportion of directed Erdős-Rényi graphs with loops whose adjacency matrix has a characteristic polynomial with full Galois group. Right: Proportion of Erdős-Rényi graphs with loops whose adjacency matrix has an irreducible characteristic polynomial. In both pictures: for each n $$ n $$ we sample 1000 graphs, red p n = 4 / n $$ {p}_n=4/n $$ , blue p n = 2 log ( n ) / n $$ {p}_n=2\log (n)/n $$ and black p n = 1 / 2 $$ {p}_n=1/2 $$ . The number of vertices n $$ n $$ is displayed on the x $$ x $$ -axis.

Beyond Sections 3.3-3.5, we refer to [12] for more details on the subspace methods in Theorem 2.1 (iii)–(v).

Theorem 2.3. (random sampling)The s $$ s $$ -frequency-sparse reconstruction problem (2.2) as well as the s $$ s $$ -vertex-sparse reconstruction problem (2.4) are well posed if the sampling set is drawn uniformly at random and of size m C s log α n $$ m\ge Cs{\log}^{\alpha }n $$ , for some constants C , α > 0 $$ C,\alpha >0 $$ , in the following situations:

  (i) with very large probability if G $$ G $$ is a circle, path, grid, or hypercube graph, see Section 4.3,

  (ii) with large probability if G $$ G $$ is a sparse random regular graph, see Section 4.4,

  (iii) with moderately large probability and log α n $$ {\log}^{\alpha }n $$ replaced by n κ $$ {n}^{\kappa } $$ , for any κ > 0 $$ \kappa >0 $$ , if G $$ G $$ is an Erdős-Rényi graph, see Section 4.5.

In all these situations, 1 $$ {\ell}^1 $$ -minimization (a.k.a. Basis pursuit) allows for stable reconstruction.

Note that we also expect a logarithmic dependence of the sampling size m $$ m $$ on the graph size n $$ n $$ in case (iii), but have a rigorous proof only for the stated dependence, see also Remark 4.9.

Example 2.4. Let the number of vertices be fixed to n = 101 $$ n=101 $$ . The first two experiments consider the directed circle graph, for which the eigenvector matrix is the ordinary Fourier matrix U = F n $$ U={F}_n $$ . As discussed in Section 3.3, every square submatrix is non-singular and thus every rectangular submatrix has finite condition number. However, if both the set of columns K $$ K $$ and the set of rows W $$ W $$ are chosen contiguously, then the condition number can be huge, as Figure 4 (left) shows. This is also theoretically well understood, see for example, [2].

On the other hand, if the set of rows W $$ W $$ is chosen uniformly at random, then every set of columns K $$ K $$ that is a logarithmic factor smaller yields a well-conditioned submatrix, see for example, [14]. In our numerical experiment, we choose rectangular submatrices of specified sizes uniformly at random and report the maximal condition number over several draws in Figure 4 (middle). Going away from the square case | W | = | K | $$ \mid W\mid =\mid K\mid $$ , these condition numbers are small.

The very same phenomenon can be seen numerically for Erdős-Rényi graphs as defined in Section 4.5. In our numerical experiment, we fix an edge probability p = 0 . 1 $$ p=0.1 $$ and draw one graph at random. For its adjacency matrix, we compute the eigenvector matrix U $$ U $$ , choose rectangular submatrices of specified sizes uniformly at random and report the maximal condition number of several draws in Figure 4 (right). Repeating the same experiment with different edge probabilities in [ 0 . 1 , 0 . 9 ] $$ \left[0.1,0.9\right] $$ yields basically the same result.

Figure 4: We show log 10 ( cond ( U W , K ) ) $$ {\log}_{10}\left(\mathrm{cond}\left({U}_{W,K}\right)\right) $$ with respect to s = | K | ∈ { 1 , …, n } $$ s=\mid K\mid \in \left\{1,\dots, n\right\} $$ on the x $$ x $$ -axis and m = | W | ∈ { s , …, n } $$ m=\mid W\mid \in \left\{s,\dots, n\right\} $$ on the y $$ y $$ -axis. Left: Contiguous submatrices U W , K $$ {U}_{W,K} $$ of the Fourier matrix U = F n $$ U={F}_n $$ for K = [ s ] $$ K=\left[s\right] $$ , W = [ m ] $$ W=\left[m\right] $$ . Middle: Random submatrices U W , K $$ {U}_{W,K} $$ of the Fourier matrix U = F n $$ U={F}_n $$ . Right: Random submatrices U W , K $$ {U}_{W,K} $$ of the eigenvector matrix for a single Erdős-Rényi graph with edge probability p = 0.1 $$ p=0.1 $$ . In the last two cases, K ∈ ( [ n ] choose s ) $$ K\in \left(\genfrac{}{}{0ex}{}{\left[n\right]}{s}\right) $$ and W ∈ ( [ n ] choose m ) $$ W\in \left(\genfrac{}{}{0ex}{}{\left[n\right]}{m}\right) $$ are chosen uniformly at random and the maximum condition number over 100 trials is shown.
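The original experiments are implemented in the Julia notebooks linked below; the following rough Python sketch of the middle and right panels is only meant to indicate the structure of the experiment (all parameters, the random seed, and the loop-free Erdős-Rényi construction are assumptions of this sketch).

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, trials = 101, 0.1, 100

# one symmetric Erdős-Rényi adjacency matrix (loops omitted) and its eigenvectors
B = rng.random((n, n)) < p
A = np.triu(B, 1).astype(float)
A = A + A.T
_, U = np.linalg.eigh(A)

def max_cond(U, s, m, trials, rng):
    """Maximal condition number of random m x s submatrices U_{W,K}."""
    worst = 0.0
    for _ in range(trials):
        K = rng.choice(U.shape[1], size=s, replace=False)
        W = rng.choice(U.shape[0], size=m, replace=False)
        worst = max(worst, np.linalg.cond(U[np.ix_(W, K)]))
    return worst

# well away from the square case |W| = |K|, the condition numbers stay moderate
print(max_cond(U, s=5, m=20, trials=trials, rng=rng))
```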

Both Examples 2.2 and 2.4 can be reproduced with the Julia notebooks at https://github.com/taemmrich/Galoisgroups.

3 RESTRICTED EIGENFUNCTIONS AND ALGEBRAIC METHODS

In this section, we consider the generalization of the algebraic uncertainty principle [35] from the circle graph of prime length to certain classes of random graphs. We start by formulating an equivalent condition on the minors of a matrix.

Definition 3.1. We say that a matrix U ∈ ℂ^{n×n} $$ U\in {\mathbb{C}}^{n\times n} $$ is Chebotarev, if none of its minors vanishes, that is, for any sets K , W ⊆ [ n ] $$ K,W\subseteq \left[n\right] $$ , | K | = | W | $$ \mid K\mid =\mid W\mid $$ , we have

det U W , K 0 . $$ \det \left({U}_{W,K}\right)\ne 0. $$

The classical example is the Fourier matrix of prime size [34, 35], which we discuss in Section 3.3. Moreover, any Vandermonde matrix with positive nodes is Chebotarev, see [14, Thm. A.25]. Since the product of all determinants above is a polynomial in the entries of the matrix U $$ U $$ , any sufficiently random matrix is Chebotarev almost surely. For the eigenvector matrix U $$ U $$ of specific random matrices A $$ A $$ this property is discussed in Theorems 3.5 and 3.7. Broadly speaking, eigenvectors are algebraic vectors, which means they are polynomial expressions in the eigenvalues. The Galois group of the eigenvalues measures how algebraically independent they are, see [39, Section 2].
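For very small matrices the Chebotarev property can be verified numerically by brute force over all index pairs; the following sketch (exponential cost, tolerance chosen ad hoc) illustrates Definition 3.1 with the Fourier matrix of Section 3.3.

```python
import numpy as np
from itertools import combinations

def is_chebotarev(U, tol=1e-12):
    """Brute-force check that no square submatrix of U is singular (tiny n only)."""
    n = U.shape[0]
    for s in range(1, n + 1):
        for W in combinations(range(n), s):
            for K in combinations(range(n), s):
                if abs(np.linalg.det(U[np.ix_(W, K)])) < tol:
                    return False
    return True

# Fourier matrix of prime size is Chebotarev (Section 3.3)
n = 5
F = np.exp(2j * np.pi * np.outer(range(n), range(n)) / n) / np.sqrt(n)
print(is_chebotarev(F))   # True for prime n
# for n = 4 there are vanishing minors, e.g. rows {0, 2} and columns {0, 2}
```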

3.1 Algebraic uncertainty principle

As indicated above, the property that no minor of a given matrix vanishes is equivalent to the following support size uncertainty principle.

Theorem 3.2. (e.g., [[35], Proof of Thm. 1.1])Let U n × n $$ U\in {\mathbb{C}}^{n\times n} $$ . Then the following are equivalent:

  (i) U $$ U $$ is Chebotarev,

  (ii) U ∗ $$ {U}^{\ast } $$ is Chebotarev,

  (iii) ‖ U f̂ ‖ 0 + ‖ f̂ ‖ 0 ≥ n + 1 $$ {\left\Vert U\hat{f}\right\Vert}_0+{\left\Vert \hat{f}\right\Vert}_0\ge n+1 $$ for all vectors f̂ ≠ 0 $$ \hat{f}\ne 0 $$ in (2.1),

  (iv) ‖ f ‖ 0 + ‖ U ∗ f ‖ 0 ≥ n + 1 $$ {\left\Vert f\right\Vert}_0+{\left\Vert {U}^{\ast }f\right\Vert}_0\ge n+1 $$ for all functions f ≠ 0 $$ f\ne 0 $$ in (2.3).

Proof. First note that det ( ( U ∗ ) K , W ) = det ( U W , K ) ‾ $$ \det \left({\left({U}^{\ast}\right)}_{K,W}\right)=\overline{\det \left({U}_{W,K}\right)} $$ and, by switching the roles of U $$ U $$ and U ∗ $$ {U}^{\ast } $$ , it thus remains to prove the equivalence of (i) and (iii). Assume det ( U W , K ) = 0 $$ \det \left({U}_{W,K}\right)=0 $$ for some W , K ∈ ( [ n ] choose s ) $$ W,K\in \left(\genfrac{}{}{0ex}{}{\left[n\right]}{s}\right) $$ . Then there are coefficients f̂ k ∈ ℂ $$ {\hat{f}}_k\in \mathbb{C} $$ , not all zero, such that ∑ k ∈ K f̂ k u k , W = 0 $$ {\sum}_{k\in K}{\hat{f}}_k{u}_{k,W}=0 $$ . Hence the vector f̂ ∈ ℂⁿ $$ \hat{f}\in {\mathbb{C}}^n $$ with

f ^ k = f ^ k , if k K , 0 , otherwise , $$ {\hat{f}}_k=\left\{\begin{array}{ll}{\hat{f}}_k,\kern1em & \mathrm{if}\kern0.3em k\in K,\\ {}0,\kern1em & \mathrm{otherwise},\end{array}\right. $$
has support size at most s $$ s $$ and f = U f ^ $$ f=U\hat{f} $$ has support size at most n s $$ n-s $$ . In total, we have
f ^ 0 + f 0 s + ( n s ) = n . $$ {\left\Vert \hat{f}\right\Vert}_0+{\left\Vert f\right\Vert}_0\le s+\left(n-s\right)=n. $$
Now assume f 0 + f ^ 0 n $$ {\left\Vert f\right\Vert}_0+{\left\Vert \kern0.15em \hat{f}\right\Vert}_0\le n $$ and let W = supp ( f ) [ n ] $$ \varnothing \subsetneq W=\operatorname{supp}\left(\kern0.15em f\right)\subsetneq \left[n\right] $$ and K = supp ( f ^ ) [ n ] $$ \varnothing \subsetneq K=\operatorname{supp}\left(\kern0.15em \hat{f}\right)\subsetneq \left[n\right] $$ denote the support of the vectors, respectively. The equation f = U f ^ $$ f=U\hat{f} $$ then implies that 0 = U [ n ] W , K f ^ K $$ 0={U}_{\left[n\right]\setminus W,K}{\hat{f}}_K $$ which gives rise to a vanishing minor of U $$ U $$ .

3.2 Sampling and uniqueness

An implication of being Chebotarev is the following well-known result on necessary and sufficient numbers of samples for the uniqueness of the frequency-sparse reconstruction problem (2.2).

Theorem 3.3. (e.g., [[14], Thm. 2.13])Let n , s $$ n,s\in \mathbb{N} $$ , s n 2 $$ s\le \frac{n}{2} $$ , then:

  (i) For every W ⊂ V $$ W\subset V $$ , | W | ≤ 2 s − 1 $$ \mid W\mid \le 2s-1 $$ , and K f ∈ ( [ n ] choose s ) $$ {K}_f\in \left(\genfrac{}{}{0ex}{}{\left[n\right]}{s}\right) $$ , there exist a set K h ∈ ( [ n ] choose s ) $$ {K}_h\in \left(\genfrac{}{}{0ex}{}{\left[n\right]}{s}\right) $$ , coefficients f̂ , ĥ ∈ ℝⁿ $$ \hat{f},\hat{h}\in {\mathbb{R}}^n $$ with supp f̂ ⊂ K f $$ \operatorname{supp}\hat{f}\subset {K}_f $$ , supp ĥ ⊂ K h $$ \operatorname{supp}\hat{h}\subset {K}_h $$ , and frequency-sparse functions f $$ f $$ and h $$ h $$ , see (2.1), with

    f ≠ h but f W = h W . $$ f\ne h\kern1em \mathrm{but}\kern1em {f}_W={h}_W. $$

  (ii) If the eigenvector matrix U ∈ ℂ^{n×n} $$ U\in {\mathbb{C}}^{n\times n} $$ is Chebotarev, then for any W ⊂ V $$ W\subset V $$ , | W | ≥ 2 s $$ \mid W\mid \ge 2s $$ , the samples at W $$ W $$ uniquely determine any s $$ s $$ -frequency-sparse signal, that is,

    f W = h W implies f = h . $$ {f}_W={h}_W\kern1em \mathrm{implies}\kern1em f=h. $$

Proof.Without loss of generality, let K f K h = $$ {K}_f\cap {K}_h=\varnothing $$ and consider the restricted eigenvector matrix

U W , K f K h | W | × 2 s $$ {U}_{W,{K}_f\cup {K}_h}\in {\mathbb{C}}^{\mid W\mid \times 2s} $$
which has a nontrivial kernel in the first case and has full column rank by assumption in the second case.

By means of Theorem 3.2, the analogous result holds for the vertex-sparse reconstruction problem (2.4).

3.3 Circle graph

The (counter-clockwise) directed circle graph on n $$ n $$ vertices has adjacency matrix Ã $$ \overset{\widetilde }{A} $$ , given by
à k , = 1 , if k = 1 mod n , 0 , otherwise , $$ {\overset{\widetilde }{A}}_{k,\ell }=\left\{\begin{array}{ll}1,\kern1em & \mathrm{if}\kern0.3em k=\ell -1\kern0.3em \operatorname{mod}\kern0.3em n,\\ {}0,\kern1em & \mathrm{otherwise},\end{array}\right. $$
the undirected circle graph, see Figure 5 for an illustration, has adjacency matrix A = Ã + Ã $$ A=\overset{\widetilde }{A}+{\overset{\widetilde }{A}}^{\top } $$ and Laplacian matrix L = 2 I A $$ L=2I-A $$ .
Figure 5: The circle graph.
All three matrices, Ã $$ \overset{\widetilde }{A} $$ , A $$ A $$ and L $$ L $$ , are diagonalized by the unitary Fourier matrix
U = F n = 1 n e 2 π i k j / n k , j [ n ] . $$ U={F}_n=\frac{1}{\sqrt{n}}{\left({\mathrm{e}}^{2\pi ikj/n}\right)}_{k,j\in \left[n\right]}. $$ (3.1)

If n $$ n $$ is prime, then a classical result by Chebotarev [34] asserts that no minor of U $$ U $$ vanishes, which, by Theorem 3.2, is equivalent to the uncertainty principle, see also [23, 35]. By Theorem 3.3 (ii), any s $$ s $$ -sparse signal is uniquely defined when at least 2 s $$ 2s $$ arbitrary samples are known, both for vertex sampling of frequency-sparse functions (2.1) and frequency sampling of vertex-sparse functions (2.3). This proves Theorem 2.1 (i).

If n $$ n $$ is arbitrary, the situation is more involved. A well-known specific example for n = s 2 $$ n={s}^2 $$ is provided by the Dirac-comb, where
f ^ k = 1 , if s divides k , 0 , otherwise, $$ {\hat{f}}_k=\left\{\begin{array}{ll}1,\kern1em & \mathrm{if}\kern0.3em s\kern0.3em \mathrm{divides}\kern0.3em k,\\ {}0,\kern1em & \mathrm{otherwise},\end{array}\right. $$
fulfills f = U f ^ = f ^ $$ f=U\hat{f}=\hat{f} $$ and thus most sparse sampling sets W V = [ n ] $$ W\subset V=\left[n\right] $$ yield f W = 0 $$ {f}_W=0 $$ albeit f 0 $$ f\ne 0 $$ . However, if n $$ n $$ is arbitrary and m 2 s $$ m\ge 2s $$ consecutive vertices are selected (without loss of generality W = { 1 , , m } $$ W=\left\{1,\dots, m\right\} $$ ), then every s $$ s $$ -sparse signal can be recovered by subspace methods like Prony's method [8], the matrix pencil method [17], or ESPRIT [29]. These methods constructively prove Theorem 2.1 (iii).
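A compact numerical sketch of the Prony step for the directed circle graph may look as follows; it uses 2s consecutive samples, 0-based indices, and no stabilization (for a robust version one would turn to the matrix pencil method or ESPRIT cited above). All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 64, 3
K = np.sort(rng.choice(n, size=s, replace=False))      # true frequency support
fhat = rng.standard_normal(s) + 1j * rng.standard_normal(s)

# samples f(0), ..., f(2s-1) on 2s consecutive vertices, u_k(v) = exp(2*pi*i*k*v/n)/sqrt(n)
v = np.arange(2 * s)
f = (np.exp(2j * np.pi * np.outer(v, K) / n) / np.sqrt(n)) @ fhat

# Prony: Hankel system for the annihilating polynomial p(z) = z^s + c_{s-1} z^{s-1} + ... + c_0
H = np.array([f[j:j + s] for j in range(s)])
c = np.linalg.solve(H, -f[s:2 * s])
roots = np.roots(np.concatenate(([1.0], c[::-1])))      # roots z_k = exp(2*pi*i*k/n)
K_rec = np.sort(np.round(np.angle(roots) * n / (2 * np.pi)).astype(int) % n)
print(K, K_rec)                                          # the supports should coincide
```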

Stability of these problems is guaranteed if and only if the support K = supp f ^ $$ K=\operatorname{supp}\hat{f} $$ is well-separated with respect to the wrap around distance, that is, min k , K , k min r | k + r n | > n / m $$ {\min}_{k,\ell \in K,k\ne \ell }{\min}_{r\in \mathbb{Z}}\mid k-\ell + rn\mid >n/m $$ , see for example, [2, 20]. More advanced sampling strategies have been developed, see for example, [15, 19], and we discuss random sampling in Section 4.3.

3.4 Directed Erdős-Rényi graphs with loops

For our first class of random graphs the adjacency matrix is a random matrix with independent entries. The Galois group of the characteristic polynomial of such a random matrix has been studied in [10] and will be used subsequently.

Definition 3.4.For n $$ n\in \mathbb{N} $$ and a parameter p ( 0 , 1 ) $$ p\in \left(0,1\right) $$ we call a graph G $$ G $$ on vertex set [ n ] $$ \left[n\right] $$ a directed Erdős-Rényi graph, if each edge ( v , w ) [ n ] 2 $$ \left(v,w\right)\in {\left[n\right]}^2 $$ is included in G $$ G $$ with probability p $$ p $$ . Note that loops ( v , v ) $$ \left(v,v\right) $$ are also included with probability p $$ p $$ .

Theorem 3.5.Assume the extended Riemann hypothesis and let p ( 0 , 1 ) $$ p\in \left(0,1\right) $$ be fixed. Then with probability at least 1 O ( exp ( c p n 1 / 2 ) ) $$ 1-O\left(\exp \left(-{c}_p{n}^{1/2}\right)\right) $$ , the adjacency matrix of the directed Erdős-Rényi graph, cf. Definition 3.4, is diagonalizable A = U Λ U 1 $$ A=U\Lambda {U}^{-1} $$ with U $$ U $$ being Chebotarev.

Proof. Assuming the extended Riemann hypothesis, the Galois group of the characteristic polynomial of the adjacency matrix contains the alternating group of degree n $$ n $$ with the above mentioned probability, see [10, Thm. 1.3].

Following [11, Thm. 2], this implies that A = U Λ U 1 $$ A=U\Lambda {U}^{-1} $$ is diagonalizable with U $$ U $$ being Chebotarev.

Please note that fixing the parameter p ( 0 , 1 ) $$ p\in \left(0,1\right) $$ independent of the graph size n $$ n $$ implies a dense graph in the sense that the expected degree of each vertex is Θ ( p n ) $$ \Theta (pn) $$ . Together with Theorem 3.3, this concludes the proof of Theorem 2.1 (ii).

The two sampling schemes asserted in Theorem 2.1 (iv) for s $$ s $$ -frequency-sparse signals are discussed in [12] for the Laplacian matrix and we summarize the method for the adjacency matrix here as follows: Fixing an arbitrary vertex v V $$ v\in V $$ , we have u k ( v ) 0 $$ {u}_k(v)\ne 0 $$ with high probability by Theorem 3.5 and Definition 3.1 since u k ( v ) $$ {u}_k(v) $$ is a ( 1 × 1 ) $$ \left(1\times 1\right) $$ -submatrix of U $$ U $$ . Analogously to [12, Sect. 3.2], we recursively compute
( A ℓ f ) ( v ) = ∑ w ∼ v ( A ℓ − 1 f ) ( w ) , A 0 f = f , $$ \left({A}^{\ell }f\right)(v)=\sum \limits_{w\sim v}\left({A}^{\ell -1}f\right)(w),\kern2em {A}^0f=f, $$
and use that these powers yield an exponential sum with respect to the eigenvalues
( A ℓ f ) ( v ) = ∑ k ∈ K α k λ k ℓ , α k := f̂ k u k ( v ) $$ \left({A}^{\ell }f\right)(v)=\sum \limits_{k\in K}{\alpha}_k{\lambda}_k^{\ell },\kern2em {\alpha}_k:= {\hat{f}}_k{u}_k(v) $$ (3.2)
for ℓ = 0 , …, 2 s − 1 $$ \ell =0,\dots, 2s-1 $$ . Following the operator-based Prony method in [33], this leads, analogously to [12, Thm. 3.2], to the reconstruction of the support K = supp f̂ $$ K=\operatorname{supp}\hat{f} $$ from the samples f ( w ) $$ f(w) $$ , where dist ( v , w ) ≤ 2 s − 1 $$ \mathrm{dist}\left(v,w\right)\le 2s-1 $$ .
Instead of solving (3.2) for the nonzero coefficients α k $$ {\alpha}_k\in \mathbb{C} $$ and the active eigenvalues λ k $$ {\lambda}_k\in \mathbb{C} $$ , sampling at neighborhoods of multiple vertices W [ n ] s $$ W\in \left(\genfrac{}{}{0ex}{}{\left[n\right]}{s}\right) $$ gives the equations
( A f ) ( v ) = k K α k λ k $$ \left({A}^{\ell }f\right)(v)=\sum \limits_{k\in K}{\alpha}_k{\lambda}_k^{\ell } $$
for = 0 , , s $$ \ell =0,\dots, s $$ and v W $$ v\in W $$ . Following [12, Cor. 3.8] this system is solvable, if det ( U W , K ) 0 $$ \det \left({U}_{W,K}\right)\ne 0 $$ .
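A hedged sketch of this moment-based procedure for a single probe vertex is given below: the moments (A^ℓ f)(v) from (3.2) are fed into a Prony-type Hankel system and the recovered active eigenvalues are matched against the spectrum of A. The graph model, the sizes, and the global computation of A^ℓ (in practice only local averages over the (2s−1)-neighborhood of v are needed) are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, s, p = 30, 3, 0.3
A = (rng.random((n, n)) < p).astype(float)     # directed Erdős-Rényi graph with loops
lam, U = np.linalg.eig(A)

K = rng.choice(n, size=s, replace=False)       # unknown active eigenvalue indices
fhat = rng.standard_normal(s)
f = U[:, K] @ fhat                             # frequency-sparse signal

v = 0                                          # probe vertex
moments = np.array([np.linalg.matrix_power(A, l)[v] @ f for l in range(2 * s)])

# Prony step as in (3.2): Hankel solve, the roots are the active eigenvalues
H = np.array([moments[j:j + s] for j in range(s)])
c = np.linalg.solve(H, -moments[s:2 * s])
lam_active = np.roots(np.concatenate(([1.0], c[::-1])))

# read off the support K by nearest-neighbor matching to the spectrum of A
K_rec = np.sort(np.argmin(np.abs(lam[None, :] - lam_active[:, None]), axis=1))
print(np.sort(K), K_rec)
```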

3.5 Erdős-Rényi graphs with loops

For our second class of random graphs the adjacency matrix is a random symmetric matrix with independent entries. The irreducibility of its characteristic polynomial is studied in [13, Thm. 1.1] and will be instrumental in the following analysis.

Definition 3.6.For graph size n $$ n\in \mathbb{N} $$ and an edge probability p ( 0 , 1 ) $$ p\in \left(0,1\right) $$ , we define the Erdős-Rényi graph with loops G $$ G $$ on vertex set V = [ n ] $$ V=\left[n\right] $$ via its symmetric adjacency matrix A $$ A $$ . For v , w V $$ v,w\in V $$ , with v w $$ v\le w $$ , each entry A v , w = A w , v { 0 , 1 } $$ {A}_{v,w}={A}_{w,v}\in \left\{0,1\right\} $$ is drawn independently at random, where A v , w = 1 $$ {A}_{v,w}=1 $$ with probability p $$ p $$ .

Theorem 3.7. Assume the extended Riemann hypothesis. Let p ∈ ( 0 , 1 ) $$ p\in \left(0,1\right) $$ be fixed and A ∈ { 0 , 1 }^{n×n} $$ A\in {\left\{0,1\right\}}^{n\times n} $$ be the adjacency matrix of the Erdős-Rényi graph with parameter p $$ p $$ , cf. Definition 3.6. Then with probability at least 1 − 2 exp ( − c p n^{1/4} ) $$ 1-2\exp \left(-{c}_p{n}^{1/4}\right) $$ , we have that A = U Λ U ∗ $$ A=U\Lambda {U}^{\ast } $$ is unitarily diagonalizable and U $$ U $$ has no zero entry.

Proof. Assuming the extended Riemann hypothesis, by [13, Thm. 1.1], the characteristic polynomial of the matrix A $$ A $$ is irreducible over ℚ $$ \mathbb{Q} $$ with the probability given above. Following [11, Thm. 6], this implies that no minor of size 1 vanishes.

As in the second-to-last paragraph of Section 3.4 and (3.2), this proves Theorem 2.1 (v).

Remark 3.8. The Laplacian matrix L ( G ) $$ L(G) $$ of a connected undirected graph G $$ G $$ on n $$ n $$ vertices has rank at most ( n − 1 ) $$ \left(n-1\right) $$ and thus the criteria used for the adjacency matrix do not apply. Nevertheless, if the Galois group of the characteristic polynomial contains the alternating group of degree ( n − 1 ) $$ \left(n-1\right) $$ as a subgroup, there are no vanishing minors (of any size). Similarly, if the nontrivial factor of the characteristic polynomial is irreducible, no eigenvector contains a zero, see [11, Section 5].

4 DELOCALIZED EIGENFUNCTIONS AND COMPRESSIVE SENSING

In this section, we combine compressive sensing results for randomly subsampled bounded orthonormal systems [14, Ch. 12] with recent findings on the delocalization of eigenfunctions for certain random graphs [1, 16, 18, 27]. In contrast to the qualitative concept of non-singular submatrices of the eigenvector matrix from the previous section, our main object of study is the largest modulus entry of the eigenvector matrix.

Definition 4.1.Let U = [ u 1 u n ] n × n $$ U=\left[{u}_1\dots {u}_n\right]\in {\mathbb{C}}^{n\times n} $$ be unitary, where { u k } $$ \left\{{u}_k\right\} $$ are the eigenvectors of the adjacency or the Laplacian matrix of a graph G $$ G $$ , respectively. The quantity

μ U : = max k , v | u k ( v ) | = sup f ^ 0 U f ^ f ^ 1 = sup f 0 U f f 1 $$ {\mu}_U:= \underset{k,v}{\max}\mid {u}_k(v)\mid =\underset{\hat{f}\ne 0}{\sup}\frac{{\left\Vert U\hat{f}\right\Vert}_{\infty }}{{\left\Vert \hat{f}\right\Vert}_1}=\underset{f\ne 0}{\sup}\frac{{\left\Vert {U}^{\ast }f\right\Vert}_{\infty }}{{\left\Vert f\right\Vert}_1} $$
is called graph coherence (with respect to U $$ U $$ ).

We note in passing that in general μ U $$ {\mu}_U $$ is not a graph invariant, since for graphs with eigenvalue multiplicity larger than one, different eigenbases lead to different values of μ U $$ {\mu}_U $$ . Trivially, we have 1 / √n ≤ μ U ≤ 1 $$ 1/\sqrt{n}\le {\mu}_U\le 1 $$ .
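As a quick illustration of Definition 4.1 (the small graph below is arbitrary), the graph coherence is simply the largest absolute entry of the orthonormal eigenvector matrix.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency matrix of a small example graph
_, U = np.linalg.eigh(A)                    # orthonormal eigenvectors
mu = np.max(np.abs(U))                      # mu_U = max_{k,v} |u_k(v)|
print(1 / np.sqrt(4), mu, 1.0)              # trivial bounds 1/sqrt(n) <= mu_U <= 1
```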

4.1 Donoho-Stark uncertainty principle

We infer the following well-known support size uncertainty principle.

Theorem 4.2. (e.g., [[9, 32, 35]])Let U = [ u 1 u n ] n × n $$ U=\left[{u}_1\dots {u}_n\right]\in {\mathbb{C}}^{n\times n} $$ be unitary. Then all f : V $$ f:V\to \mathbb{C} $$ with f = U f ^ 0 $$ f=U\hat{f}\ne 0 $$ obey

f 0 · f ^ 0 μ U 2 . $$ {\left\Vert f\right\Vert}_0\cdotp {\left\Vert \hat{f}\right\Vert}_0\ge {\mu}_U^{-2}. $$

Proof.Let f 2 = f ^ 2 = 1 $$ {\left\Vert f\right\Vert}_2={\left\Vert \hat{f}\right\Vert}_2=1 $$ and set supp ( f ) = W $$ \operatorname{supp}(f)=W $$ and supp ( f ^ ) = K $$ \operatorname{supp}\left(\hat{f}\right)=K $$ . By the Cauchy-Schwarz inequality we have

f ^ k 2 = v W f ( v ) u k ( v ) 2 v W f ( v ) 2 v W u k ( v ) 2 = v W u k ( v ) 2 , $$ {\left|{\hat{f}}_k\right|}^2={\left|\sum \limits_{v\in W}f(v)\overline{u_k(v)}\right|}^2\le \sum \limits_{v\in W}{\left|f(v)\right|}^2\sum \limits_{v\in W}{\left|{u}_k(v)\right|}^2=\sum \limits_{v\in W}{\left|{u}_k(v)\right|}^2, $$
which implies the result by
1 = k K f ^ k 2 k K , v W u k ( v ) 2 f 0 · f ^ 0 · μ U 2 . $$ 1=\sum \limits_{k\in K}{\left|{\hat{f}}_k\right|}^2\le \sum \limits_{k\in K,v\in W}{\left|{u}_k(v)\right|}^2\le {\left\Vert f\right\Vert}_0\cdotp {\left\Vert \hat{f}\right\Vert}_0\cdotp {\mu}_U^2. $$

Remark 4.3. Several variants of this uncertainty principle are known: as recently formulated in [27], the second-to-last sum in the proof of Theorem 4.2 equals ‖ U W , K ‖ F 2 $$ {\left\Vert {U}_{W,K}\right\Vert}_{\mathrm{F}}^2 $$ and thus

f 0 · f ^ 0 min W , K | W | · | K | : U W , K F 1 . $$ {\left\Vert f\right\Vert}_0\cdotp {\left\Vert \hat{f}\right\Vert}_0\ge \underset{W,K}{\min}\left\{|W|\cdotp |K|:{\left\Vert {U}_{W,K}\right\Vert}_{\mathrm{F}}\ge 1\right\}. $$
For any finite abelian group of size n $$ n $$ and U $$ U $$ its Fourier transform, Theorem 4.2 reads as
f 0 · f ^ 0 n . $$ {\left\Vert f\right\Vert}_0\cdotp {\left\Vert \hat{f}\right\Vert}_0\ge n. $$
Finally note that the inequality of arithmetic and geometric mean yields
f 0 + f ^ 0 2 μ U 1 . $$ {\left\Vert f\right\Vert}_0+{\left\Vert \hat{f}\right\Vert}_0\ge 2{\mu}_U^{-1}. $$
For the circle graph, this reads as ‖ f ‖ 0 + ‖ f̂ ‖ 0 ≥ 2 √n $$ {\left\Vert f\right\Vert}_0+{\left\Vert \hat{f}\right\Vert}_0\ge 2\sqrt{n} $$ with equality if n $$ n $$ is a square. Moreover, this inequality can be improved considerably if n $$ n $$ is prime, see Section 3.3. To conclude, we want to mention that there are variants of the uncertainty principle in which the size of the supports is replaced, for example, by the Shannon or Rényi entropies [25].

4.2 Random sampling and stability

Randomly sampling the rows of a structured matrix (equivalently evaluating a system of functions in random points) has proven a successful strategy in compressive sensing, see [14] for a broad overview. Given any function as in (2.1), that is, f : V $$ f:V\to \mathbb{C} $$ , f = k [ n ] f ^ k u k $$ f={\sum}_{k\in \left[n\right]}{\hat{f}}_k{u}_k $$ , with unitary U = [ u 1 u n ] $$ U=\left[{u}_1\dots {u}_n\right] $$ , we have
$$ n\,{\mathbb{E}}_v\left[f(v)\overline{u_k(v)}\right]=\sum \limits_{v\in V}f(v)\overline{u_k(v)}={\hat{f}}_k, $$
where the expectation is taken with respect to a vertex v $$ v $$ drawn uniformly at random from V $$ V $$ . In particular, we have the ‘expected orthogonality’ $$ {\mathbb{E}}_v\left[\overline{u_k(v)}\,{u}_{\ell }(v)\right]=0 $$ for k ≠ ℓ $$ k\ne \ell $$ . Classically, the deviation from this expected value decreases by choosing a larger sampling set W ∈ ( [ n ] choose m ) $$ W\in \left(\genfrac{}{}{0ex}{}{\left[n\right]}{m}\right) $$ of size m $$ m $$ . One explicit parameter that has been used to describe the deviation is the so-called coherence of the matrix U W ∈ ℂ^{m×n} $$ {U}_W\in {\mathbb{C}}^{m\times n} $$ , which is the maximal inner product of its columns and should not be confused with the graph coherence μ U $$ {\mu}_U $$ . The trivial deterministic upper bound
max k u k , W , u , W max k , v W | u k ( v ) u ( v ) | m μ U 2 $$ \underset{k\ne \ell }{\max}\left|\left\langle {u}_{k,W},{u}_{\ell, W}\right\rangle \right|\kern0.5em \le \underset{k,\ell }{\max}\kern.2em \sum \limits_{v\in W}\mid {u}_k(v){u}_{\ell }(v)\mid \le m{\mu}_U^2 $$
is improved considerably (approximately by a factor m $$ \sqrt{m} $$ ) when choosing W [ n ] m $$ W\in \left(\genfrac{}{}{0ex}{}{\left[n\right]}{m}\right) $$ uniformly at random. In our setup, the result in [14, Cor. 12.14] gives
max k u k , W , u , W 4 m n μ U log ( 2 n 2 / ε ) $$ \underset{k\ne \ell }{\max}\left|\left\langle {u}_{k,W},{u}_{\ell, W}\right\rangle \right|\kern0.5em \le 4\sqrt{\frac{m}{n}}{\mu}_U\sqrt{\log \left(2{n}^2/\varepsilon \right)} $$
with probability at least 1 − ε $$ 1-\varepsilon $$ . This already yields uniqueness and recovery results for the sparse reconstruction problems (2.2) and (2.4), see for example, [36], [14, Ch. 12] for bounded orthonormal systems in general, and [5, Equation (12)] for a recent application to graph signals (where, however, normalized columns of U W $$ {U}_W $$ and thus a slightly different coherence parameter are considered).
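A small sketch comparing the coherence of a randomly row-subsampled Fourier matrix with the deterministic bound m·μ_U² = m/n; the sizes and the seed are chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 257, 40
F = np.exp(2j * np.pi * np.outer(range(n), range(n)) / n) / np.sqrt(n)

W = rng.choice(n, size=m, replace=False)    # uniformly random sampling set
UW = F[W, :]
G = np.abs(UW.conj().T @ UW)                # inner products of the columns of U_W
np.fill_diagonal(G, 0.0)

# observed coherence versus the trivial deterministic bound m * mu_U^2 = m/n
print(G.max(), m / n)
```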

Studying the so-called restricted isometry property of the matrix U W $$ {U}_W $$ , the following stronger result has been obtained.

Theorem 4.4. ([[14], Corollary 12.38])Let U = [ u 1 u n ] n × n $$ U=\left[{u}_1\dots {u}_n\right]\in {\mathbb{C}}^{n\times n} $$ be unitary and choose a subset of rows W [ n ] m $$ W\in \left(\genfrac{}{}{0ex}{}{\left[n\right]}{m}\right) $$ of size m $$ m $$ uniformly at random. If

m C n μ U 2 s log 4 ( n ) , $$ m\ge Cn{\mu}_U^2s{\log}^4(n), $$
then with probability at least 1 n log 3 n $$ 1-{n}^{-{\log}^3n} $$ any s $$ s $$ -sparse vector f ^ n $$ \hat{f}\in {\mathbb{C}}^n $$ is the unique solution of
min ĥ 1 subject to U W ĥ = U W f ^ . $$ \min {\left\Vert \hat{h}\right\Vert}_1\kern1em \mathrm{subject}\ \mathrm{to}\kern1em {U}_W\hat{h}={U}_W\hat{f}. $$

This result was originally formulated in [6] for the circle graph and has been improved and extended in [4, 30], in particular with respect to the exponent of the logarithmic term. Beyond the scope of this paper, the statement can also be formulated in a noise-robust way, see [14, Theorem 12.34].
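For a real eigenbasis (undirected graph) and real coefficients, the ℓ¹-minimization in Theorem 4.4 can be written as a linear program via the standard positive/negative split; the following hedged sketch uses scipy's linprog and an illustrative Erdős-Rényi graph (a complex eigenbasis would require a second-order cone formulation instead).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, s, m = 101, 3, 40

# eigenvector matrix of one symmetric Erdős-Rényi adjacency matrix (Section 4.5)
B = rng.random((n, n)) < 0.2
A = np.triu(B, 1).astype(float)
A = A + A.T
_, U = np.linalg.eigh(A)

K = rng.choice(n, size=s, replace=False)
fhat = np.zeros(n); fhat[K] = rng.standard_normal(s)
W = rng.choice(n, size=m, replace=False)    # uniformly random sampling set
UW = U[W, :]
b = UW @ fhat                               # observed vertex samples

# min ||h||_1 s.t. U_W h = b, as an LP with h = x_plus - x_minus, x_plus, x_minus >= 0
c = np.ones(2 * n)
A_eq = np.hstack([UW, -UW])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
fhat_rec = res.x[:n] - res.x[n:]
print(np.linalg.norm(fhat_rec - fhat))      # small if m is large enough
```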

4.3 Circles, hypercubes, paths, and grid graphs

For any n ∈ ℕ $$ n\in \mathbb{N} $$ , the circle graph in Section 3.3 has the normalized Fourier matrix U = F n $$ U={F}_n $$ in (3.1) as an eigenvector matrix. Similarly, the periodic d $$ d $$ -dimensional grid graph with n = n 1 ⋅ … ⋅ n d $$ n={n}_1\cdotp \dots \cdotp {n}_d $$ vertices and adjacency matrix A = A 1 ⊗ I ⊗ ⋯ ⊗ I + ⋯ + I ⊗ ⋯ ⊗ I ⊗ A d $$ A={A}_1\otimes I\otimes \dots \otimes I+\dots +I\otimes \dots \otimes I\otimes {A}_d $$ , where A ℓ $$ {A}_{\ell } $$ denotes the adjacency matrix of the circle graph with n ℓ $$ {n}_{\ell } $$ vertices, has U = F n 1 ⊗ … ⊗ F n d $$ U={F}_{n_1}\otimes \dots \otimes {F}_{n_d} $$ as an eigenvector matrix ( ⊗ $$ \otimes $$ denoting the Kronecker/tensor product). If all n ℓ ≥ 3 $$ {n}_{\ell}\ge 3 $$ , then this periodic grid graph is 2 d $$ 2d $$ -regular. For n 1 = … = n d = 2 $$ {n}_1=\dots ={n}_d=2 $$ the graph simplifies to the hypercube and the eigenvector matrix U = F 2 ⊗ … ⊗ F 2 $$ U={F}_2\otimes \dots \otimes {F}_2 $$ is called normalized Walsh-Hadamard matrix. In all cases, we have μ U = 1 / √n $$ {\mu}_U=1/\sqrt{n} $$ .

The path graph, cf. Figure 6, on n ≥ 3 $$ n\ge 3 $$ vertices has the edge set E = { { v , v + 1 } : v = 1 , …, n − 1 } $$ E=\left\{\left\{v,v+1\right\}:v=1,\dots, n-1\right\} $$ . The orthonormal eigenvectors of the adjacency matrix are
u k ( v ) = 2 n + 1 · sin π k v n + 1 $$ {u}_k(v)=\sqrt{\frac{2}{n+1}}\cdotp \sin \left(\frac{\pi kv}{n+1}\right) $$
which yields μ U = 2 / ( n + 1 ) $$ {\mu}_U=\sqrt{2/\left(n+1\right)} $$ . Corresponding d $$ d $$ -dimensional grid graphs thus fulfill μ U < 2 d / 2 / n $$ {\mu}_U<{2}^{d/2}/\sqrt{n} $$ . In all cases, Theorem 4.4 applies with a number of samples m C s log 4 n $$ m\ge Cs{\log}^4n $$ and this proves Theorem 2.3 (i).
Figure 6: The path graph.
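The eigenvector formula for the path graph and the resulting coherence can be checked numerically; a short sketch (n = 11 is arbitrary, indices are 1-based as in the formula):

```python
import numpy as np

n = 11
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)     # path graph adjacency
k, v = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
U = np.sqrt(2 / (n + 1)) * np.sin(np.pi * k * v / (n + 1))       # row k holds u_k(1), ..., u_k(n)
lam = 2 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1))          # corresponding eigenvalues

print(np.allclose(A @ U.T, U.T * lam))                           # eigen-equation A u_k = lam_k u_k
print(np.max(np.abs(U)), np.sqrt(2 / (n + 1)))                   # coherence mu_U = sqrt(2/(n+1))
```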

4.4 Random d $$ d $$ -regular graphs

Our third class of random graphs has a sparse symmetric adjacency matrix with a constant number of nonzero entries in each row (and column). With large probability, the orthonormal eigenvectors of such matrices are strongly delocalized, that is, the graph coherence is minimal up to logarithmic factors in the number of vertices.

Definition 4.5.For n $$ n\in \mathbb{N} $$ and a degree d $$ d\in \mathbb{N} $$ we call G $$ G $$ a d $$ d $$ -regular random graph, if the graph is drawn uniformly at random from the set of all graphs with n $$ n $$ vertices having the same degree d $$ d $$ . Loops are forbidden.

Theorem 4.6. ([[18, 27]])Let the degree d 4 $$ d\ge 4 $$ be fixed. Then there are constants c 1 , c 2 > 0 $$ {c}_1,{c}_2>0 $$ , both dependent on d $$ d $$ , such that with probability at least 1 O ( n c 1 ) $$ 1-O\left({n}^{-{c}_1}\right) $$ , the adjacency matrix of a d $$ d $$ -regular random graph, cf. Definition 4.5, is unitarily diagonalizable A = U Λ U $$ A=U\Lambda {U}^{\ast } $$ with

μ U log c 2 n n . $$ {\mu}_U\le \frac{\log^{c_2}n}{\sqrt{n}}. $$

Please note that fixing the degree d 4 $$ d\ge 4 $$ independent of the graph size n $$ n $$ implies a sparse graph in the sense that each row of the adjacency matrix A { 0 , 1 } n × n $$ A\in {\left\{0,1\right\}}^{n\times n} $$ contains exactly d $$ d $$ non-zero entries. Theorem 4.4 applies with a number of samples m C s log 2 c 2 + 4 n $$ m\ge Cs{\log}^{2{c}_2+4}n $$ and large probability 1 O ( n c ˜ 1 ) $$ 1-O\left({n}^{-{\tilde{c}}_1}\right) $$ , and this proves Theorem 2.3 (ii).
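A quick empirical check of the delocalization in Theorem 4.6 (the use of networkx, the degree, the graph size, and the seed are assumptions of this sketch):

```python
import numpy as np
import networkx as nx

n, d = 500, 4
G = nx.random_regular_graph(d, n, seed=0)    # random d-regular graph, no loops
A = nx.to_numpy_array(G)
_, U = np.linalg.eigh(A)

# coherence versus the trivial lower bound 1/sqrt(n): typically only a log-factor apart
print(np.max(np.abs(U)), 1 / np.sqrt(n))
```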

4.5 Classical Erdős-Rényi graphs

Finally, our last class of random graphs are classical Erdős-Rényi graphs. It is well-known that these graphs are connected if the edge probability is larger than some critical value. For an edge probability at least a small constant larger, the orthonormal eigenvectors are again delocalized.

Definition 4.7.For graph size n $$ n\in \mathbb{N} $$ and an edge probability p $$ p $$ , the Erdős-Rényi graph G $$ G $$ is a graph with vertices V = [ n ] $$ V=\left[n\right] $$ where each edge { v , w } V 2 $$ \left\{v,w\right\}\in \left(\genfrac{}{}{0ex}{}{V}{2}\right) $$ is drawn independently with probability p $$ p $$ . Loops are forbidden.

Theorem 4.8. ([[1, 16]])For any κ > 0 $$ \kappa >0 $$ and edge probability

p 1 log 4 1 + κ log n n $$ p\ge \left(\frac{1}{\log 4-1}+\kappa \right)\frac{\log n}{n} $$
and with probability at least 1 − o ( 1 ) $$ 1-o(1) $$ , the adjacency matrix of an Erdős-Rényi graph, cf. Definition 4.7, is unitarily diagonalizable A = U Λ U ∗ $$ A=U\Lambda {U}^{\ast } $$ with
μ U 2 n 1 + κ . $$ {\mu}_U^2\le {n}^{-1+\kappa }. $$

Theorem 4.4 applies with a number of samples m C s n κ log 4 n $$ m\ge Cs{n}^{\kappa }{\log}^4n $$ and moderately large probability 1 o ( 1 ) $$ 1-o(1) $$ , and this proves Theorem 2.3 (iii).

Remark 4.9. The results in [1, 16] also allow to improve the probability 1 − o ( 1 ) $$ 1-o(1) $$ to 1 − O ( n^{−c_1} ) $$ 1-O\left({n}^{-{c}_1}\right) $$ if p > c 2 log ( n ) / n $$ p>{c}_2\log (n)/n $$ , where c 2 $$ {c}_2 $$ is somewhat larger than the critical constant in the above statement. Moreover, the bound on the maximum norm of the eigenfunctions has been improved to log^{c_3} n / √n $$ {\log}^{c_3}n/\sqrt{n} $$ , at least for closely related random matrices and for eigenfunctions corresponding to eigenvalues that are bounded away from the extremal eigenvalues, see [3].

5 SUMMARY AND OUTLOOK

We formulated sparse recovery results for graph signals and used recent results on the irreducibility of the characteristic polynomial of the adjacency matrix as well as delocalization results on the eigenvectors of this matrix. Of course, it is more than tempting to ask for a more complete picture.

With respect to methods, this seems open even for the circle graph of prime length, where the standard Fourier matrix has all submatrices of full rank for algebraic reasons [35] and well-conditioned submatrices, uniformly over all column subsets, for slightly oversampled randomly selected rows [6]. Regarding the underlying combinatorial structure, the above results easily generalize to other graphs as well as hypergraphs or simplicial complexes as soon as their eigenvector matrix is understood well enough.

ACKNOWLEDGMENT

Open Access funding enabled and organized by Projekt DEAL.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are openly available in Galoisgroups at https://github.com/taemmrich/Galoisgroups.
