Directional interpolation is a fast and efficient compression technique for high-frequency Helmholtz boundary integral equations, but requires a very large amount of storage in its original form. Algebraic recompression can significantly reduce the storage requirements and speed up the solution process accordingly. During the recompression process, weight matrices are required to correctly measure the influence of different basis vectors on the final result, and for highly accurate approximations, these weight matrices require more storage than the final compressed matrix. We present a compression method for the weight matrices and demonstrate that it introduces only a controllable error to the overall approximation. Numerical experiments show that the new method leads to a significant reduction in storage requirements.
1 INTRODUCTION
We consider boundary element discretizations of the Helmholtz equation
$$-\Delta u - \kappa^2 u = 0 \qquad \text{in } \Omega$$
with the wave number $\kappa > 0$ on a domain $\Omega \subseteq \mathbb{R}^3$ with boundary $\Gamma := \partial\Omega$. Using the fundamental solution
$$g(x, y) = \frac{\exp(i \kappa \|x - y\|)}{4 \pi \|x - y\|}, \qquad (1)$$
the boundary integral formulation leads to an equation of the form
$$\int_{\Gamma} g(x, y)\, \frac{\partial u}{\partial n}(y)\, dy = \frac{u(x)}{2} + \int_{\Gamma} \frac{\partial g}{\partial n_y}(x, y)\, u(y)\, dy \qquad \text{for all } x \in \Gamma \qquad (2)$$
that allows us to compute the Neumann boundary values on $\Gamma$ from the Dirichlet boundary values. Once we know both, the solution can be evaluated anywhere in the domain $\Omega$.
In order to solve the integral equation (2), we employ a Galerkin discretization: the unknown Neumann values are approximated by a boundary element basis $(\varphi_i)_{i \in \mathcal{I}}$ and the Dirichlet values by another, possibly different, basis $(\psi_j)_{j \in \mathcal{J}}$. The discretization replaces the integral operators by matrices $G$ and $K$ given by
$$G_{ij} = \int_{\Gamma} \varphi_i(x) \int_{\Gamma} g(x, y)\, \varphi_j(y)\, dy\, dx, \qquad K_{ij} = \int_{\Gamma} \varphi_i(x) \int_{\Gamma} \frac{\partial g}{\partial n_y}(x, y)\, \psi_j(y)\, dy\, dx.$$
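To make the discretization step concrete, the following sketch assembles a dense Galerkin approximation of the single-layer matrix for piecewise constant basis functions, using a simple one-point quadrature rule per pair of triangles; the mesh representation, the function names, and the (omitted) treatment of the singular diagonal are illustrative assumptions rather than the quadrature scheme actually used for the experiments below.

```python
import numpy as np

def helmholtz_kernel(x, y, kappa):
    """Fundamental solution g(x, y) = exp(i*kappa*|x-y|) / (4*pi*|x-y|) from (1)."""
    r = np.linalg.norm(x - y)
    return np.exp(1j * kappa * r) / (4.0 * np.pi * r)

def assemble_slp_dense(centroids, areas, kappa):
    """Dense single-layer Galerkin matrix for piecewise constant basis functions,
    approximated by centroid quadrature; singular and near-singular pairs would
    need dedicated quadrature in a production code and are skipped here."""
    n = len(centroids)
    G = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            if i != j:
                G[i, j] = areas[i] * areas[j] * helmholtz_kernel(centroids[i], centroids[j], kappa)
    return G
```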
Both matrices are densely populated in general, and having to store them explicitly would severely limit the resolution and therefore the accuracy of the approximation. Fast multipole methods [1-3] employ a special approximation of the kernel function that can be evaluated efficiently using FFT-like algorithms to significantly improve the speed of the matrix-vector product.
Local low-rank approximations offer an attractive solution without the need for special expansion: if the kernel function can be approximated by a tensor product, the corresponding part of the matrix can be approximated by a low-rank matrix, and keeping this matrix in factorized form will significantly reduce the storage requirements.
Directional interpolation [4-9] offers a particularly convenient approach: the kernel function is split into a plane wave and a smooth remainder, and interpolation of the remainder yields a tensor-product approximation of the kernel function. Advantages of this approach include ease of implementation and very robust convergence properties. A major disadvantage is the large amount of storage required by this approximation.
Fortunately, this disadvantage can be overcome by combining the analytical approximation with an algebraic recompression [10, 11] to significantly reduce the storage requirements and improve the speed of matrix-vector multiplications at the expense of some additional work in the setup phase. In order to guarantee the quality of the recompression, the algorithm relies on weight matrices that describe how “important” different basis vectors are for the final approximation.
If we are considering problems with high wave numbers and high resolutions, these weight matrices may require far more storage than the final result of the compression, that is, we may run out of storage even if the final result would fit a given computer system.
To solve this problem, we present an algorithm for replacing the exact weight matrices by compressed weight matrices. The key challenge is to ensure that the additional errors introduced by this procedure can be controlled and do not significantly reduce the accuracy of the final result of the computation.
The following Section 2 introduces the structure of $\mathcal{DH}^2$-matrices used to represent operators for high-frequency Helmholtz boundary integral equations. Section 3 shows how algebraic compression can be applied to significantly reduce the storage requirements of $\mathcal{DH}^2$-matrices. Section 4 is focused on deriving error estimates for the compression. In Section 5, we introduce an algorithm for approximating the weight matrices while preserving the necessary accuracy. Section 6 contains numerical results indicating that the new method preserves the convergence rates of the underlying Galerkin discretization.
2 $\mathcal{DH}^2$-MATRICES
Integral operators with smooth kernel functions can be handled efficiently by applying interpolation to the kernel function, since this gives rise to a low-rank approximation.
The kernel function of the high-frequency Helmholtz equation is oscillatory and therefore not well-suited for interpolation. The idea of directional interpolation [7-9] is to split the kernel function into an oscillatory part that can be approximated by a plane wave and a smooth part that can be approximated by interpolation.
2.1 Directional interpolation
To illustrate this approach, we approximate the oscillatory part $\exp(i \kappa \|x - y\|)$ for $x \in \omega_t$, $y \in \omega_s$, where $\omega_t, \omega_s \subseteq \mathbb{R}^3$ are star-shaped subsets with respect to centers $m_t$ and $m_s$. A Taylor expansion of $\|x - y\|$ around $m_t - m_s$ yields
Inserted into the exponential function, the first two terms on the right-hand side correspond to a plane wave. In order to ensure that this plane wave is a reasonably good approximation of the spherical wave appearing in the kernel function, we have to bound the integral term. Using the diameter and distance given by
$$\mathrm{diam} := \max\{\mathrm{diam}(\omega_t), \mathrm{diam}(\omega_s)\}, \qquad \mathrm{dist} := \mathrm{dist}(\omega_t, \omega_s),$$
the third term is bounded if
$$\kappa\, \mathrm{diam}^2 \le \eta_1\, \mathrm{dist} \qquad (3a)$$
holds with a suitable parameter $\eta_1 > 0$. In terms of our kernel function, this means that we can approximate the spherical wave by the plane wave traveling in the direction $(m_t - m_s) / \|m_t - m_s\|$.
Since this direction depends on $\omega_t$ and $\omega_s$, we would have to use different directions for every pair of subdomains, and this would make the approximation too expensive. To keep the number of directions under control, we restrict ourselves to a fixed set of unit vectors and approximate the exact direction by an element $c$ of this set. If we can ensure
$$\kappa \left\| c - \frac{m_t - m_s}{\|m_t - m_s\|} \right\| \mathrm{diam} \le \eta_2 \qquad (3b)$$
with a parameter $\eta_2 > 0$, the spherical wave divided by the plane wave
$$\frac{\exp(i \kappa \|x - y\|)}{\exp(i \kappa \langle x - y, c \rangle)}$$
will still be sufficiently smooth, and the modified kernel function
$$g_c(x, y) := g(x, y) \exp(-i \kappa \langle x - y, c \rangle)$$
will no longer be oscillatory. If the domains $\omega_t$ and $\omega_s$ are small compared to the wave length, (3b) allows us to choose $c = 0$ and to improve the efficiency of our implementation. In order to interpolate this function, we also have to keep its denominator under control. This can be accomplished by requiring
$$\mathrm{diam} \le \eta_3\, \mathrm{dist}. \qquad (3c)$$
If the three admissibility conditions (3a), (3b), and (3c) hold, standard tensor interpolation of $g_c$ converges at a robust rate [6, 7]. To summarize, (3a) allows us to approximate the spherical wave by a plane wave, (3b) allows us to limit the number of directions this plane wave can travel in, and (3c) ensures that we are sufficiently far away from the singularity to be able to approximate $g_c$ by a polynomial.
We choose interpolation points $(\xi_{t,\nu})_{\nu}$ with corresponding Lagrange polynomials $(L_{t,\nu})_{\nu}$ in the subdomain $\omega_t$ and interpolation points $(\xi_{s,\mu})_{\mu}$ with corresponding Lagrange polynomials $(L_{s,\mu})_{\mu}$ in the subdomain $\omega_s$ and approximate $g_c$ by the interpolating polynomial
$$\tilde g_c(x, y) := \sum_{\nu} \sum_{\mu} L_{t,\nu}(x)\, g_c(\xi_{t,\nu}, \xi_{s,\mu})\, L_{s,\mu}(y).$$
In order to obtain an approximation of the original kernel function $g$, we have to multiply by the plane wave and get
$$g(x, y) \approx \sum_{\nu} \sum_{\mu} L_{t,c,\nu}(x)\, g(\xi_{t,\nu}, \xi_{s,\mu})\, \overline{L_{s,c,\mu}(y)}$$
with the modified Lagrange functions
$$L_{t,c,\nu}(x) := \exp(i \kappa \langle x - \xi_{t,\nu}, c \rangle)\, L_{t,\nu}(x), \qquad L_{s,c,\mu}(y) := \exp(i \kappa \langle y - \xi_{s,\mu}, c \rangle)\, L_{s,\mu}(y),$$
where we exploit that the plane wave $\exp(i \kappa \langle x - y, c \rangle)$ splits into factors attached to the two families of interpolation points.
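The following sketch illustrates the construction numerically: it builds tensor-product Chebyshev interpolation points in two axis-parallel boxes, evaluates the modified Lagrange functions, and compares the resulting factorized kernel approximation with the exact kernel at random sample points. The box geometry, the chosen direction, and all function names are assumptions made for this illustration.

```python
import numpy as np

def cheb_points(a, b, m):
    """m + 1 Chebyshev points transformed to the interval [a, b]."""
    k = np.arange(m + 1)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * (2 * k + 1) / (2 * m + 2))

def lagrange_values(xi, x):
    """L[i, nu] = value of the nu-th Lagrange polynomial for the nodes xi at x[i]."""
    L = np.ones((len(x), len(xi)))
    for nu in range(len(xi)):
        for mu in range(len(xi)):
            if mu != nu:
                L[:, nu] *= (x - xi[mu]) / (xi[nu] - xi[mu])
    return L

def tensor_basis(box, m, x):
    """Tensor interpolation points in an axis-parallel box and the corresponding
    tensor-product Lagrange values at the points x."""
    pts = [cheb_points(lo, hi, m) for lo, hi in box]
    xi = np.stack(np.meshgrid(*pts, indexing="ij"), axis=-1).reshape(-1, 3)
    L = [lagrange_values(pts[d], x[:, d]) for d in range(3)]
    Lt = L[0][:, :, None, None] * L[1][:, None, :, None] * L[2][:, None, None, :]
    return xi, Lt.reshape(len(x), -1)

def kernel(x, y, kappa):
    """Fundamental solution evaluated for all pairs of rows of x and y."""
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return np.exp(1j * kappa * d) / (4.0 * np.pi * d)

def directional_factor(box, m, kappa, c, x):
    """V[i, nu] = exp(i*kappa*<x_i - xi_nu, c>) * L_nu(x_i), the modified Lagrange functions."""
    xi, L = tensor_basis(box, m, x)
    return xi, np.exp(1j * kappa * ((x @ c)[:, None] - (xi @ c)[None, :])) * L

# Two well-separated boxes and the direction pointing from the second center to the first.
kappa, m = 4.0, 4
box_t = [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]
box_s = [(8.0, 9.0), (0.0, 1.0), (0.0, 1.0)]
c = np.array([-1.0, 0.0, 0.0])
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, (100, 3))
y = rng.uniform(0.0, 1.0, (100, 3)) + np.array([8.0, 0.0, 0.0])
xi_t, Vt = directional_factor(box_t, m, kappa, c, x)
xi_s, Vs = directional_factor(box_s, m, kappa, c, y)
S = kernel(xi_t, xi_s, kappa)          # coupling matrix: kernel at the interpolation points
print(np.abs(kernel(x, y, kappa) - Vt @ S @ Vs.conj().T).max())
```

Increasing the interpolation order improves the accuracy of the factorization as long as the admissibility conditions are satisfied.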
2.2 $\mathcal{DH}^2$-matrices
To obtain an approximation of the entire matrix $G$, we have to partition its index set $\mathcal{I} \times \mathcal{I}$ into subsets where our approximation can be used. Here, $\mathcal{I}$ denotes the index set of the boundary element basis functions introduced before. We focus here on the matrix $G$ for the sake of simplicity, but state that the entire construction can be extended to handle the rectangular matrix $K$ in a very similar way.
Definition 1. (cluster tree) Let $\mathcal{T}$ be a finite tree, and let each of its nodes $t$ be associated with a subset $\hat t \subseteq \mathcal{I}$.
$\mathcal{T}$ is called a cluster tree for the index set $\mathcal{I}$ if
the root $r$ is associated with $\hat r = \mathcal{I}$,
for all $t \in \mathcal{T}$ with children, we have $\hat t = \bigcup_{t' \in \mathrm{chil}(t)} \hat t'$, and
for all $t \in \mathcal{T}$ and children $t_1, t_2 \in \mathrm{chil}(t)$ with $t_1 \neq t_2$, we have $\hat t_1 \cap \hat t_2 = \emptyset$.
A cluster tree for $\mathcal{I}$ is usually denoted by $\mathcal{T}_{\mathcal{I}}$. Its leaves are denoted by $\mathcal{L}_{\mathcal{I}}$.
A cluster tree provides us with a hierarchy of subsets of the index set , and its leaves define a disjoint partition of . In order to define approximations for the matrix , we require a similar tree structure with subsets of .
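As an illustration of Definition 1, the following minimal sketch builds a cluster tree by recursive geometric bisection of bounding boxes; the class layout, the leaf size, and the splitting strategy are assumptions for illustration, not the clustering strategy prescribed here.

```python
import numpy as np

class Cluster:
    """Node of a cluster tree: an index set, its bounding box, and its children."""
    def __init__(self, indices, points, leaf_size=32):
        self.indices = indices
        lo, hi = points[indices].min(axis=0), points[indices].max(axis=0)
        self.box = (lo, hi)
        self.children = []
        if len(indices) > leaf_size:
            axis = int(np.argmax(hi - lo))              # split along the longest box edge
            mid = 0.5 * (lo[axis] + hi[axis])
            left = indices[points[indices, axis] <= mid]
            right = indices[points[indices, axis] > mid]
            if len(left) > 0 and len(right) > 0:        # children form a disjoint partition
                self.children = [Cluster(left, points, leaf_size),
                                 Cluster(right, points, leaf_size)]

# Example: a cluster tree for n points (e.g., triangle centroids), indices 0..n-1.
# points = np.random.default_rng(0).uniform(size=(1000, 3))
# root = Cluster(np.arange(1000), points)
```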
Definition 2. (block tree) Let $\mathcal{T}_{\mathcal{I} \times \mathcal{I}}$ be a finite tree. It is called a block tree for the cluster tree $\mathcal{T}_{\mathcal{I}}$ if
for all $b \in \mathcal{T}_{\mathcal{I} \times \mathcal{I}}$ there are $t, s \in \mathcal{T}_{\mathcal{I}}$ with $b = (t, s)$,
the root is given by the pair of roots of $\mathcal{T}_{\mathcal{I}}$, and
for all $b = (t, s) \in \mathcal{T}_{\mathcal{I} \times \mathcal{I}}$ with children we have $\mathrm{chil}(b) = \mathrm{chil}(t) \times \mathrm{chil}(s)$.
A block tree for $\mathcal{T}_{\mathcal{I}}$ is usually denoted by $\mathcal{T}_{\mathcal{I} \times \mathcal{I}}$. Its leaves are denoted by $\mathcal{L}_{\mathcal{I} \times \mathcal{I}}$.
Leaves $b = (t, s)$ satisfying the admissibility conditions (3) are called admissible and collected in the set $\mathcal{L}^+_{\mathcal{I} \times \mathcal{I}}$. Its complement is the set of inadmissible leaves $\mathcal{L}^-_{\mathcal{I} \times \mathcal{I}} := \mathcal{L}_{\mathcal{I} \times \mathcal{I}} \setminus \mathcal{L}^+_{\mathcal{I} \times \mathcal{I}}$.
The definition implies that a block tree for $\mathcal{T}_{\mathcal{I}}$ is indeed a cluster tree for the index set $\mathcal{I} \times \mathcal{I}$, and therefore the leaves of a block tree describe a disjoint partition of $\mathcal{I} \times \mathcal{I}$, that is, a decomposition of $G$ into submatrices $G|_{\hat t \times \hat s}$ for all $b = (t, s) \in \mathcal{L}_{\mathcal{I} \times \mathcal{I}}$.
We expect to be able to efficiently approximate those blocks that satisfy the admissibility conditions (3). In order to keep the number of blocks small, we split only those blocks that do not satisfy these conditions. This approach ensures that we obtain the minimal number of blocks and therefore the most efficient representation.
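A sketch of this construction is given below: starting from the pair of roots, a block is either accepted as admissible, stored as an inadmissible leaf if one of the clusters has no children, or split into the pairs of children. The admissibility test only mimics conditions of the form (3a) and (3c) using bounding boxes; condition (3b) concerns the choice of direction and is omitted, and the parameters are illustrative assumptions.

```python
import numpy as np

def diam(box):
    lo, hi = box
    return float(np.linalg.norm(hi - lo))

def dist(box_t, box_s):
    (lo_t, hi_t), (lo_s, hi_s) = box_t, box_s
    gap = np.maximum(0.0, np.maximum(lo_s - hi_t, lo_t - hi_s))
    return float(np.linalg.norm(gap))

def admissible(t, s, kappa, eta1=1.0, eta3=1.0):
    """Parabolic (3a)-style and standard (3c)-style admissibility test on bounding boxes."""
    d = max(diam(t.box), diam(s.box))
    dd = dist(t.box, s.box)
    return kappa * d * d <= eta1 * dd and d <= eta3 * dd

def build_block_tree(t, s, kappa, farfield, nearfield):
    """Collect admissible leaves in `farfield` and inadmissible leaves in `nearfield`."""
    if admissible(t, s, kappa):
        farfield.append((t, s))
    elif not t.children or not s.children:
        nearfield.append((t, s))
    else:
        for tc in t.children:
            for sc in s.children:
                build_block_tree(tc, sc, kappa, farfield, nearfield)
```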
According to References 6 and 7, the admissibility conditions (3) guarantee that the directional interpolation converges exponentially with respect to the interpolation order on the domain $\omega_t \times \omega_s$. In order to be able to apply our approximation to submatrices, we have to take the supports of the basis functions into account. Since we will be using tensor interpolation, we choose for every cluster $t$ an axis-parallel bounding box $B_t$ such that
$$\mathrm{supp}\,\varphi_i \subseteq B_t \qquad \text{for all } i \in \hat t.$$
For every cluster $t$, we denote the corresponding bounding box by $B_t$. If we have a block $b = (t, s)$ with bounding boxes $B_t$ and $B_s$ satisfying the admissibility conditions (3), we can expect the approximation
$$g(x, y) \approx \sum_{\nu} \sum_{\mu} L_{t,c,\nu}(x)\, g(\xi_{t,\nu}, \xi_{s,\mu})\, \overline{L_{s,c,\mu}(y)}, \qquad x \in B_t,\ y \in B_s,$$
for a suitably chosen direction $c$, cf. (3b), to converge rapidly and therefore
$$G|_{\hat t \times \hat s} \approx V_{t,c} S_b V_{s,c}^*, \qquad (V_{t,c})_{i\nu} := \int_{\Gamma} \varphi_i(x)\, L_{t,c,\nu}(x)\, dx, \qquad (S_b)_{\nu\mu} := g(\xi_{t,\nu}, \xi_{s,\mu}). \qquad (4)$$
This means that the submatrices corresponding to the admissible leaves can be approximated by low-rank matrices in factorized form.
We can satisfy the admissibility condition (3b) only if large clusters are accompanied by a large number of directions to choose from.
Definition 3. (Directions) Let $\mathcal{T}_{\mathcal{I}}$ be a cluster tree. For every cluster $t$ we let either $D_t = \{0\}$ or choose a subset $D_t$ of unit vectors such that every direction relevant for the admissible blocks of $t$ can be approximated well enough to satisfy (3b).
The family $(D_t)_{t \in \mathcal{T}_{\mathcal{I}}}$ is called a family of directions for the cluster tree $\mathcal{T}_{\mathcal{I}}$.
Allowing $D_t = \{0\}$ makes algorithms more efficient for small clusters where (3b) can be fulfilled by choosing $c = 0$. In this case, the function $L_{t,c,\nu}$ becomes simply the Lagrange polynomial $L_{t,\nu}$, and the modified kernel function becomes just the standard kernel function $g$.
In the high-frequency case, the large clusters require many directions to satisfy (3b). This means that storing all matrices $V_{t,c}$ would require at least $n^2$ units of storage, where $n$ denotes the number of basis functions, and this would not be an improvement over simply storing the matrix explicitly. This problem can be overcome by taking advantage of the fact that we can approximate $V_{t,c}$ in terms of the matrices corresponding to its children: let $(\xi_{t',\nu})_{\nu}$ and $(L_{t',\nu})_{\nu}$ denote the interpolation points and Lagrange polynomials for the subdomain corresponding to the child cluster $t'$. If we use the same polynomial order for all clusters, we have
by the identity theorem, and interpolating a slightly modified function instead yields
that is, we can approximate modified Lagrange polynomials on parent clusters by modified Lagrange polynomials in their children. Under moderate conditions, this approximation can be applied repeatedly without harming the total error too much [6, 7], so we can afford to replace the matrices $V_{t,c}$ defined in (4) in all nonleaf clusters by approximations.
Definition 4. (Directional cluster basis) Let $\mathcal{T}_{\mathcal{I}}$ be a cluster tree with a family $(D_t)_{t \in \mathcal{T}_{\mathcal{I}}}$ of directions. A family $(V_{t,c})_{t \in \mathcal{T}_{\mathcal{I}},\, c \in D_t}$ of matrices is called a directional cluster basis for $\mathcal{T}_{\mathcal{I}}$ and $(D_t)_t$ if for every cluster $t$ with children, every child $t' \in \mathrm{chil}(t)$, and every $c \in D_t$ there are a direction $c' \in D_{t'}$ and a matrix $E_{t',c}$ with
$$V_{t,c}|_{\hat t'} = V_{t',c'} E_{t',c}. \qquad (5)$$
The matrices $E_{t',c}$ are called transfer matrices. Since the matrices $V_{t,c}$ have to be stored only for clusters without children, they are called leaf matrices.
The matrices here correspond to the matrices introduced in (4) only for the leaves, while for nonleaf clusters the Lagrange polynomials are replaced by nested interpolation.
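To illustrate how the transfer matrices of Definition 4 are used in practice, the following sketch evaluates the products $V_{t,c}^* x$ for a cluster and all of its descendants in a single pass from the leaves towards the root, which is the central operation of the fast matrix-vector multiplication. The dictionaries holding leaf matrices, transfer matrices, and the parent-to-child direction map are assumptions about the data layout.

```python
import numpy as np

def forward_transform(t, c, x, leaf_V, transfer, child_dir, out):
    """Compute xhat = V_{t,c}^H x restricted to t, using only leaf matrices and the
    transfer matrices from (5); results for all descendants are stored in `out`."""
    if not t.children:
        xhat = leaf_V[(id(t), c)].conj().T @ x[t.indices]
    else:
        xhat = 0
        for tc in t.children:
            cc = child_dir[(id(tc), c)]    # direction of the child assigned to c
            xhat = xhat + transfer[(id(tc), c)].conj().T @ forward_transform(
                tc, cc, x, leaf_V, transfer, child_dir, out)
    out[(id(t), c)] = xhat
    return xhat
```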
Definition 5. ($\mathcal{DH}^2$-matrix) Let $\mathcal{T}_{\mathcal{I}}$ be a cluster tree with a family of directions, let $(V_{t,c})$ be a directional cluster basis, and let $\mathcal{T}_{\mathcal{I} \times \mathcal{I}}$ be a block tree.
A matrix $G$ is called a $\mathcal{DH}^2$-matrix if for every admissible leaf $b = (t, s) \in \mathcal{L}^+_{\mathcal{I} \times \mathcal{I}}$ there are a direction $c \in D_t \cap D_s$ and a matrix $S_b$ with
$$G|_{\hat t \times \hat s} = V_{t,c} S_b V_{s,c}^*. \qquad (6)$$
The matrix $S_b$ is called a coupling matrix for the block $b$.
The mappings from parent to child directions and from blocks to directions can be defined by taking the best approximation of $c$ in $D_{t'}$ in the first case and the best approximation of the direction between the centers of $B_t$ and $B_s$ in $D_t$ in the second case.
If we have a $\mathcal{DH}^2$-matrix, all admissible blocks can be represented by the cluster basis and the coupling matrices. For the inadmissible leaves, we store the corresponding submatrices explicitly. Under moderate assumptions [10], these nearfield matrices require only $\mathcal{O}(n k)$ units of storage in the low-frequency case, where the size of the leaf clusters is chosen proportional to the rank $k$ to balance far- and nearfield storage, and $\mathcal{O}(n)$ units of storage in the high-frequency case, where we have to ensure that leaf clusters are small enough to resolve waves.
For constant wave numbers $\kappa$, a $\mathcal{DH}^2$-matrix approximation of $G$ requires only $\mathcal{O}(n k)$ units of storage. In the high-frequency case, that is, if $n \sim \kappa^2$, a $\mathcal{DH}^2$-matrix approximation requires only $\mathcal{O}(n k \log n)$ units of storage [7, 10]. The matrix-vector multiplication can be performed with a similar complexity, since it requires not more than two arithmetic operations per stored $\mathcal{DH}^2$-matrix coefficient.
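The matrix-vector multiplication itself only combines the stored factors. The following sketch uses a flattened (non-nested) representation for clarity, where each admissible block stores its two basis factors explicitly; in an actual $\mathcal{DH}^2$-matrix these products would be formed with the transfer matrices as in the forward transformation sketched above.

```python
import numpy as np

def blockwise_matvec(x, farfield, nearfield, n):
    """y = G x for a block partition: admissible blocks in factorized form
    G|_{t x s} ~ V S W^H, inadmissible blocks stored densely.  The lists are
    assumed to contain (row_indices, col_indices, factors) tuples."""
    y = np.zeros(n, dtype=complex)
    for rows, cols, V, S, W in farfield:
        y[rows] += V @ (S @ (W.conj().T @ x[cols]))   # roughly two operations per coefficient
    for rows, cols, N in nearfield:
        y[rows] += N @ x[cols]
    return y
```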
3 COMPRESSION OF $\mathcal{DH}^2$-MATRICES
Although directional interpolation leads to a robust and fairly fast algorithm for approximating the matrix $G$, taking numerical quadrature into account, it requires a very large amount of storage, particularly if we are interested in a highly accurate approximation: Figure 1 shows that directional interpolation converges very robustly, but also that interpolation of higher order requires a very large amount of memory, close to 1 TB for the eighth order.
FIGURE 1. Left: Convergence of directional interpolation with increasing order. Right: Complete storage requirements (basis and farfield plus nearfield) with increasing order. 32,768 basis functions.
Algebraic compression techniques offer an attractive solution: we use directional interpolation only to provide an intermediate approximation of $G$ and apply algebraic techniques to reduce the rank as far as possible. The resulting recompression algorithm can be implemented in a way that avoids having to store the entire intermediate $\mathcal{DH}^2$-matrix, so that only the final result is written to memory and very large matrices can be handled at high accuracies.
We present here a version of the algorithm introduced in Reference 11 that will be modified in the following sections. Our goal is to find an improved cluster basis $(Q_{t,c})$ for the matrix $G$. In order to avoid redundant information and to ensure numerical stability, we aim for an isometric basis, that is, we require
$$Q_{t,c}^* Q_{t,c} = I \qquad \text{for all } t \in \mathcal{T}_{\mathcal{I}},\ c \in D_t.$$
The best approximation of a matrix block with respect to this basis is given by the orthogonal projection , and we have to ensure that all blocks connected to the cluster and the direction are approximated. We introduce the sets
(7)
containing all column clusters connected via an admissible block to a row cluster and a given direction , cf. dark blue part in Figure 2, and note that
is a minimal requirement for our new basis. But it is not entirely sufficient: if $t$ has children, Definition 4 requires that the basis matrix for $t$ can be expressed in terms of the basis matrices of its children, therefore the basis has to be able to approximate all admissible blocks connected to the ancestors of $t$, as well. To reflect this requirement, we extend the set introduced in (7) to
(8)
by including all admissible blocks connected to the parent, the light blue part in Figure 2, and by induction to any of its ancestors.
FIGURE 2. Directional farfield clusters (dark blue) and ancestors (light blue) in horizontal direction.
A suitable cluster basis satisfies
By combining all of these submatrices in a large matrix
we obtain the equivalent formulation
and the singular value decompositions of can be used to determine optimal isometric matrices with this property. The resulting algorithm, investigated in Reference 10, has quadratic complexity, since it does not take the special structure of into account.
If $G$ is already approximated by a $\mathcal{DH}^2$-matrix, for example, via directional interpolation, we can make this algorithm considerably more efficient. We start by considering the root $t$ of the cluster tree. Let $c \in D_t$. Since $G$ is a $\mathcal{DH}^2$-matrix, we have
and enumerating yields
The optimized cluster basis depends only on the singular values and left singular vectors of this combined matrix, not on the right singular vectors. In the factorized representation we have now obtained, this means that we can apply Householder reflections to condense the right factor into a small matrix without changing the resulting basis. Using the transformations directly, however, is too computationally expensive, so we are looking for a way to avoid it.
Definition 6. (basis weights) A family $(Z_{t,c})_{t \in \mathcal{T}_{\mathcal{I}},\, c \in D_t}$ of matrices is called a family of basis weights for the basis $(V_{t,c})$ if for every $t \in \mathcal{T}_{\mathcal{I}}$ and $c \in D_t$ there is an isometric matrix $P_{t,c}$ with
$$V_{t,c} = P_{t,c} Z_{t,c},$$
and the matrices $Z_{t,c}$ each have $k$ columns and at most $k$ rows.
If we have basis weights at our disposal, we obtain
and since the multiplication by an adjoint isometric matrix from the right does not change the singular values or left singular vectors, we can replace with
We can even go one step further and compute a thin Householder factorization of the right factor's adjoint
with an isometric matrix and a matrix that has only columns and not more than rows. If we set
we obtain
and can drop the rightmost adjoint isometric matrix to work just with the thin matrix that has only at most columns, since multiplication by an isometric matrix from the right does not change the resulting matrix .
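The following sketch condenses the right factor of the combined block row with a thin QR (Householder) factorization, as described above. It assumes that the admissible blocks of the considered row cluster are given by their coupling matrices and the basis weights of the corresponding column clusters; the function and variable names are illustrative.

```python
import numpy as np

def condensed_right_factor(S_blocks, Z_blocks):
    """The block row [... V_{t,c} S_b V_{s,c}^H ...] has the same singular values and
    left singular vectors as V_{t,c} R^H, where Q R is a thin QR factorization of the
    stacked matrix [Z_{s1} S_{b1}^H; Z_{s2} S_{b2}^H; ...]: both the isometric factors
    hidden in V_{s,c} = P_{s,c} Z_{s,c} and the factor Q can be dropped."""
    P = np.vstack([Z @ S.conj().T for S, Z in zip(S_blocks, Z_blocks)])
    R = np.linalg.qr(P, mode="r")    # small triangular factor with at most k rows
    return R.conj().T                # condensed right factor with at most k columns
```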
So far, we have only considered the root of the cluster tree. If is a nonroot cluster, it has a parent and our definition (8) yields
Let with . If we assume that has already been computed, we have
To apply this procedure to all directions , we enumerate them as
and the admissible blocks again as to get
The rightmost factors are again isometric, and we can once more compute a thin Householder factorization
to obtain
Since the isometric matrices do not influence the range of the combined matrix, we do not have to compute them; we only need the weight matrices.
Definition 7. (total weights) A family of matrices is called a family of total weights for the $\mathcal{DH}^2$-matrix $G$ if for every cluster $t \in \mathcal{T}_{\mathcal{I}}$ and every direction $c \in D_t$ there is an isometric matrix with
(9)
and the matrices each have $k$ columns and at most $k$ rows.
Remark 1. (Symmetric total weights) In the original approximation constructed by directional interpolation, the same cluster basis is used for rows and columns, since we have for all admissible blocks .
Since the matrix is not self-adjoint, this property no longer holds for the adaptively constructed basis and we would have to construct a separate basis for the columns by applying the procedure to the adjoint matrix .
A possible alternative is to extend the total weight matrices to handle and simultaneously: for every , we include not only in the construction of the weight , but also . This will give us an adaptive cluster basis that can be used for rows and columns, just like the original. Since the matrices appearing in the Householder factorization are now twice as large, the algorithm will take almost twice as long to complete and the adaptively chosen ranks may increase.
We can compute the total weights efficiently by this procedure as long as we have the basis weights at our disposal. These weights can be computed efficiently by taking advantage of their nested structure: if is a leaf, we compute the thin Householder factorization
with an isometric matrix and a matrix with columns and at most rows.
If has children, we first compute the basis weights for all children by recursion and let
(10)
where for all . We compute the thin Householder factorization
and find
The matrix is the product of two isometric matrices and therefore itself isometric. We can see that we can compute the basis weight matrices using only operations per and as long as we are not interested in . The algorithm is summarized in Figure 3.
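A sketch of this recursion is given below, assuming the data layout of the earlier sketches: the weight of a leaf is the triangular factor of a thin QR factorization of its leaf matrix, and the weight of a nonleaf cluster is obtained from the stacked products of its children's weights with the transfer matrices, cf. (10). The isometric factors are never formed.

```python
import numpy as np

def basis_weight(t, c, leaf_V, transfer, child_dir, weights):
    """Return Z_{t,c} with V_{t,c} = Q_{t,c} Z_{t,c} for an isometric Q_{t,c} that is
    never computed explicitly; computed weights are cached in `weights`."""
    if (id(t), c) in weights:
        return weights[(id(t), c)]
    if not t.children:
        Z = np.linalg.qr(leaf_V[(id(t), c)], mode="r")
    else:
        stacked = []
        for tc in t.children:
            cc = child_dir[(id(tc), c)]
            stacked.append(basis_weight(tc, cc, leaf_V, transfer, child_dir, weights)
                           @ transfer[(id(tc), c)])
        Z = np.linalg.qr(np.vstack(stacked), mode="r")
    weights[(id(t), c)] = Z
    return Z
```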
Now the adaptive cluster basis can be constructed by a recursion over the cluster tree. If $t$ is a leaf, we can work with the product of $V_{t,c}$ and the adjoint total weight: we can again drop the isometric matrix and only have to find the singular value decomposition of this product, choose a suitable rank, and use the first left singular vectors as columns of the matrix $Q_{t,c}$. We also prepare the matrix describing the change of basis from $V_{t,c}$ to $Q_{t,c}$.
If is not a leaf, we first construct the basis for all children . Since the parent can only approximate what has been kept by its children, we can switch to the orthogonal projection
that can be easily computed using the transfer matrices and the basis-change matrices. Once again we can drop the isometric factor and only have to compute the singular value decomposition of , choose again a suitable rank and use the first left singular vectors as columns of a matrix . Using
gives us the new cluster basis, where the transfer matrices can be extracted from . Again it is a good idea to prepare the basis-change matrix for the next steps of the recursion.
Under standard assumptions, the entire construction can be performed in a number of operations growing essentially linearly with the matrix dimension for constant wave numbers, and with an additional logarithmic factor in the high-frequency case [11]. The algorithm is summarized in Figure 4. It is important to note that the total weight matrices can be constructed and discarded during the recursive algorithm, they do not have to be kept in storage permanently. This is in contrast to the basis weight matrices that may appear at any time during the recursion and therefore are kept in storage during the entire run of the algorithm.
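The truncation step performed for every cluster and direction can be summarized as follows, assuming the product of the (projected) cluster basis with the adjoint total weight has already been formed; the rank is chosen so that the first dropped singular value is below the prescribed tolerance, and the basis-change matrix for the next level of the recursion is returned as well.

```python
import numpy as np

def compress_cluster_basis(V, Z_total, eps):
    """Return an isometric Q whose columns are the leading left singular vectors of
    A = V Z_total^H, so that ||A - Q Q^H A||_2 <= eps, together with the basis-change
    matrix R = Q^H V used by the parent cluster."""
    U, sigma, _ = np.linalg.svd(V @ Z_total.conj().T, full_matrices=False)
    rank = max(1, int(np.sum(sigma > eps)))   # first dropped singular value is below eps
    Q = U[:, :rank]
    return Q, Q.conj().T @ V
```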
Figure 5 shows an experiment with the single-layer matrix $G$ with a fixed wave number and piecewise constant basis functions on the unit sphere. The left image illustrates that the error decreases exponentially both with and without recompression. The right image compares the storage requirements for the original matrix (magenta, nearfield, farfield and bases) with those for the recompressed matrix (green, again nearfield, farfield, and bases). The blue curve corresponds to the storage needed for the basis weights, and we can see that it grows beyond the storage for the entire recompressed $\mathcal{DH}^2$-matrix if higher polynomial orders are used. This is not acceptable if we want to apply the method to high-frequency problems at high accuracies, so we will now work to reduce the storage requirements for these weights without harming the convergence of the overall method.
FIGURE 5. Left: Convergence of recompressed interpolation with increasing order. Right: Storage requirements of original interpolation and recompression.
4 ERROR CONTROL
In order to preserve the convergence properties, we have to investigate how our algorithm reacts to perturbations.
4.1 Error decomposition
We consider an admissible block with that is approximated by our algorithm by
If is a leaf, the approximation error is given by
If is not a leaf, there are children with directions , , and an isometric matrix such that
and the approximation error can be split into
The ranges of both terms are perpendicular: for any pair of vectors we have
due to . By Pythagoras' theorem, this implies
(12)
that is, we can split the error exactly into contributions of the children and a contribution of the parent . If the children have children again, we can proceed by induction. To make this precise, we introduce the sets of descendants
for all and .
Theorem 1. (error representation) We define
In the previous section, we have already defined the corresponding matrices for nonleaf clusters. We extend this notation by defining them accordingly for all leaf clusters and all directions. Then we have
and a straightforward induction yields the result.
We can see that the matrices required by this theorem appear explicitly in the compression algorithm: is the combination of all matrices for if is a leaf, and otherwise is the combination of all matrices for .
The compression algorithm computes the singular value decompositions of the matrices for leaf and nonleaf clusters, respectively, so we have all the singular values at our disposal to guarantee the corresponding error bounds for any given accuracy $\epsilon$ by ensuring that the first dropped singular value is less than $\epsilon$.
4.2 Block-relative error control
Although multiple submatrices are combined in one matrix, they do not all have to be treated identically [12, Chapter 6.8]: we can scale the different submatrices with individually chosen weights; for example, given a cluster $t$ and a direction $c \in D_t$, we can choose a weight for every admissible block and replace the corresponding submatrix by
with the enumeration . Correspondingly, is replaced by a weighted version and by . The modified algorithm will now guarantee
for leaf clusters and
for nonleaf clusters. With these modifications, Theorem 1 yields
(13)
The weights can be used to keep the error closely under control. As an example, we consider how to implement block-relative error control, that is, how to ensure
for a block . We start by observing that we have
due to the admissibility and using the basis weights introduced in Definition 6, so the spectral norm can be computed efficiently.
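In other words, the spectral norm of an admissible block can be evaluated from small matrices only. A minimal sketch, assuming both the row and the column basis are factored through their basis weights as in Definition 6:

```python
import numpy as np

def block_norm(S_b, Z_row, Z_col):
    """||V_{t,c} S_b V_{s,c}^H||_2 = ||Z_{t,c} S_b Z_{s,c}^H||_2, since multiplying by
    isometric matrices from the left and right does not change the spectral norm."""
    return np.linalg.norm(Z_row @ S_b @ Z_col.conj().T, ord=2)
```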
We assume that a cluster can have at most children and set
Keeping in mind that every cluster can have at most children, substituting in (13) and summing up level by level yields
allowing us to evaluate the geometric sum to conclude
The weights can be computed and conveniently included during the construction of the total weights at only minimal additional cost.
4.3 Stability
In order to improve the efficiency of our algorithm, we would like to replace the $\mathcal{DH}^2$-matrix by an approximation. If we want to ensure that the result of the compression algorithm is still useful, we have to investigate its stability. In the following, $\widetilde{G}$ denotes the matrix treated during the compression algorithm, while $G$ denotes the matrix that we actually want to approximate.
Lemma 1. (stability)Let . We have
for allwithand all .
Proof.Let , and . We have
and therefore by the triangle inequality
We let and make use of the isometry of to find
Since this is equivalent to the assertion of the lemma, the proof is complete.
This lemma implies that if we want to approximate a matrix $G$, but apply the algorithm to an approximation $\widetilde{G}$ satisfying the block-relative error estimate
we will obtain
that is, the basis constructed to ensure block-relative error estimates for the matrix $\widetilde{G}$ will also ensure block-relative error estimates for the matrix $G$, only with a slightly larger error factor. Since our error-control strategy can ensure any prescribed accuracy, this is quite satisfactory.
5 APPROXIMATED WEIGHTS
Figure 5 suggests that for higher accuracies, the basis weights can require more storage than the entire recompressed $\mathcal{DH}^2$-matrix. With the error representation of Theorem 1 and the stability analysis of the previous section at our disposal, we can investigate ways to reduce the storage requirements without causing significant harm to the final result.
We do not have to worry about the total weights , since they can be set up during the recursive construction of the adaptive cluster basis.
5.1 Direct approximation of weights
The basis weight matrices are required by our algorithm when it sets up the total weight matrix with
using
for an admissible block . The isometric matrix influences only and can be dropped since it does not influence the singular values or the left singular vectors.
Our goal is to replace the basis weight by an approximation while ensuring that the recompression algorithm keeps working reliably. We find
and conclude that it is sufficient to ensure that the product of the coupling matrices with the compressed weight is a good approximation of the product with the original weight; we do not require the compressed weight itself to be a good approximation of the original. This is a crucial observation, because important approximation properties are due to the kernel function represented by the coupling matrices, not due to the essentially arbitrary polynomial basis represented by the basis weight.
Since the basis weight will be used for multiple clusters , we introduce the sets
(14)
in analogy to the sets used for the compression algorithm in (7). Enumerating the elements by leads us to consider the approximation of the matrix
(15)
The optimal solution is again provided by the singular value decomposition of : for the singular values and a given accuracy , we choose a rank such that and combine the first left singular vectors in an isometric matrix . We use the corresponding low-rank approximation and find
The resulting algorithm is summarized in Figure 6. It is important to note that the original basis weights are discarded as soon as they are no longer needed so that the original weights have to be kept in storage only for the children of the current cluster and the children of its ancestors at every point of the algorithm.
For the basis construction algorithm, cf. Figure 4, only the coefficient matrices are required, since the isometric matrices do not influence the singular values and the left singular vectors. Our algorithm only provides these matrices to save storage.
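The following sketch summarizes this compression step under the assumption that the matrix in (15) collects the products of the basis weight with the adjoint coupling matrices of all blocks that use this weight; the compressed weight is then the projection of the original weight onto the leading left singular vectors, so only the products that actually enter the total weights are preserved.

```python
import numpy as np

def compress_basis_weight(Z, coupling_matrices, eps):
    """Replace the basis weight Z by a matrix with fewer rows such that Z S_b^H is
    reproduced up to eps for every coupling matrix S_b using this weight; Z itself
    need not be approximated accurately."""
    X = np.hstack([Z @ S.conj().T for S in coupling_matrices])   # cf. (15)
    U, sigma, _ = np.linalg.svd(X, full_matrices=False)
    rank = max(1, int(np.sum(sigma > eps)))     # first dropped singular value below eps
    return U[:, :rank].conj().T @ Z             # Z is approximated by U[:, :rank] @ result
```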
5.2 Block-relative error control
Again, we are interested in blockwise relative error estimates, and as before, we can modify the blockwise approximation by introducing weights and considering
(16)
Replacing by yields
For the blockwise error we obtain
so a relative error bound is guaranteed if we ensure
Evaluating the numerator and denominator exactly would require us again to have the full basis weights at our disposal. Fortunately, a projection is sufficient for our purposes: in a preparation step, we compute the basis weight , find its singular value decomposition and use a given number of left singular vectors to form an auxiliary isometric matrix and store . Since the first singular value corresponds to the spectral norm of , and therefore the spectral norm of , we have
and can evaluate the denominator exactly. Since is an orthogonal projection, we also have
that is, we can find a lower bound for the numerator. Fortunately, a lower bound is sufficient for our purposes, and we can use
The algorithm for constructing the norm-estimation matrices is summarized in Figure 7. It uses exact Householder factorizations for all basis matrices and then truncates . In order to make the computation efficient, we use
(17)
to replace by the projected matrix with and for all .
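A minimal sketch of the norm-estimation matrices, assuming they are obtained by projecting each basis weight onto a fixed number of its leading left singular vectors: the projected weight reproduces the spectral norm of the original weight exactly and yields a computable lower bound for the norms of its products with the coupling matrices.

```python
import numpy as np

def norm_estimation_matrix(Z, q):
    """N = U_q^H Z for the q leading left singular vectors U_q of Z:
    ||N||_2 = ||Z||_2, and ||N S_b^H||_2 <= ||Z S_b^H||_2 for any coupling matrix S_b."""
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :q].conj().T @ Z
```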
Figure 8 shows that compressing the basis weights following these principles leaves the accuracy of the matrix intact and significantly reduces the storage requirements.
FIGURE 8. Left: Convergence of recompressed interpolation with compressed weights. Right: Storage requirements of compressed and uncompressed weights.
6 NUMERICAL EXPERIMENTS
To demonstrate the properties of the new algorithms in practical applications, we consider direct boundary integral formulations for the Helmholtz problem on the unit sphere. We create a mesh for the unit sphere by starting with a double pyramid and regularly refining each of its faces into smaller triangles. Projecting these triangles' vertices to the unit sphere yields regular surface meshes with between 8192 and 8,388,608 triangles.
We discretize the direct boundary integral formulations for the Dirichlet-to-Neumann and the Neumann-to-Dirichlet problem with piecewise constant basis functions for the Neumann values and continuous linear nodal basis functions for the Dirichlet values. The approximation of the single-layer matrix $G$ by a $\mathcal{DH}^2$-matrix has already been discussed.
For the double-layer matrix $K$, we apply directional interpolation to the kernel function and take the normal derivative of the result. This again yields a $\mathcal{DH}^2$-matrix. The approximation error has been investigated in Reference 6.
For the hypersingular matrix, we use partial integration [13, Corollary 3.3.24] and again apply directional interpolation to the remaining kernel function.
In order to save storage, basis weights for the row and column basis of the single-layer matrix and the row basis of the double-layer matrix are shared, and basis weights for the column basis of the double-layer matrix and the row and column basis of the hypersingular matrix are also shared. In our implementation, this can be easily accomplished by including more matrix blocks in the matrices .
The resulting systems of linear equations are solved by a GMRES method that is preconditioned using an $\mathcal{H}$-LU factorization [14, Chapter 7.6] of a coarse approximation of the system matrix.
Table 1 contains results for a first experiment with a constant wave number. The column “n” gives the number of triangles, the column “m” gives the order of the interpolation, the column “Norm” gives the time in seconds for the approximation of the matrix norm with the algorithm given in Figure 7, the column “Comp” gives the time for the compression of the weights by the algorithm in Figure 6, and “Mem” the storage requirements in MB for the compressed weight matrices.
TABLE 1. Helmholtz boundary integral equation with constant wave number.

| n | m | Norm (s) | Comp (s) | Weight Mem (MB) | SLP Row (s) | SLP Col (s) | SLP Mem (MB) | DLP Row (s) | DLP Col (s) | DLP Mem (MB) |
|---|---|---|---|---|---|---|---|---|---|---|
| 8192 | 3 | 0.1 | 2.6 | 5 | 0.2 | 0.3 | 319 | 0.1 | 0.1 | 331 |
| 18,432 | 4 | 0.2 | 11.5 | 37 | 2.0 | 2.1 | 588 | 1.4 | 1.4 | 648 |
| 32,768 | 4 | 0.2 | 8.7 | 80 | 3.0 | 3.0 | 943 | 1.6 | 2.7 | 951 |
| 73,728 | 5 | 0.9 | 70.5 | 431 | 5.7 | 5.8 | 2013 | 4.4 | 4.3 | 1792 |
| 131,072 | 5 | 1.3 | 101.3 | 853 | 9.1 | 9.4 | 3594 | 7.0 | 6.5 | 3243 |
| 294,912 | 5 | 3.6 | 183.1 | 2072 | 20.7 | 20.9 | 8234 | 14.7 | 13.7 | 7774 |
| 524,288 | 6 | 6.5 | 447.7 | 7217 | 71.1 | 52.6 | 15,797 | 44.1 | 37.4 | 13,959 |
| 1,179,648 | 6 | 14.1 | 824.2 | 17,091 | 156.9 | 119.6 | 37,949 | 100.7 | 71.5 | 35,281 |
| 2,097,152 | 7 | 26.1 | 3310.3 | 51,516 | 1062.4 | 718.8 | 71,768 | 602.6 | 513.3 | 62,032 |
| 4,718,592 | 7 | 57.8 | 7034.6 | 122,268 | 2704.3 | 1858.2 | 176,761 | 1606.0 | 1316.2 | 162,386 |
| 8,388,608 | 7 | 101.9 | 12,041.2 | 224,330 | 4821.9 | 3636.7 | 332,950 | 2799.6 | 2616.1 | 323,817 |
The columns “Row” and “Col” give the times in seconds for constructing the adaptive row and column cluster bases, both for the single-layer and the double-layer matrix, while the columns “Mem” give the storage requirements in MB for the compressed -matrices.
The experiment was performed on a server with two AMD EPYC 7713 processors with 64 cores each and a total of 2048 GB of memory.
Our theoretical investigation suggests that the runtime and storage requirements grow almost linearly with the number of triangles for a constant wave number, and this prediction seems to be confirmed by our experiment. Truncation tolerances were chosen to ensure that the convergence of the original Galerkin discretization is preserved, that is, we obtain $L^2$-norm errors falling at the expected rates for the Dirichlet-to-Neumann problem and the Neumann-to-Dirichlet problem.
Table 2 contains results for a second experiment with a wave number that grows as the mesh is refined. This is common in practical applications when the mesh is chosen just fine enough to resolve waves.
TABLE 2. Helmholtz boundary integral equation with growing wave number.

| n | κ | m | Norm (s) | Comp (s) | Weight Mem (MB) | SLP Row (s) | SLP Col (s) | SLP Mem (MB) | DLP Row (s) | DLP Col (s) | DLP Mem (MB) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 8192 | 4 | 3 | 0.1 | 2.3 | 5 | 0.8 | 0.9 | 319 | 0.3 | 0.4 | 331 |
| 18,432 | 6 | 4 | 0.2 | 12.3 | 35 | 3.2 | 3.2 | 993 | 1.7 | 2.0 | 1057 |
| 32,768 | 8 | 4 | 0.3 | 33.8 | 77 | 7.2 | 7.4 | 2295 | 4.6 | 4.5 | 2336 |
| 73,728 | 12 | 5 | 0.9 | 246.4 | 428 | 24.4 | 25.0 | 7318 | 19.4 | 18.7 | 7507 |
| 131,072 | 16 | 5 | 1.6 | 559.9 | 966 | 50.1 | 47.2 | 16,888 | 42.8 | 37.8 | 17,170 |
| 294,912 | 24 | 5 | 5.6 | 1579.0 | 3599 | 125.5 | 123.6 | 43,792 | 107.8 | 103.0 | 45,345 |
| 524,288 | 32 | 6 | 13.2 | 6815.1 | 14,383 | 474.0 | 331.5 | 94,766 | 489.1 | 389.5 | 93,371 |
| 1,179,648 | 48 | 6 | 34.7 | 19,657.0 | 44,523 | 1324.8 | 1008.2 | 255,373 | 1540.6 | 979.9 | 259,687 |
| 2,097,152 | 64 | 7 | 105.5 | 84,707.6 | 174,972 | 7622.8 | 6252.1 | 455,584 | 8541.0 | 6106.9 | 438,976 |
The growing wave number makes it significantly harder to satisfy the admissibility condition (3a) and thereby leads to an increase in the number of blocks that have to be treated. The admissibility condition (3b) implies that we also have to introduce a growing number of directions as the wave number increases.
The results of our experiment show the expected increase both in computing time and storage requirements: for 2,097,152 triangles, the setup takes far longer than in the low-frequency case, since more than thirteen times as many blocks have to be considered. This is not surprising, since the complexity analysis predicts a correspondingly higher cost for the high-frequency case. In this case, a single coupling matrix requires 4 MB of storage, and storing several of these matrices during the setup of the matrix can be expected to exceed the capacity of the available cache memory, thus considerably slowing down the computation.
The compression of the weight matrices takes almost as much time as the entire basis construction. Fortunately, the basis construction significantly benefits from the compressed weights and runs twice as fast. This effect cannot fully compensate for the time spent compressing the weight matrices, but it ensures that the new memory-efficient algorithm is competitive with the original version.
Figure 9 illustrates the algorithms' practical performance: the left figure shows the runtime per degree of freedom using a logarithmic scale for the number of triangles. We can see that the runtimes grow as predicted, with the slope depending on the order of interpolation.
FIGURE 9. Left: Runtime per degree of freedom. Right: Storage requirements per degree of freedom.
The right figure shows the storage, again per degree of freedom, and we can see that the compressed $\mathcal{DH}^2$-matrices for the three operators show the expected behavior. The compressed weights grow slowly, while the uncompressed weights appear to be set to surpass the storage requirements of the matrices they are used to construct.
ACKNOWLEDGMENTS
Open Access funding enabled and organized by Projekt DEAL.
FUNDING INFORMATION
Part of this research was funded by the Deutsche Forschungsgemeinschaft in project BO 3289/7-1.
CONFLICT OF INTEREST STATEMENT
The authors are not aware of any conflicts of interest concerning this article.
REFERENCES
3. Greengard L, Rokhlin V. A new version of the fast multipole method for the Laplace equation in three dimensions. Acta Numerica. Cambridge: Cambridge University Press; 1997: 229–269.
7. Börm S, Melenk JM. Approximation of the high-frequency Helmholtz kernel by nested directional interpolation: error analysis. Numer Math. 2017; 137(1): 1–34.
8. Brandt A. Multilevel computations of integral transforms and particle interactions with oscillatory kernels. Comput Phys Commun. 1991; 65(1–3): 24–38.
9. Messner M, Schanz M, Darve E. Fast directional multilevel summation for oscillatory kernels based on Chebyshev interpolation. J Comput Phys. 2012; 231(4): 1175–1196.