Volume 3, Issue 6 e1145
SPECIAL ISSUE PAPER

Parallel framework for dynamic domain decomposition of data assimilation problems: a case study on Kalman Filter algorithm

Luisa D'Amore (Corresponding Author)

Department of Mathematics and Applications, University of Naples Federico II, Naples, Italy

Correspondence: Luisa D'Amore, Department of Mathematics and Applications, University of Naples Federico II, Complesso Monte Sant'Angelo, Via Cintia, Naples, Italy. Email: [email protected]

Rosalba Cacciapuoti

Department of Mathematics and Applications, University of Naples Federico II, Naples, Italy
First published: 02 January 2021

Abstract

We focus on Partial Differential Equation (PDE)-based Data Assimilation (DA) problems solved by means of variational approaches and the Kalman filter algorithm. Recently, we presented a Domain Decomposition framework (called DD-DA, for short) performing a decomposition of the whole physical domain along the space and time directions, joining the idea of Schwarz methods and parallel-in-time approaches. For effective parallelization of DD-DA algorithms, the computational load assigned to subdomains must be equally distributed. Usually the computational cost is proportional to the amount of data entities assigned to partitions. Good quality partitioning also requires the volume of communication during calculation to be kept at its minimum. In order to deal with DD-DA problems where the observations are nonuniformly distributed and generally sparse, in the present work we employ a parallel load balancing algorithm, based on adaptive and dynamic redefinition of the DD boundaries, aimed at balancing the workload according to data location. We call it DyDD. As the numerical model underlying DA problems arising from the so-called discretize-then-optimize approach is the constrained least square model (CLS), we use CLS as a reference state estimation problem and we validate DyDD on different scenarios.

1 INTRODUCTION

Data assimilation (DA, for short) encompasses the entire sequence of operations that, starting from observations/measurements of physical quantities and from additional information, such as a mathematical model governing the evolution of these quantities, improve their estimate by minimizing inherent uncertainties. DA problems are usually formulated as an optimization problem where the objective function measures the mismatch between the model predictions and the observed system states, weighted by the inverse of the error covariance matrices.1, 2 In operational DA the amount of observations is insufficient to fully describe the system and one cannot strongly rely on a data-driven approach: the model is paramount. It is the model that fills the spatial and temporal gaps in the observational network: it propagates information from observed to unobserved areas. Thus, DA methods are designed to achieve the best possible use of a never sufficient (albeit constantly growing) amount of data, and to attain an efficient data-model fusion, in a short period of time. This poses a formidable computational challenge, and makes DA an example of big data inverse problems.3-6 There are many DA algorithms. Two main approaches have gained acceptance as powerful methods: the variational approach (namely, 3DVAR and 4DVAR) and the Kalman filter (KF).7-9 Variational approaches are based on the minimization of an objective function estimating the discrepancy between numerical results and observations. These approaches assume that the two sources of information, forecast and observations, have errors that are adequately described by stationary error covariances. In contrast to variational methods, KF (and its variants) is a recursive filter solving the Euler-Lagrange equations. It uses a dynamic error covariance estimate evolving over a given time interval.
The process is sequential, meaning that observations are assimilated in chronological order, and KF alternates a forecast step, in which the covariance is evolved, with an analysis step, in which the covariance of the filtering conditional distribution is updated. In both kinds of methods the model is integrated forward in time and the result is used to reinitialize the model before the integration continues. For further details, interested readers are referred to References 10-12.

The main operators of any DA algorithm are the dynamic model and the observation mapping. These are the two main components of any variational approach and state estimation problem, too. In the following, we start by considering the CLS model, seen as a prototype of any DA model.13 CLS is obtained by combining two overdetermined linear systems, representing the state and the observation mapping, respectively. In this regard, in Reference 14 we presented a feasibility analysis, on constrained least square (CLS) models, of an innovative domain decomposition framework for using CLS in large-scale applications. The DD-DA framework, based on the Schwarz approach, properly combines localization and partial differential equation (PDE)-based model reduction, inheriting the advantages of both techniques for effectively solving any kind of large-scale and/or real-time KF application. It involves decomposition of the physical domain, partitioning of the solution, filter localization and model reduction, both in space and in time. There is a quite different rationale behind this framework and the so-called model order reduction (MOR) methods,15 even though they are closely related to each other. The primary motivation of Schwarz methods was the inherent parallelism arising from a flexible, adaptive and independent decomposition of the given problem into several subproblems, though they can also reduce the complexity of sequential solvers. Schwarz methods and theoretical frameworks are, to date, the most mature for this class of problems.13, 16-18 MOR techniques are based on the projection of the full-order model onto a lower dimensional space spanned by a reduced-order basis. These methods have been used extensively in a variety of fields for efficient simulations of highly intensive computational problems.
Nevertheless, all numerical issues concerning the quality of the approximation remain of paramount importance.19 As mentioned previously, the DD-DA framework makes it natural to switch from a full scale solver to a model order reduction solver for the solution of subproblems for which no relevant low-dimensional reduced space needs to be constructed. In the same way, the DD-DA framework allows one to employ a model reduction in space and time which is coherent with the filter localization. In conclusion, the main advantage of the DD-DA framework is to combine, in the same theoretical framework, model reduction along the space and time directions with filter localization, while providing a flexible, adaptive, reliable and robust decomposition. That said, any interested reader who wants to apply the DD-DA framework in a real-world application, that is, with a (PDE-based) model state and an observation mapping, should, once the dynamic (PDE-based) model has been discretized, rewrite the state estimation problem under consideration as a CLS model problem (cfr Section 3.1) and then apply the DD-DA algorithm. In other words, she/he should follow the discretize-then-optimize approach, common to most DD-DA problems and state estimation problems, before employing the DD-DA framework. Summarizing, the main steps of the DD-DA framework can be listed as follows.
  1. DD step: we begin by partitioning the domain along space and time into subdomains and then extending each subdomain to overlap its neighbors by a prescribed amount. The partitioning can be adapted according to the availability of measurements and data.
  2. Filter Localization and MOR: on each subdomain we formulate a local DA problem analogous to the original one, combining filter localization and MOR approaches.
  3. Regularization constraints: in order to enforce the matching of local solutions on the overlapping regions, local DA problems are slightly modified by adding a correction term. Such a correction term balances localization errors and computational efforts, acting as a regularization constraint on local solutions. This is a typical approach for solving ill posed inverse problems (see, for instance, 33).
  4. Parallel in time: as the dynamic model is coupled with DA operator, at each integration step we employ, as initial and boundary values of all reduced models, estimates provided by the DA model itself, as soon as these are available.
  5. Conditioning: localization excludes remote observations from each analyzed location, thereby improving the conditioning of the error covariance matrices.

To the best of our knowledge, such an ab initio space and time decomposition of DA models has never been investigated before. A spatially distributed KF, decomposed into sensor-based reduced-order models and implemented on a sensor network where multiple sensors are mounted on a robot platform for target tracking, is presented in References 20-22.

1.1 Contribution of the present work

The introduction of a dynamic redefinition of the DD (we call it DyDD), aimed at efficiently dealing with DA problems where observations are nonuniformly distributed and generally sparse, is the main focus of the present work. Indeed, in such cases a static and/or a priori DD strategy may not ensure a well-balanced workload, while a way to repartition the domain so that subdomains maintain a nearly equal number of observations plays an essential role in the success of any effective DD approach. We present a revision of the DD-DA framework in which a dynamic load balancing algorithm allows for a minimal data movement, restricted to the neighboring processors. This is achieved by considering the connected graph induced by the domain partitioning, whose vertices represent the subdomains, each associated with a scalar representing the number of observations on that subdomain. Once the domain has been partitioned, a load balancing schedule (scheduling step) should make the load on each subdomain equal to the average load, providing the amount of load to be sent to adjacent subdomains (migration step). The most intensive kernel is the scheduling step, which defines a schedule for computing the load imbalance (which we quantify in terms of the number of observations) among neighboring subdomains. This quantity is then used to update the shifting of the boundaries of adjacent subdomains (migration step), which are finally remapped to achieve a balanced decomposition. We assume that load balancing is restricted to the neighboring domains so that the overhead processing time is reduced. Finally, following Reference 23, we use a diffusion-type scheduling algorithm minimizing the Euclidean norm of data movement. The resulting constrained optimization problem leads to normal equations whose matrix is associated to the decomposition graph.
The disadvantage is that the overhead time, due to the final balance among subdomains, strongly depends on the degree of the vertices of the processor graph, given by the number of neighboring subdomains of each subdomain. Such overhead represents the surface-to-volume ratio, whose impact on the overall performance of the parallel algorithm decreases as the problem size increases.

1.2 Related works

There has been widespread interest in load balancing since the introduction of large-scale multiprocessors.24 Applications requiring dynamic load balancing mainly include the parallel solution of a PDE by finite elements on unstructured grids25 or parallelized particle simulations.26 Load balancing is one of the central problems which have to be solved in designing parallel algorithms. Moreover, problems whose workload changes during the computation, or depends on a data layout which may be unknown a priori, necessitate the redistribution of the data in order to retain efficiency. Such a strategy is known as dynamic load balancing. Algorithms for dynamic load balancing, as in References 27-30, are based on transferring an amount of work from processors to their neighbors; the process is iterated until the load difference between any two processors is smaller than a specified value, and consequently it does not provide a balanced solution immediately. A multilevel diffusion method for dynamic load balancing31 is based on bisection of the processor graph. The disadvantage is that, to ensure the connectivity of subgraphs, movement of data between nonneighboring processors can occur. The mentioned algorithms do not take into account one important factor, namely that the data movement resulting from the load balancing schedule should be kept to a minimum.

1.2.1 Organization of the work

The present work is organized as follows. As we apply the proposed framework to the CLS model, which can be seen as a prototype of variational DA models, in order to improve the readability of the article, in Section 2 we give a brief overview of DA methods, that is, both KF and variational DA, the variational formulation of KF and, finally, a brief description of the CLS model. In Sections 3 and 4, we describe the main features of the DD-DA framework and its application to the CLS model. DyDD is presented in Section 5, through a graphical description and the numerical algorithm. Validation and performance results are presented in Section 6 and, finally, we give conclusions and future work.

2 THE BACKGROUND

In order to improve the readability of the article, in this section we give a brief overview of DA methods, that is, both KF and Variational DA, then we review CLS model as prototype of DA models. To this end, we also review the variational formulation of KF, that is, the so-called VAR–KF formulation, obtained minimizing the sum of the weighted Euclidean norm of the model error and the weighted Euclidean norm of the observation error.

2.1 Kalman filter

Given $x_0\in\mathbb{R}^n$, let $x(t)\in\mathbb{R}^n$, $\forall t\in[0,T]$, denote the state of a dynamic system governed by the mathematical model $\mathcal{M}_{t,t+\Delta t}$, $\Delta t>0$:
$$x(t+\Delta t)=\mathcal{M}_{t,t+\Delta t}(x(t)),\quad t,\,t+\Delta t\in[0,T];\qquad x(0)=x_0,\qquad (1)$$
and let:
$$y(t+\Delta t)=\mathcal{H}_{t+\Delta t}[x(t+\Delta t)],\qquad (2)$$
denote the observations, where $\mathcal{H}_{t+\Delta t}$ is the observations mapping. Chosen $r\in\mathbb{N}$, we consider $r+2$ points in $[0,T]$ and $\Delta t=\frac{T}{r+1}$.
Let $\{t_k\}_{k=0,1,\ldots,r+1}$ be a discretization of $[0,T]$, where $t_k=k\,\Delta t$, and let $\hat{x}_k$ be the state estimate at time $t_k$, for $k=1,\ldots,r+1$; we will use the following operators:9 for $k=0,1,\ldots,r$, $M_{k,k+1}\in\mathbb{R}^{n\times n}$, denoting the discretization of a linear approximation of $\mathcal{M}_{t_k,t_{k+1}}$, and, for $k=0,1,\ldots,r+1$, $H_k\in\mathbb{R}^{m\times n}$, which is the discretization of a linear approximation of $\mathcal{H}_{t_k}$, with $m>n$. Moreover, we let $w_k\in\mathbb{R}^n$ and $v_k\in\mathbb{R}^m$ be model and observation errors with normal distribution and zero mean such that $E[w_k v_i^T]=0$, for $i,k=0,1,\ldots,r+1$, where $E[\cdot]$ denotes the expected value; $Q_k\in\mathbb{R}^{n\times n}$ and $R_k\in\mathbb{R}^{m\times m}$ are the covariance matrices of the errors on the model and on the observations, respectively, that is,
$$Q_k:=E[w_k w_k^T],\qquad R_k:=E[v_k v_k^T],\qquad k=0,1,\ldots,r+1.$$
These matrices are symmetric and positive definite.
KF method: the KF method consists in calculating the estimate $\hat{x}_{k+1}$, at time $t_{k+1}$, of the state $x_{k+1}\in\mathbb{R}^n$:
$$x_{k+1}=M_{k,k+1}\,x_k+w_k,\quad k=0,1,\ldots,r\qquad (3)$$
such that
$$y_{k+1}=H_{k+1}\,x_{k+1}+v_{k+1},\quad k=0,1,\ldots,r.\qquad (4)$$
KF algorithm: Given $\hat{x}_0\in\mathbb{R}^n$ and $P_0=O\in\mathbb{R}^{n\times n}$ a null matrix, for each $k=0,1,\ldots,r$ the KF algorithm is made of two main operations: the Predictor phase, consisting of the computation of the predicted state estimate:
$$x_{k+1}=M_{k,k+1}\,\hat{x}_k;\qquad (5)$$
and of the predicted error covariance matrix:
$$P_{k+1}=M_{k,k+1}\,P_k\,M_{k,k+1}^T+Q_k;\qquad (6)$$
and the Corrector phase, consisting of the computation of the Kalman gain:
$$K_{k+1}=P_{k+1}H_{k+1}^T\left(H_{k+1}P_{k+1}H_{k+1}^T+R_{k+1}\right)^{-1},\qquad (7)$$
of the Kalman covariance matrix:
$$P_{k+1}=(I-K_{k+1}H_{k+1})\,P_{k+1},$$
and of the Kalman state estimate:
$$\hat{x}_{k+1}=x_{k+1}+K_{k+1}\left(y_{k+1}-H_{k+1}x_{k+1}\right).\qquad (8)$$
Finally, we introduce the VAR-KF model. For $k=0,1,\ldots,r$:
$$\hat{x}_{k+1}=\operatorname*{argmin}_{x_{k+1}\in\mathbb{R}^n}J_{k+1}(x_{k+1})=\operatorname*{argmin}_{x_{k+1}\in\mathbb{R}^n}\left\{\left\|x_{k+1}-M_{k,k+1}\hat{x}_k\right\|^2_{Q_k^{-1}}+\left\|y_{k+1}-H_{k+1}x_{k+1}\right\|^2_{R_{k+1}^{-1}}\right\}.$$
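The predictor-corrector structure of the KF algorithm above can be sketched as follows (a minimal NumPy sketch of the standard KF recursion, not the authors' code; the model $M$, mapping $H$, covariances, and the nonzero initial $P$ are illustrative placeholders):

```python
import numpy as np

def kf_step(x_hat, P, M, Q, H, R, y):
    """One Kalman filter iteration: Predictor phase, then Corrector phase."""
    # Predictor: propagate the state estimate and the error covariance
    x_pred = M @ x_hat
    P_pred = M @ P @ M.T + Q
    # Corrector: Kalman gain, updated covariance, updated state estimate
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    P_new = (np.eye(x_hat.size) - K @ H) @ P_pred
    x_new = x_pred + K @ (y - H @ x_pred)
    return x_new, P_new

# Toy usage: static 2-state system (M = I), observation operator with m > n
n = 2
M, Q = np.eye(n), 0.01 * np.eye(n)
H = np.vstack([np.eye(n), np.eye(n)])             # m = 4 > n = 2, as in the text
R = np.eye(2 * n)
x_true = np.array([1.0, -2.0])
x_hat, P = np.zeros(n), np.eye(n)                 # demo uses P0 = I instead of the null matrix
for _ in range(50):
    x_hat, P = kf_step(x_hat, P, M, Q, H, R, H @ x_true)
```

With repeated exact observations the estimate is driven toward the true state, while the covariance settles at a steady-state value balancing the model noise $Q$ against the observation weight $R$.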

3 VAR DA MODEL SET UP

If $\Omega\subset\mathbb{R}^n$, $n\in\mathbb{N}$, is a spatial domain with a Lipschitz boundary, let:
$$\begin{cases}u(t+h,x)=\mathcal{M}_{t,t+h}[u(t,x)] & x\in\Omega,\ t,\,t+h\in[0,T],\ (h>0)\\ u(t_0,x)=u_0(x) & t_0\geq 0,\ x\in\Omega\\ u(t,x)=f(x) & x\in\partial\Omega,\ t\in[0,T],\end{cases}\qquad (9)$$
be a symbolic description of the 4D-DA model of interest, where
$$u:(t,x)\in[0,T]\times\Omega\mapsto u(t,x)=\left[u^{[1]}(t,x),u^{[2]}(t,x),\ldots,u^{[p_v]}(t,x)\right],$$
is the state function of $\mathcal{M}$, with $p_v$ the number of physical variables, and $f$ is a known function defined on the boundary $\partial\Omega$; let
$$v:(t,x)\in[0,T]\times\Omega\mapsto v(t,x),$$
be the observations function, and
$$\mathcal{H}:u(t,x)\mapsto v(t,x),\quad (t,x)\in[0,T]\times\Omega,$$
denote the nonlinear observations mapping. To simplify future treatments we assume $p_v\equiv 1$. We consider: $N_p$ points of $\Omega$, $\{x_j\}_{j=1,\ldots,N_p}\subset\Omega$; $n_{obs}$ points of $\Omega$, $\{y_j\}_{j=1,\ldots,n_{obs}}$, where $n_{obs}\ll N_p$; $N$ points of $[0,T]$, $D([0,T])=\{t_l\}_{l=0,1,\ldots,N-1}$ with $t_l=t_0+l\,h_T$; the vector
$$u_0=\{u_{0,j}\}_{j=1,\ldots,N_p}\equiv\{u_0(x_j)\}_{j=1,\ldots,N_p}\in\mathbb{R}^{N_p},$$
which is the state at time $t_0$; the operator
$$M_{l-1,l}\in\mathbb{R}^{N_p\times N_p},\quad l=1,\ldots,N,$$
representing a discretization of a linear approximation of $\mathcal{M}_{t_{l-1},t_l}$ from $t_{l-1}$ to $t_l$; the vector $b\in\mathbb{R}^{N_p}$ accounting for boundary conditions; the vector
$$u^b:=\{u^b_{l,j}\}_{l=1,\ldots,N-1;\ j=1,\ldots,N_p}\equiv\{u^b(t_l,x_j)\}_{l=1,\ldots,N-1;\ j=1,\ldots,N_p}\in\mathbb{R}^{N_p\cdot(N-1)},$$
representing the solution of $M_{l-1,l}$ at $t_l$ for $l=1,\ldots,N$, that is, the background; the vector
$$v_l\equiv\{v(t_l,y_j)\}_{j=1,\ldots,n_{obs}}\in\mathbb{R}^{n_{obs}},$$
consisting of the observations at $t_l$, for $l=0,\ldots,N-1$; the linear operator
$$H_l\in\mathbb{R}^{n_{obs}\times N_p},\quad l=0,\ldots,N-1,$$
representing a linear approximation of $\mathcal{H}$; the matrix $G\equiv G_{N-1}\in\mathbb{R}^{(N\cdot n_{obs})\times N_p}$ such that
$$G_l=\begin{bmatrix}H_0\\H_1\\\vdots\\H_{l-1}\end{bmatrix},\ l>1;\qquad G_1=H_0,$$
and $R=\mathrm{diag}(R_0,R_1,\ldots,R_{N-1})$ and $Q=VV^T$, the covariance matrices of the errors on observations and background, respectively. We now define the 4D-DA inverse problem.32

Definition 1 (The 4D-DA inverse problem).33 Given the vectors $v=\{v_l\}_{l=0,\ldots,N-1}\in\mathbb{R}^{N\cdot n_{obs}}$, $u_0\in\mathbb{R}^{N_p}$, and the block matrix $G\in\mathbb{R}^{(N\cdot n_{obs})\times N_p}$, a 4D-DA problem concerns the computation of
$$u^{DA}\in\mathbb{R}^{N_p},$$
such that
$$v=G\cdot u^{DA},\qquad (10)$$
subject to the constraint that $u_0^{DA}=u_0$.

We also introduce the regularized 4D-DA inverse problem, that is, the 4D-VAR DA problem.

Definition 2 (The 4D-VAR DA problem). The 4D-VAR DA problem concerns the computation of:
$$u^{DA}=\operatorname*{argmin}_{u\in\mathbb{R}^{N_p}}J(u),\qquad (11)$$
with
$$J(u)=\alpha\left\|u-u^b\right\|^2_{Q^{-1}}+\left\|Gu-v\right\|^2_{R^{-1}},\qquad (12)$$
where $\alpha$ is the regularization parameter.
Remark: It is worth noting that here we are considering a linear approximation of the observation operator, hence a linear operator G, although this is not at all required, at least in the formulation of the 4D-VAR problem. A more general approach to numerically linearize and solve the 4D-VAR DA problem consists in defining a sequence of local approximations of J, where each member of the sequence is minimized by employing Newton's method or one of its variants.34, 35 More precisely, two approaches could be employed:
  • (a)

    by truncating the Taylor series expansion of J at the second order, giving a quadratic approximation of J, say JQN. Newton's methods (including LBFGS and Levenberg–Marquardt) use JQN. The minimum is computed by solving the linear system involving the Hessian matrix ∇2J and the negative gradient −∇J.

  • (b)

    by truncating the Taylor series expansion of J at the first order, giving a linear approximation of J, say JTL. Gauss–Newton's methods (including Truncated or Approximated Gauss–Newton) use JTL. The minimum is computed by solving the normal equations arising from the local Linear Least Squares problem.

Both approaches will employ the tangent linear model and the adjoint operator of the observation mapping and of the model of interest.36

Remark: Computational kernel of variational approaches (namely, 3D-VAR and 4D-VAR) is a linear system, generally solved by means of iterative methods; the iteration matrix is related to matrix Q, which usually has a Gaussian correlation structure.6 Matrix Q can be written in the form Q=VVT, where V is the square root of Q, namely, it is a Gaussian matrix. As a consequence, products of V and a vector z are Gaussian convolutions which can be efficiently computed by applying Gaussian recursive filters as in Reference 37.
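As an illustration of this structure, the following sketch (our example, with hypothetical grid size and correlation length) builds a matrix Q with Gaussian correlation structure and a square root V such that Q = VV^T; the product Vz then plays the role of a discrete Gaussian convolution applied to z:

```python
import numpy as np

n, Lc = 40, 3.0                    # grid size and correlation length (illustrative values)
i = np.arange(n)
# Gaussian correlation structure: Q_ij = exp(-(i - j)^2 / (2 Lc^2))
Q = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * Lc ** 2))
# One possible square root, via the symmetric eigendecomposition
# (Gaussian kernels are numerically near-singular, so eigenvalues are clipped at 0)
s, U = np.linalg.eigh(Q)
V = U @ np.diag(np.sqrt(np.clip(s, 0.0, None))) @ U.T
z = np.random.default_rng(0).standard_normal(n)
w = V @ z                          # a correlated, background-error-like sample
```

In practice the paper's point is that such products Vz need not be formed via dense matrices at all: they can be approximated by Gaussian recursive filters, as in Reference 37.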

In the case study we carry out, we apply KF and DD-KF to the CLS model; in this case, matrix Q is the null matrix while matrix R is diagonal.14

3.1 CLS problem

Let
Let
$$H_0x_0=y_0,\qquad H_0\in\mathbb{R}^{m_0\times n},\ y_0\in\mathbb{R}^{m_0},\ x_0\in\mathbb{R}^n\qquad (13)$$
be the overdetermined linear system (the state), where $\mathrm{rank}(H_0)=n>0$, $m_0>n$.
Given $H_1\in\mathbb{R}^{m_1\times n}$ and $y_1\in\mathbb{R}^{m_1}$ (the observations), we consider the system
$$S:\quad Ax=b\qquad (14)$$
where
$$A=\begin{bmatrix}H_0\\H_1\end{bmatrix}\in\mathbb{R}^{(m_0+m_1)\times n},\qquad b=\begin{bmatrix}y_0\\y_1\end{bmatrix}\in\mathbb{R}^{m_0+m_1},\qquad (15)$$
and $m_1>0$. Let $R_0\in\mathbb{R}^{m_0\times m_0}$, $R_1\in\mathbb{R}^{m_1\times m_1}$ be weight matrices and $R=\mathrm{diag}(R_0,R_1)\in\mathbb{R}^{(m_0+m_1)\times(m_0+m_1)}$.
The CLS problem consists in the computation of $\hat{x}$ such that:
$$CLS:\quad\hat{x}=\operatorname*{argmin}_{x\in\mathbb{R}^n}J(x)\qquad (16)$$
with
$$J(x)=\|Ax-b\|_R^2=\left\|H_0x-y_0\right\|^2_{R_0}+\left\|H_1x-y_1\right\|^2_{R_1},\qquad (17)$$
where $\hat{x}$ is given by
$$(A^TRA)\,\hat{x}=A^TRb\ \Longleftrightarrow\ \hat{x}=(A^TRA)^{-1}A^TRb\qquad (18)$$
or,
$$\hat{x}=\left(H_0^TR_0H_0+H_1^TR_1H_1\right)^{-1}\left(H_0^TR_0y_0+H_1^TR_1y_1\right).\qquad (19)$$

We refer to $\hat{x}$ as the solution, in the least squares sense, of the system in (14).
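As a quick numerical check of the normal equations above, the following sketch (hypothetical sizes and random H0, H1, with noise-free data so the minimizer coincides with the generating state) recovers x̂ from (19):

```python
import numpy as np

rng = np.random.default_rng(1)
m0, m1, n = 8, 5, 3                       # illustrative sizes, m0 > n, m1 > 0
H0 = rng.standard_normal((m0, n))         # state operator; rank n with probability 1
H1 = rng.standard_normal((m1, n))         # observation operator
x_star = np.array([1.0, -2.0, 0.5])       # ground truth used to build noise-free data
y0, y1 = H0 @ x_star, H1 @ x_star
R0, R1 = np.eye(m0), 2.0 * np.eye(m1)     # weight matrices
# Normal equations of the CLS problem
lhs = H0.T @ R0 @ H0 + H1.T @ R1 @ H1
rhs = H0.T @ R0 @ y0 + H1.T @ R1 @ y1
x_hat = np.linalg.solve(lhs, rhs)
```

With noisy data the same formula returns the weighted least squares compromise between the two systems rather than an exact solution.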

Remark: Besides the covariance matrices of the errors, the main components of the KF algorithm are the dynamic model and the observation mapping. These are the two main components of any variational DA operator and state estimation problem, too. In this regard, in the following, as a proof of concept of the DD-DA framework, we start by considering the CLS model as a prototype of a variational DA model at a given time. CLS is obtained by combining two overdetermined linear systems, representing the state and the observation mapping, respectively. Then, we introduce the VAR-KF method as the reference method for solving the CLS model problem. VAR-KF will be decomposed by using the proposed framework. That said, any interested reader who wants to apply the DD-DA framework in a real-world application, that is, with a (PDE-based) model state and an observation mapping, should, once the dynamic (PDE-based) model has been discretized, rewrite the state estimation problem under consideration as a CLS model problem (cfr Section 3.1) and then apply the CLS algorithm. In other words, she/he should follow the discretize-then-optimize approach, common to most DA problems and state estimation problems, before employing the DD-DA and DyDD frameworks.38

4 DD-FRAMEWORK

As DyDD is a refinement of the initial DD-DA, in the following we first give the mathematical setting needed to define the domain decomposition framework; in the next section we then focus on DyDD.

4.1 DD set up

Definition 3 (Matrix Reduction). Let $B=[B_1\,B_2\,\cdots\,B_n]\in\mathbb{R}^{m\times n}$ be a matrix with $m,n\geq1$, $B_j$ the $j$th column of $B$, and let $I_j=\{1,\ldots,j\}$ and $I_{i,j}=\{i,\ldots,j\}$ for $i=1,\ldots,n-1$; $j=2,\ldots,n$, with $i<j$ for every $(i,j)$. The reduction of $B$ to $I_j$ is:

$$\cdot|_{I_j}:B\in\mathbb{R}^{m\times n}\mapsto B|_{I_j}=[B_1\,B_2\,\cdots\,B_j]\in\mathbb{R}^{m\times j},\quad j=2,\ldots,n,$$
and to $I_{i,j}$:
$$\cdot|_{I_{i,j}}:B\in\mathbb{R}^{m\times n}\mapsto B|_{I_{i,j}}=[B_i\,B_{i+1}\,\cdots\,B_j]\in\mathbb{R}^{m\times(j-i+1)},\quad i=1,\ldots,n-1,\ j>i,$$
where $B|_{I_j}$ and $B|_{I_{i,j}}$ denote the reduction of $B$ to $I_j$ and to $I_{i,j}$, respectively.

Definition 4 (Vector Extension). Let $w=[w_t\,w_{t+1}\,\cdots\,w_n]^T\in\mathbb{R}^s$ be a vector with $t\geq1$, $n>0$, $s=n-t$, and let $I_{1,r}=\{1,\ldots,r\}$, with $r>n$ and $n>t$. The extension of $w$ to $I_r$ is:

$$EO_{I_r}:w\in\mathbb{R}^s\mapsto EO_{I_r}(w)=[w_1\,w_2\,\cdots\,w_r]^T\in\mathbb{R}^r,$$
where, for $i=1,\ldots,r$,
$$w_i=\begin{cases}w_i & \text{if } t\leq i\leq n\\ 0 & \text{if } i>n \text{ or } i<t.\end{cases}$$

We introduce reduction of J, as given in (17).

Definition 5 (Model Reduction). Let us consider $A\in\mathbb{R}^{(m_0+m_1)\times n}$ and $b\in\mathbb{R}^{m_0+m_1}$, the matrix and the vector defined in (15), $I_1=\{1,\ldots,n_1\}$, $I_2=\{1,\ldots,n_2\}$ with $n_1,n_2>0$, and the vector $x\in\mathbb{R}^n$. Let

$$J|_{(I_i,I_j)}:\left(x|_{I_i},x|_{I_j}\right)\mapsto J|_{(I_i,I_j)}\left(x|_{I_i},x|_{I_j}\right),\quad i,j=1,2$$
denote the reduction of $J$ defined in (17). It is defined as
$$J|_{(I_i,I_j)}\left(x|_{I_i},x|_{I_j}\right)=\left\|H_0|_{I_i}\,x|_{I_i}-\left(y_0-H_0|_{I_j}\,x|_{I_j}\right)\right\|^2_{R_0}+\left\|H_1|_{I_i}\,x|_{I_i}-\left(y_1-H_1|_{I_j}\,x|_{I_j}\right)\right\|^2_{R_1},\qquad (20)$$
for $i,j=1,2$.

For simplicity of notation we let $J_{i,j}\equiv J|_{(I_i,I_j)}$.
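The reductions of Definitions 3 and 5 amount to column slicing; a minimal sketch (our illustration, mapping the 1-based index sets of the definitions to Python's 0-based slices):

```python
import numpy as np

def reduce_cols(B, j):
    """Reduction B|_{I_j} to I_j = {1, ..., j}: the first j columns."""
    return B[:, :j]

def reduce_cols_range(B, i, j):
    """Reduction B|_{I_{i,j}} to I_{i,j} = {i, ..., j} (1-based, i < j)."""
    return B[:, i - 1:j]

B = np.arange(12.0).reshape(3, 4)   # B in R^{3x4} with columns B_1, ..., B_4
B2 = reduce_cols(B, 2)              # [B_1 B_2], shape (3, 2)
B24 = reduce_cols_range(B, 2, 4)    # [B_2 B_3 B_4], shape (3, 3)
```

The reduced objectives J|_(Ii,Ij) are then built from these column blocks exactly as in (20).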

4.2 DD-CLS problems: DD of CLS model

We apply the DD approach to solve the system S in (14). Here, for simplicity of notation, we consider two subdomains.

Definition 6 (DD-CLS model[12]). Let $S$ be the overdetermined linear system in (14), let $A\in\mathbb{R}^{(m_0+m_1)\times n}$ and $b\in\mathbb{R}^{m_0+m_1}$ be the matrix and the vector defined in (15), and let $R_0\in\mathbb{R}^{m_0\times m_0}$, $R_1\in\mathbb{R}^{m_1\times m_1}$, $R=\mathrm{diag}(R_0,R_1)\in\mathbb{R}^{(m_0+m_1)\times(m_0+m_1)}$ be the weight matrices, with $m_0>n$ and $m_1>0$. Let us consider the index set of the columns of $A$, $I=\{1,\ldots,n\}$. The DD-CLS model consists of:

  • DD step. It consists of the DD of $I$:
    $$I_1=\{1,\ldots,n_1\},\quad I_2=\{n_1-s+1,\ldots,n\},\qquad (21)$$
    where $s\geq0$ is the number of indexes in common, $|I_1|=n_1>0$, $|I_2|=n_2>0$, and the overlap set is
    $$I_{1,2}=\{n_1-s+1,\ldots,n_1\}.\qquad (22)$$
    If $s=0$, then $I$ is decomposed without overlap, that is, $I_1\cap I_2=\emptyset$ and $I_{1,2}=\emptyset$; if instead $s>0$, then $I$ is decomposed with overlap, that is, $I_1\cap I_2\neq\emptyset$ and $I_{1,2}\neq\emptyset$. The restrictions of $A$ to $I_1$ and $I_2$, defined in (21), are
    $$A_1=A|_{I_1}\in\mathbb{R}^{(m_0+m_1)\times n_1},\qquad A_2=A|_{I_2}\in\mathbb{R}^{(m_0+m_1)\times n_2}.\qquad (23)$$
  • DD-CLS step. Given $x_2^0\in\mathbb{R}^{n_2}$, according to the alternating Schwarz method in Reference 11, the DD-CLS approach consists in solving, for $n=0,1,2,\ldots$, the following overdetermined linear systems:
    $$S_1^{n+1}:\ A_1x_1^{n+1}=b-A_2x_2^n;\qquad S_2^{n+1}:\ A_2x_2^{n+1}=b-A_1x_1^{n+1},\qquad (24)$$
    by employing a regularized VAR-KF model. This means that DD-CLS consists of a sequence of two subproblems:
    $$P_1^{n+1}:\ \hat{x}_1^{n+1}=\operatorname*{argmin}_{x_1^{n+1}\in\mathbb{R}^{n_1}}J_1(x_1^{n+1},x_2^n)=\operatorname*{argmin}_{x_1^{n+1}\in\mathbb{R}^{n_1}}\left\{J|_{(I_1,I_2)}(x_1^{n+1},x_2^n)+\mu\cdot\mathcal{O}_{1,2}(x_1^{n+1},x_2^n)\right\}\qquad (25)$$
    $$P_2^{n+1}:\ \hat{x}_2^{n+1}=\operatorname*{argmin}_{x_2^{n+1}\in\mathbb{R}^{n_2}}J_2(x_2^{n+1},x_1^{n+1})=\operatorname*{argmin}_{x_2^{n+1}\in\mathbb{R}^{n_2}}\left\{J|_{(I_2,I_1)}(x_2^{n+1},x_1^{n+1})+\mu\cdot\mathcal{O}_{1,2}(x_2^{n+1},x_1^{n+1})\right\}\qquad (26)$$
    where $I_i$ is defined in (21), $J|_{(I_i,I_j)}$ is defined in (20), $\mathcal{O}_{1,2}$ is the overlapping operator, and $\mu>0$ is the regularization parameter.

Remark 1. If $I$ is decomposed without overlap (i.e., $s=0$), then $\hat{x}_1^{n+1}\in\mathbb{R}^{n_1}$ and $\hat{x}_2^{n+1}\in\mathbb{R}^{n_2}$ can be written in terms of normal equations as follows:

$$\tilde{S}_1^{n+1}:\ (A_1^TRA_1)\,\hat{x}_1^{n+1}=A_1^TR(b-A_2x_2^n)\ \Longleftrightarrow\ \hat{x}_1^{n+1}=(A_1^TRA_1)^{-1}A_1^TRb_1^n$$
$$\tilde{S}_2^{n+1}:\ (A_2^TRA_2)\,\hat{x}_2^{n+1}=A_2^TR(b-A_1x_1^{n+1})\ \Longleftrightarrow\ \hat{x}_2^{n+1}=(A_2^TRA_2)^{-1}A_2^TRb_2^{n+1},\qquad (27)$$
where $b_1^n=b-A_2x_2^n$ and $b_2^{n+1}=b-A_1x_1^{n+1}$.
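The non-overlapping iteration of Remark 1 is a block Gauss–Seidel sweep on the normal equations; a minimal sketch (our example with a hypothetical random A and a consistent right-hand side, so the iterates converge to the global CLS solution):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, n1 = 12, 4, 2                          # illustrative sizes; I1 = first n1 columns
A = rng.standard_normal((m, n))
x_star = rng.standard_normal(n)
b = A @ x_star                               # consistent data, used only for the check
R = np.eye(m)                                # weight matrix
A1, A2 = A[:, :n1], A[:, n1:]                # restrictions A|_{I1}, A|_{I2} (s = 0)
x1, x2 = np.zeros(n1), np.zeros(n - n1)
for _ in range(500):                         # Schwarz sweeps: solve S1, then S2
    x1 = np.linalg.solve(A1.T @ R @ A1, A1.T @ R @ (b - A2 @ x2))
    x2 = np.linalg.solve(A2.T @ R @ A2, A2.T @ R @ (b - A1 @ x1))
x_hat = np.concatenate([x1, x2])
```

Since the normal-equations matrix A^T R A is symmetric positive definite, the block Gauss–Seidel iteration converges; its rate depends on the coupling between the column blocks A1 and A2.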

Remark 2. Regarding the operator $\mathcal{O}_{1,2}$, we consider $x_1\in\mathbb{R}^{n_1}$ and $x_2\in\mathbb{R}^{n_2}$, and we pose

$$\mathcal{O}_{1,2}(x_i,x_j)=\left\|EO_{I_i}\left(x_i|_{I_{1,2}}\right)-EO_{I_i}\left(x_j|_{I_{1,2}}\right)\right\|,\quad i,j=1,2$$
with $EO_{I_i}(x_1|_{I_{1,2}})$ and $EO_{I_i}(x_2|_{I_{1,2}})$ the extensions to $I_i$ of the restrictions to $I_{1,2}$ in (22) of $x_1\in\mathbb{R}^{n_1}$ and $x_2\in\mathbb{R}^{n_2}$, respectively. The operator $\mathcal{O}_{1,2}$ represents the exchange of data on the overlap $I_{1,2}$ in (22).

Remark 3. DD-CLS gives rise to the sequence $\{x^{n+1}\}_{n\geq0}$:

$$x^{n+1}=\begin{cases}\hat{x}_1^{n+1}|_{I_1\setminus I_{1,2}} & \text{on } I_1\setminus I_{1,2}\\[2pt] \frac{\mu}{2}\left(\hat{x}_2^{n+1}|_{I_{1,2}}+\hat{x}_1^{n+1}|_{I_{1,2}}\right) & \text{on } I_{1,2}\\[2pt] \hat{x}_2^{n+1}|_{I_2\setminus I_{1,2}} & \text{on } I_2\setminus I_{1,2},\end{cases}\qquad (28)$$
where $I_1$, $I_2$ are defined in (21) and $I_{1,2}$ in (22).

Remark 4. For the DD-CLS model we considered the DD of $I=\{1,\ldots,n\}$, that is, the index set of the columns of $A$; similarly, we can apply the DD approach to the 2D domain $I\times J$, where $J=\{1,\ldots,(m_0+m_1)\}$ is the index set of the rows of $A$. The subdomains obtained are $I_1\times J_1=\{1,\ldots,n_1\}\times\{1,\ldots,m_1\}$ and $I_2\times J_2=\{n_1-s_I+1,\ldots,n\}\times\{m_1-s_J+1,\ldots,(m_0+m_1)\}$, where $s_I,s_J\geq0$ are the numbers of indexes in common between $I_1$ and $I_2$, and between $J_1$ and $J_2$, respectively. The restrictions of $A$ to $I_1\times J_1$ and $I_2\times J_2$ are $A_1:=A|_{I_1\times J_1}$ and $A_2:=A|_{I_2\times J_2}$.

Remark 5. The cardinality of $J$, that is, the index set of the rows of matrix $A$, represents the number of observations available at the time of the analysis, so that the DD of $I\times J$ allows us to define the DD-CLS model after dynamic load balancing of the observations, by appropriately restricting matrix $A$.

5 DyDD: DYNAMIC DD-DA FRAMEWORK

For effective parallelization of DD-DA, the domain partitioning into subdomains must satisfy certain conditions. First, the computational load assigned to subdomains must be equally distributed. Usually, the computational cost is proportional to the amount of data entities assigned to partitions. Good quality partitioning also requires the volume of communication during calculation to be kept at its minimum.39 We employ a dynamic load balancing scheme based on adaptive and dynamic redefinition of the initial DD-DA, aimed at balancing the workload between processors. The redefinition of the initial partitioning is performed by shifting the boundaries of neighboring domains (this step is referred to as the Migration step).

The DyDD algorithm we implement is described by procedure DyDD shown in Table 13. With the aim of giving a clear and immediate view of the DyDD algorithm, in the following figures (Figures 1-4) we outline the workings of the algorithm on a reference initial DD configuration made of eight subdomains. We assume that at each point of the mesh we have the value of the numerical simulation result (the so-called background), while the circles denote observations. The DyDD framework consists of four steps:
  1. DD step: starting from the initial partition of $\Omega$ provided by the DD-DA framework, DyDD checks the initial partitioning. If a subdomain is empty, DyDD decomposes the adjacent subdomain with maximum load into two subdomains. See Figure 1.
  2. Scheduling step: DyDD computes the amount of observations needed for achieving the average load in each subdomain; this is performed by introducing a diffusion-type algorithm (using the connected graph G associated to the DD) derived by minimizing the Euclidean norm of the cost transfer. The solution of the Laplacian system associated to the graph G gives the amount of data to migrate. See Figure 2.
  3. Migration step: DyDD shifts the boundaries of adjacent subdomains to achieve a balanced workload. See Figure 3.
  4. Update step: DyDD redefines the subdomains such that each one contains the number of observations computed during the scheduling step, and it redistributes the subdomains among the processor grids. See Figure 4.
Figure 1. DyDD framework, Step 1: check of the initial partitioning, identification of subdomains which have no data or suffer from load imbalance, and redefinition of subdomains. The workload of each subdomain after this repartitioning is lr(1) = 5, lr(2) = 4, lr(3) = 6, lr(4) = 2, lr(5) = 5, lr(6) = 3, lr(7) = 5, and lr(8) = 2. The average load is then $\bar{l}=4$.
Figure 2. DyDD framework, Step 2: scheduling. On the right, the graph G associated to the DD of $\Omega$. In brackets, the number lr(i) is displayed.
Figure 3. DyDD framework, Step 3: migration. Redefinition of the boundaries of adjacent subdomains.
Figure 4. DyDD framework, Step 4: update step. Updating of the processor graph. In brackets, the number of observations lf(i) after DyDD is displayed. The workload of each subdomain after DyDD is equal to the average load $\bar{l}=4$.
Scheduling step is the computational kernel of DyDD algorithm. In particular, it requires definition of Laplacian matrix and load imbalance associated to initial DD-DA and its solution. Let us give a brief overview of this computation. Generic element Lij of Laplacian matrix is defined as follows:19
$$L_{ij} = \begin{cases} -1 & i \neq j \ \text{and edge}\ (i,j) \in G, \\ \deg(i) & i = j, \\ 0 & \text{otherwise,} \end{cases}$$
and the load imbalance is b = (l_i − l̄), i = 1, …, p, where deg(i) is the degree of vertex i, and l_i and l̄ denote the number of observations in Ω i and the average workload, respectively. Hence, the more subdomains are adjacent to each other (that is, the more edges G has), the more nonzero elements L has.
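Assembling L from an edge list follows directly from this definition; a minimal sketch (the four-vertex path graph is hypothetical, chosen only for illustration):

```python
import numpy as np

def graph_laplacian(n_vertices, edges):
    """L[i, j] = -1 if (i, j) is an edge of G, deg(i) on the diagonal, 0 elsewhere."""
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, j] = L[j, i] = -1.0
    np.fill_diagonal(L, -L.sum(axis=1))   # diagonal = vertex degrees
    return L

# Path graph 0-1-2-3: two vertices of degree 1, two of degree 2
L = graph_laplacian(4, [(0, 1), (1, 2), (2, 3)])
print(np.diag(L))      # [1. 2. 2. 1.]
print(L.sum(axis=1))   # rows of a graph Laplacian sum to zero
```

Note that the rows of any graph Laplacian sum to zero, which is why the system Lλ = b below is singular and only solvable because the imbalances b_i also sum to zero.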
The Laplacian system Lλ = b, related to the example described above, is the following:
$$L = \begin{pmatrix} 2 & -1 & -1 & 0 & 0 & 0 & 0 & 0 \\ -1 & 3 & -1 & -1 & 0 & 0 & 0 & 0 \\ -1 & -1 & 4 & -1 & -1 & 0 & 0 & 0 \\ 0 & -1 & -1 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 2 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 3 & -1 & -1 \\ 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 \\ 0 & 0 & 0 & 0 & 0 & -1 & -1 & 2 \end{pmatrix}$$
while the right-hand side is the vector whose ith component is the load imbalance computed with respect to the average load, that is, b = (1, 0, 2, −2, 1, −1, 1, −2)ᵀ. In this example, solving the Laplacian system (whose solution is determined up to an additive constant, L being singular) gives

λ ≈ (0.36, −0.25, 0, −1.12, −1, −3, −3, −4)
so that the amount of load (rounded to the nearest integer) which should be migrated from Ω i to Ω j is

δ1,2 = 1; δ1,3 = 0; δ2,4 = 1; δ3,2 = 0; δ3,4 = 1; δ3,5 = 1; δ5,6 = 2; δ6,7 = 0; δ6,8 = 1; δ7,8 = 1,

that is, δi,j is the nearest integer of (λi − λj).
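The whole scheduling computation for this example can be reproduced in a few lines; a sketch (the edge list is read off the Laplacian matrix above, so it is a reconstruction, and a minimum-norm least-squares solver is used since L is singular):

```python
import numpy as np

# Subdomain-adjacency graph G of the 8-subdomain example (vertices 1..8),
# reconstructed from the nonzero pattern of the Laplacian above.
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (3, 5),
         (5, 6), (6, 7), (6, 8), (7, 8)]
loads = np.array([5, 4, 6, 2, 5, 3, 5, 2], dtype=float)  # l_r(i) after step 1
p = len(loads)

# Assemble the Laplacian: -1 on edges, deg(i) on the diagonal
L = np.zeros((p, p))
for i, j in edges:
    L[i - 1, j - 1] = L[j - 1, i - 1] = -1.0
np.fill_diagonal(L, -L.sum(axis=1))

b = loads - loads.mean()                     # load imbalance; mean load is 4
lam, *_ = np.linalg.lstsq(L, b, rcond=None)  # L singular: minimum-norm solution

# delta(i,j) = nearest integer of lam_i - lam_j: observations moved from i to j
for i, j in edges:
    d = round(lam[i - 1] - lam[j - 1])
    loads[i - 1] -= d
    loads[j - 1] += d

print(loads)  # every subdomain ends at the average load 4
```

Because δi,j depends only on differences λi − λj, any solution of the singular system (they differ by a constant vector) yields the same migration plan.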

6 VALIDATION RESULTS

Simulations were aimed at validating the proposed approach by measuring the performance of the DyDD algorithm, whose procedure is outlined in Table 13. Performance evaluation was carried out using the Parallel Computing Toolbox of MATLAB R2013a on the high-performance hybrid computing (HPC) architecture of the Sistema Cooperativo Per Elaborazioni scientifiche multidisciplinari data center, located at the University of Naples Federico II. More precisely, the HPC architecture is made of 8 nodes, consisting of distributed-memory DELL M600 blades connected by 10 Gigabit Ethernet technology. Each blade consists of two quad-core Intel processors sharing 16 GB of RAM, for a total of 8 cores per blade and 64 cores overall. For testing the algorithm we consider up to nsub = 64 subdomains equally distributed among the cores. This is an intranode configuration implementing a coarse-grained parallelization strategy on multiprocessor systems with many-core CPUs.

DyDD set up. We will refer to the following quantities:
• Ω ⊂ ℝ²: spatial domain;
• n = 2048: mesh size;
• m: number of observations;
• p: number of subdomains and of processing units;
• i: identification number of a processing unit, which coincides with that of the associated subdomain;
• deg(i), for i = 1, …, p: degree of i, that is, the number of subdomains adjacent to Ω i;
• iad(i): identification numbers of the subdomains adjacent to Ω i;
• lin(i): number of observations in Ω i before the dynamic load balancing;
• lr(i): number of observations in Ω i after the DD step of the DyDD procedure;
• lfi(i): number of observations in Ω i after the dynamic load balancing;
• T^p_DyDD(m): time (in seconds) needed to perform DyDD on p processing units;
• Tr(m): time (in seconds) needed to perform repartitioning of Ω;
• OhDyDD(m) = Tr(m)/T^p_DyDD(m): overhead due to the dynamic load balancing.

As a measure of the load balance introduced by the DyDD algorithm, we use the ratio

r = min_i(lfi(i)) / max_i(lfi(i)),

that is, the ratio between the minimum and the maximum number of observations over the subdomains Ω 1, …, Ω p after DyDD. Consequently, r = 1 indicates a perfectly balanced system.

Regarding DD-DA, we let nloc := n/p be the local problem size and we consider the following performance metrics:
• T1(m, n): sequential time (in seconds) to perform KF solving the CLS problem;
• T^p_DD-DA(m, nloc): time (in seconds) needed to perform in parallel DD-KF solving the CLS problem after DyDD;
• T^p_oh(m, nloc): overhead time (in seconds) due to synchronization, memory accesses, and communication among the p cores;
• x̂_KF ∈ ℝⁿ: KF estimate obtained by applying the KF procedure to the CLS problem after DyDD;
• x̂_DD-DA ∈ ℝⁿ: estimate obtained by applying DD-KF to the CLS problem after DyDD;
• errorDD-DA := ‖x̂_KF − x̂_DD-DA‖: error introduced by the DD-DA framework;
• S_p(m, nloc) := T1(m, n)/T^p_DD-DA(m, nloc): speed-up of the DD-DA parallel algorithm;
• E_p(m, nloc) := S_p(m, nloc)/p: efficiency of the DD-DA parallel algorithm.
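As a quick sanity check of these definitions, the metrics can be evaluated directly; a sketch (the helper names `metrics` and `balance_ratio` are ours; the input numbers are the p = 2 row of Table 9 and the balanced loads of Example 1):

```python
def metrics(t_seq, t_par, p):
    """Speed-up S_p = T1 / T_p and efficiency E_p = S_p / p."""
    speedup = t_seq / t_par
    return speedup, speedup / p

def balance_ratio(final_loads):
    """r = min_i l_fi(i) / max_i l_fi(i); r == 1 means perfect balance."""
    return min(final_loads) / max(final_loads)

s, e = metrics(5.67, 4.95, 2)        # p = 2 row of Table 9
print(round(s, 2), round(e, 3))      # 1.15 0.573
print(balance_ratio([750, 750]))     # 1.0 (Example 1 after DyDD)
```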

In the following tables we report results obtained in three scenarios, each progressively more complex than the previous one: either the number of subdomains adjacent to each subdomain increases, or the numbers of observations and of subdomains increase. In this way, the required workload redistribution grows.

Example 1. First configuration: p = 2 subdomains and m = 1500 observations. In Case 1, both Ω 1 and Ω 2 have data, that is, observations, but they are unbalanced. In Case 2, Ω 1 has observations and Ω 2 is empty. In Tables 1 and 2, respectively, we report the values of the parameters after applying the DyDD algorithm. This is the simplest configuration we consider, introduced just to validate the DyDD framework. In both cases, lfi(1) and lfi(2), that is, the numbers of observations of Ω 1 and Ω 2, equal the average load l̄ = 750, and r = 1. As the workload redistribution of Case 1 and Case 2 is the same, the DD-DA performance results of the two cases coincide, and they are reported in Table 9 for p = 2 only. In Table 3, we report the performance results of the DyDD algorithm.

TABLE 1. Example 1. DyDD parameters in Case 1
p i deg(i) lin lfin iad
2 1 1 1000 750 2
2 1 500 750 1
  • Note: Both subdomains have data but they are unbalanced. We report values of p, which is the number of subdomains, i the identification number of processing unit, deg(i) degree of i, that is, number of subdomains adjacent to Ω i , lin(i) which is number of observations in Ω i before dynamic load balancing, lfi(i) number of observations in Ω i after dynamic load balancing, iad identification of subdomains adjacent to Ω i .
TABLE 2. Example 1. DyDD parameters in Case 2
p i deg(i) lin lr lfin iad
2 1 1 1500 1000 750 2
2 1 0 500 750 1
  • Note: Ω 2 is empty. We report values of p, that is, number of subdomains, i identification number of processing unit, deg(i) degree of i, that is, number of subdomains adjacent to Ω i , lin(i) which is number of observations in Ω i before dynamic load balancing, lr(i) number of observations in Ω i after DD step of DyDD procedure, lfi(i) number of observations in Ω i after dynamic load balancing, iad which is identification of subdomains which are adjacent to Ω i .
TABLE 3. Example 1. Execution times: We report values of T^p_DyDD(m), the time (in seconds) needed to perform DyDD on p processing units; Tr(m), the time (in seconds) needed to perform repartitioning of Ω; OhDyDD(m), the overhead due to dynamic load balancing; and r, measuring the load balance
Case T^p_DyDD(m) Tr(m) OhDyDD(m) r
1 4.11 × 10⁻² 0 0 1
2 3.49 × 10⁻² 4.00 × 10⁻⁶ 1.15 × 10⁻⁴ 1

Example 2. Second configuration: p = 4 subdomains and m = 1500 observations, with four cases in which the number of subdomains without observations increases from 0 up to 3. In Case 1, all subdomains have observations (see Table 4). In Case 2, one subdomain is empty, namely Ω 2 (see Table 5). In Case 3, two subdomains are empty, namely Ω 1 and Ω 2 (see Table 6). In Case 4, three subdomains are empty, namely Ω j for j = 1, 2, 3 (see Table 7). In all cases, r reaches the ideal value 1 and lfi(i) = l̄ = 375, i = 1, 2, 3, 4. Hence, the DD-DA performance results of all cases are the same, and they are reported in Table 9 for p = 4. In Table 8, we report the performance results of the four cases.
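The rebalancing in this example can be mimicked with a toy one-dimensional sketch (a hypothetical simplification, not the actual DyDD implementation: the domain is an interval, and the boundary-shifting steps are collapsed into moving the cut points to empirical quantiles):

```python
import numpy as np

# Toy 1-D analogue of Example 2: p = 4 subintervals of [0, 1) holding
# m = 1500 nonuniformly distributed observations.
rng = np.random.default_rng(0)
obs = np.sort(rng.beta(2, 5, size=1500))   # skewed observation locations

p = 4
cuts = np.linspace(0, 1, p + 1)            # initial uniform DD
l_in = np.histogram(obs, bins=cuts)[0]     # unbalanced counts l_in(i)

# Boundary shifts collapsed into one move: place the cut points at the
# empirical quantiles, so each subdomain reaches the average load.
cuts_bal = np.quantile(obs, np.linspace(0, 1, p + 1))
l_fi = np.histogram(obs, bins=cuts_bal)[0]

print(l_in)   # unbalanced counts, summing to 1500
print(l_fi)   # 375 observations per subdomain
```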

TABLE 4. Example 2. DyDD parameters in Case 1
p i deg(i) lin lfin iad
4 1 2 150 375 [ 2 4 ]
2 2 300 375 [ 3 1 ]
3 2 450 375 [ 4 2 ]
4 2 600 375 [ 3 1 ]
  • Note: All subdomains have data. We report values of p, which is the number of subdomains, i identification number of processing unit, deg(i) degree of i, that is, number of subdomains adjacent to Ω i , lin(i) the number of observations in Ω i before dynamic load balancing, lfi(i) the number of observations in Ω i after dynamic load balancing, iad identification of subdomains which are adjacent to Ω i .
TABLE 5. Example 2. DyDD parameters in Case 2
p i deg(i) lin lr lfin iad
4 1 2 450 450 375 [ 2 4 ]
2 2 0 225 375 [ 3 1 ]
3 2 450 225 375 [ 4 2 ]
4 2 600 600 375 [ 3 1 ]
  • Note: Ω 2 is empty. We report values of p, which is number of subdomains, i, that is, identification number of processing unit, deg(i), that is, degree of i, that is, number of subdomains which are adjacent to Ω i , lin(i), that is, number of observations in Ω i before dynamic load balancing, lfi(i) number of observations in Ω i after dynamic load balancing, iad identification of subdomains adjacent to Ω i .
TABLE 6. Example 2. DyDD parameters in Case 3
p i deg(i) lin lr lfin iad
4 1 2 0 300 375 [ 2 4 ]
2 2 0 450 375 [ 3 1 ]
3 2 900 450 375 [ 4 2 ]
4 2 300 600 375 [ 3 1 ]
  • Note: Ω 1 and Ω 2 are empty. We report values of p, which is the number of subdomains, i identification number of processing unit, deg(i), that is, degree of i, that is, number of subdomains adjacent to Ω i , lin(i) number of observations in Ω i before dynamic load balancing, lfi(i) number of observations in Ω i after dynamic load balancing, iad identification of subdomains which are adjacent to Ω i .
TABLE 7. Example 2. DyDD parameters in Case 4
p i deg(i) lin lr lfin iad
4 1 2 0 500 375 [ 2 4 ]
2 2 0 250 375 [ 3 1 ]
3 2 0 250 375 [ 4 2 ]
4 2 1500 500 375 [ 3 1 ]
  • Note: Ω 1 , Ω 2 , and Ω 3 are empty. We report values of p, that is, number of subdomains, i identification number of processing unit, deg(i) degree of i, that is, number of subdomains which are adjacent to Ω i , lin(i) the number of observations in Ω i before dynamic load balancing, lfi(i) number of observations in Ω i after dynamic load balancing and iad identification of subdomains which are adjacent to Ω i .
TABLE 8. Example 2. Execution times: We report values of T^p_DyDD(m), the time (in seconds) needed to perform the DyDD algorithm on p processing units; Tr(m), the time (in seconds) needed to perform repartitioning of Ω; OhDyDD(m), the overhead due to dynamic load balancing; and r, the load balance parameter
Case T^p_DyDD(m) Tr(m) OhDyDD(m) r
1 5.40 × 10⁻² 0 0 1
2 5.84 × 10⁻² 2.35 × 10⁻⁴ 0.4 × 10⁻² 1
3 4.98 × 10⁻² 3.92 × 10⁻⁴ 0.8 × 10⁻² 1
4 4.63 × 10⁻² 5.78 × 10⁻⁴ 0.1 × 10⁻¹ 1
TABLE 9. Examples 1–2: DD-DA performance results in Examples 1 and 2
p = 1: n = 2048, m = 1500, T1(m, n) = 5.67 × 10⁰
p nloc T^p_DD-DA(m, nloc) S_p(m, nloc) E_p(m, nloc)
2 1024 4.95 × 10⁰ 1.15 × 10⁰ 5.73 × 10⁻¹
4 512 2.48 × 10⁰ 2.29 × 10⁰ 5.72 × 10⁻¹
  • Note: We report values of p, which is the number of subdomains, n mesh size, nloc, that is, local problem size, m number of observations, T 1 m , n sequential time (in seconds) to perform KF solving CLS problem, T DD-DA p m , n loc time (in seconds) needed to perform in parallel DD-DA solving CLS problem with DyDD, S p m , n loc and E p m , n loc the speed-up and efficiency of DD-DA parallel algorithm, respectively. We applied DyDD to all cases of Example 1 and Example 2 and obtained the perfect load balance, as reported in Tables 3 and 8, respectively. As the workload distribution is the same, DD-DA performance results are the same in all cases of Example 1, then we show results for p = 2, only. In the same way, for all cases of Example 2, we show results for p = 4, only.

Example 3. We consider m = 1032 observations and a number of subdomains p equal to p = 2, 4, 8, 16, 32. We assume that every subdomain Ω i has observations, that is, lin(i) ≠ 0 for i = 1, …, p; Ω 1 has p − 1 adjacent subdomains, that is, nad := deg(1) = p − 1; each Ω i , for i = 2, …, p, has one adjacent subdomain, that is, deg(i) = 1; finally, we let lmax = maxi(lfi(i)) and lmin = mini(lfi(i)) denote the maximum and minimum number of observations over the subdomains. Table 10 shows the performance results and Figure 5 reports the error of DD-DA with respect to KF.

TABLE 10. Example 3. Execution times: We report values of p, the number of subdomains; nad, the number of subdomains adjacent to Ω 1; T^p_DyDD(m), the time (in seconds) needed to perform DyDD on p processing units; lmax and lmin, the maximum and minimum number of observations over the subdomains after DyDD, respectively; and r, which measures the load balance
p nad T^p_DyDD(m) lmax lmin r
2 1 6.20 × 10⁻³ 516 515 9.98 × 10⁻¹
4 3 2.60 × 10⁻² 258 257 9.96 × 10⁻¹
8 7 9.29 × 10⁻² 129 128 9.92 × 10⁻¹
16 15 1.11 × 10⁻¹ 71 64 8.88 × 10⁻¹
32 31 1.36 × 10⁻¹ 39 32 8.21 × 10⁻¹
  • Note: r depends on nad (≡ deg(1)): as nad increases (and consequently p increases), r decreases. For i = 1, …, p, subdomain Ω i has observations, that is, lin(i) ≠ 0; consequently, repartitioning of Ω is not needed and Tr(m) ≡ 0.
FIGURE 5. Examples 3 (left) and 4 (right). We report values of errorDD-DA versus p

Example 4. We consider m = 2000 observations and p = 2, 4, 8, 16, 32 subdomains. We assume that every Ω i has observations, that is, lin(i) ≠ 0 for i = 1, …, p; Ω 1 and Ω p have one adjacent subdomain each, that is, deg(1) = deg(p) = 1; each interior subdomain Ω i , for i = 2, …, p − 1, has two adjacent subdomains, that is, deg(i) = 2. In Table 12, we report the performance results, and in Figure 5 the error of DD-DA with respect to KF is shown.

Finally, regarding the accuracy of the DD-DA framework with respect to the computed solution, in Table 11 (Examples 1–2) and in Figure 5 (Examples 3–4) we report values of errorDD-DA. We observe that its order of magnitude is about 10⁻¹¹; consequently, we may say that the accuracy of the local solutions of DD-DA, and hence of the local KF estimates, is not impaired by the DD approach.

TABLE 11. Examples 1–2. We report values of errorDD-DA, the error introduced by the DD-DA framework, in Example 1 (with p = 2) and Example 2 (with p = 4)
p errorDD-DA
2 8.16 × 10⁻¹¹
4 8.82 × 10⁻¹¹

From these experiments, we observe that as the number of adjacent subdomains increases, the data communications required by the workload repartitioning among subdomains increase too. Accordingly, the overhead due to load balancing increases (see, for instance, Table 8). As expected, the impact of such overhead on the performance of the whole algorithm strongly depends on the problem size and on the number of available computing elements. Indeed, in Case 1 of Examples 1 and 2, when p is small in relation to nloc (see Table 9), this aspect is quite evident. In Example 4, instead, as p increases up to 32 and nloc decreases, the overhead does not affect the performance results (see Table 12). In conclusion, we recognize a trade-off between the overhead due to the workload repartitioning and the subsequent parallel computation (Table 13).

TABLE 12. Example 4. Performance results of DyDD framework: We report values of p, the number of subdomains; n, the mesh size; nloc, the local problem size; m, the number of observations; T1(m, n), the sequential time (in seconds) needed to perform KF; T^p_DyDD(m), the time (in seconds) needed to perform DyDD on p processing units; T^p_DD-DA(m, nloc), the time (in seconds) needed to perform DD-DA in parallel with DyDD; S_p(m, nloc) and E_p(m, nloc), the speed-up and efficiency of the DD-DA parallel algorithm, respectively
n = 2048, m = 2000, T1(m, n) = 4.88 × 10⁰
p nloc T^p_DyDD(m) T^p_DD-DA(m, nloc) S_p(m, nloc) E_p(m, nloc)
2 1024 4.10 × 10⁻³ 4.71 × 10⁰ 1.04 × 10⁰ 5.18 × 10⁻¹
4 512 4.29 × 10⁻² 2.61 × 10⁰ 1.87 × 10⁰ 4.67 × 10⁻¹
8 256 1.07 × 10⁻¹ 8.43 × 10⁻¹ 5.79 × 10⁰ 6.72 × 10⁻¹
16 128 1.42 × 10⁻¹ 3.46 × 10⁻¹ 1.41 × 10¹ 8.81 × 10⁻¹
32 64 3.49 × 10⁻¹ 1.66 × 10⁻¹ 2.94 × 10¹ 9.19 × 10⁻¹
TABLE 13. Procedure DyDD
Procedure DyDD-Dynamic Load Balancing(in: p, Ω, out: l1, …, lp)
% Procedure DyDD balances observations between adjacent subdomains.
% Domain Ω is decomposed into p subdomains, some of which may be empty.
% The DyDD procedure consists of a DD step, a Scheduling step, a Migration step, and an Update step.
% The DD step partitions Ω into subdomains and, if some subdomains have no observations,
% partitions the adjacent subdomains with maximum load in two subdomains and redefines the subdomains.
% The Scheduling step computes the amount of observations needed for shifting the boundaries of neighboring subdomains.
% The Migration step decides which subdomains should be reconfigured to achieve a balanced load.
% Finally, the Update step redefines the DD.
Begin DD step
% DD step partitions Ω into (Ω1, Ω2, …, Ωp)
Define ni: the number of subdomains adjacent to Ωi
Define li: the amount of observations in Ωi
repeat
% identification of Ωm, the adjacent subdomain of Ωi with the maximum load
Compute lm = max_{j=1,…,ni}(lj): the maximum amount of observations
Decompose Ωm in two subdomains: Ωm → (Ωm1, Ωm2)
until (li ≠ 0, for i = 1, …, p)
end DD step
Begin Scheduling step
Define G: the graph associated with the initial partition; vertex i corresponds to Ωi
Distribute the amount of observations li on Ωi
Define deg(i) = ni: the degree of node i of G
repeat
Compute the average load: l̄ = (Σ_{i=1}^{p} li)/p
Compute the load imbalance: b = (li − l̄), i = 1, …, p
Compute L: the Laplacian matrix of G
Call solve(in: L, b, out: λ) % algorithm solving the linear system Lλ = b
Compute δi,j: the load increment between the adjacent subdomains Ωi and Ωj; δi,j is the nearest integer of (λi − λj)
Define nsi, nri: the numbers of subdomains Ωi sends load to and receives load from, respectively
Update graph G
Update the amount of observations of Ωi: li = li − Σ_{j=1}^{nsi} δi,j + Σ_{j=1}^{nri} δj,i
until (max_i |li − l̄| ≤ deg(i)/2) % that is, the maximum load difference is at most deg(i)/2
end Scheduling step
Begin Migration step
Shift the boundaries of adjacent subdomains in order to achieve a balanced load
end Migration step
Begin Update step
Update the DD of Ω
end Update step
end Procedure DyDD
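The DD step above can also be sketched in a few runnable lines (an abstraction under stated assumptions: subdomain geometry is dropped, and `dd_step`, with its even split of the most-loaded neighbour, is a hypothetical simplification of the boundary redefinition):

```python
# Sketch of the DD step of Procedure DyDD: while a subdomain is empty,
# split its most-loaded neighbour and hand one half of the observations
# to the empty subdomain. Only loads and adjacency are tracked.
def dd_step(loads, adj):
    """loads[i]: observations in subdomain i; adj[i]: neighbours of i."""
    loads = list(loads)
    while min(loads) == 0:
        i = loads.index(0)                        # an empty subdomain
        m = max(adj[i], key=lambda j: loads[j])   # most-loaded neighbour
        if loads[m] < 2:
            raise ValueError("not enough data among adjacent subdomains")
        half = loads[m] // 2
        loads[i] = loads[m] - half                # empty side takes one half
        loads[m] = half
    return loads

# Case 2 of Example 1: Omega_2 empty, all m = 1500 observations in Omega_1
print(dd_step([1500, 0], {0: [1], 1: [0]}))   # [750, 750]
```

On the cyclic four-subdomain adjacency of Example 2 (iad as in Tables 4-7), `dd_step([0, 0, 0, 1500], ...)` already drives every subdomain to the average load 375, so the subsequent scheduling step has nothing left to migrate.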

7 CONCLUSIONS

For effective DD-DA parallelization, partitioning into subdomains must satisfy certain conditions. First, the computational load assigned to subdomains must be equally distributed; usually the computational cost is proportional to the amount of data entities assigned to partitions. Good quality partitioning also requires the volume of communication during calculation to be kept at its minimum. In the present work we employed a dynamic load balancing scheme, which we call DyDD, based on an adaptive and dynamic redefining of the initial DD-DA, aimed at balancing the load among processors according to data location. A mechanism for dynamically balancing the loads encountered in particular DD configurations has been included in the parallel framework we implemented for solving large-scale DA models. In particular, we focused on the introduction of a dynamic redefining of the initial DD-DA in order to deal with problems where the observations are nonuniformly distributed and generally sparse: a quite common issue in DA. We presented first results obtained by applying DyDD, in space, to CLS problems using different scenarios for the initial DD-DA. Performance results confirm the effectiveness of the algorithm. We observed that the impact of the data communications required by the workload repartitioning among subdomains on the performance of the whole algorithm depends on the problem size and on the number of available computing elements. As expected, we recognized a trade-off between the overhead due to the workload repartitioning and the subsequent parallel computation. As the number and the distribution of observations change within the assimilation window, the difficulty to overcome is implementing a load-balancing algorithm that dynamically allows each subdomain to evolve independently in time, that is, to balance observations with neighboring subdomains at each time instant. We are working on extending the DyDD framework to such configurations.

ACKNOWLEDGEMENTS

The authors thank the reviewers for their valuable comments.

    Biographies

    • Luisa D'Amore Luisa D'Amore holds a degree in Mathematics and a Ph.D. in Applied Mathematics and Computer Science. She is a professor of Numerical Analysis at the University of Naples Federico II. She is a member of the Academic Board of the Ph.D. in Mathematics and Applications at the University of Naples Federico II, where she teaches courses in Numerical Analysis, Scientific Computing, and Parallel Computing. Her research activity lies in the context of Scientific Computing. Her main interest is the design of effective numerical algorithms for solving ill-posed inverse problems arising in applications such as image analysis, medical imaging, astronomy, digital restoration of films, and data assimilation. The need to compute the numerical solution within a suitable time requires the use of advanced computing architectures, which involves the design and development of parallel algorithms and software capable of exploiting the high performance of emerging computing infrastructures. Her research has produced a total of about 200 publications in refereed journals and conference proceedings.

    • Rosalba Cacciapuoti Rosalba Cacciapuoti received her degree in Mathematics at the University of Naples Federico II. She is a Ph.D. student in Mathematics and Applications at the University of Naples Federico II. Her research activity focuses on the design of parallel algorithms for solving data assimilation problems.
