Volume 33, Issue 12 pp. 6734-6753
RESEARCH ARTICLE
Open Access

Design of input assignment and feedback gain for re-stabilizing undirected networks with High-Dimension Low-Sample-Size data

Hitoshi Yasukata
Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan

Xun Shen
Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan

Hampei Sasahara (Corresponding Author)
Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan
Correspondence: Hampei Sasahara, Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, 2-12-1 Ookayama, Tokyo, 152-8522, Japan. Email: [email protected]

Jun-ichi Imura
Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan

Makito Oku
Institute of Natural Medicine, University of Toyama, Toyama, Japan

Kazuyuki Aihara
International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
First published: 25 April 2023

Abstract

A complex dynamical system typically undergoes a critical transition before dramatic deterioration. Recently, a method to predict such shifts based on High-Dimension Low-Sample-Size (HDLSS) data has been developed. Based on such a prediction, it is important to make the system more stable by feedback control just before the critical transition occurs, which we call re-stabilization. However, re-stabilization cannot be achieved by traditional stabilization methods such as the pole placement method, because the available HDLSS data is not sufficient to obtain a mathematical system model by system identification. In this article, a model-free pole placement method for re-stabilization is proposed that designs the optimal input assignment and feedback gain for undirected network systems using only HDLSS data. The proposed method is validated by numerical simulations.

1 INTRODUCTION

Abrupt critical transitions precede system deterioration in various complex network systems such as ecological systems, climate systems, power systems, and biological systems.1 In recent years, critical transition detection methods have been increasingly studied.2-5 Chen et al.5 proposed a method, called Dynamical Network Biomarker (DNB) theory, for predicting a critical transition from a healthy stage to a disease stage of a dynamical network in a living organism using only very few samples, referred to as High-Dimension Low-Sample-Size (HDLSS) data. In DNB theory, the progression of a disease is modeled as a parameter shift of a nonlinear dynamical system and the critical transition is regarded as a bifurcation, but no information about the system model is required. In a situation where a bifurcation is about to occur in the biological network system, the states assigned to the nodes that constitute a particular subnetwork, called DNB nodes, show large fluctuations. The fluctuations of the DNB nodes can be captured with only HDLSS data, so we can detect the stage just before the bifurcation occurs. DNB theory has been applied to various fields other than systems biology to predict deterioration of a network system, where it is called Dynamical Network Marker (DNM) theory in a general sense.6-9

The realization of critical transition detection stimulates research on methods to avoid the critical transition once it is detected. From a control theoretical viewpoint, preventing a network system in a pre-deteriorating stage, that is, the stage just before the critical transition occurs, from falling into a deteriorating stage is regarded as designing a controller that improves the stability degree by pole placement, which we call re-stabilization. If the mathematical model of the system is available, or can be identified from measured data, then conventional control methods can be employed to enhance the stability degree of the system. However, it is difficult to obtain the entire system model of a large-scale complex network system. Moreover, the available data is sometimes HDLSS, which is not enough to identify the dynamics.10, 11 For example, in the case of gene network systems, the dimension of microarray data of gene expression is in the tens of thousands, while the sample size is around a dozen at most.12

A few studies on re-stabilization can be found. A data-driven method to find the input assignment with the maximal controllability Gramian has been proposed,13 which relates to minimal-energy control. However, that method needs abundant data whose sample size is equal to the dimension of the state variable, so it cannot be applied to the case of HDLSS data. In a very recent study, an approximate pole placement method for gene network systems with HDLSS data has been proposed,14 based on Brauer's theorem.15 Although the method is suitable for HDLSS data, no theoretical analysis has been presented for an exact characterization of the solution of the pole placement problem. Also, a single-input assignment design for undirected network systems with HDLSS data has been proposed,16 where the proposed input assignment is the optimal solution for minimizing the input energy. That work also gives an approximate control method; however, no theoretical analysis of the approximation has been developed.

In this article, for undirected network systems, we propose a model-free method that designs the optimal feedback gain and the optimal input assignment for feedback control that makes the system more stable in the pre-deteriorating stage, which we call re-stabilization. The design adopts two different criteria: minimizing the input energy and minimizing the Frobenius norm of the feedback gain to be designed. Minimizing the input energy is a typical criterion in control problems, since the control energy corresponds to the effort needed to control a network system.17 Minimizing the Frobenius norm of the feedback gain is also significant in the case of multi-input control, because small gains are beneficial for reducing energy consumption and noise amplification and for improving the transient response; the Frobenius norm is often used as a measure of the size of the gain.18-20 For each criterion, we show that the optimal feedback gain and the optimal input assignment, which shift the dominant eigenvalue of the Jacobian matrix away from the imaginary axis, can be designed using only information derived from the HDLSS data of the state, provided the system is in the pre-deteriorating stage. Furthermore, we prove that the solution for minimizing the input energy and the solution for minimizing the Frobenius norm of the feedback gain are identical, that is, the two problems can be solved in a unified manner. We also propose a practical design method in which the optimal feedback gain is approximated by a sparse one, with an analytic evaluation of the error caused by the approximation. In addition, we provide a specific algorithm for designing the input assignment and feedback gain using HDLSS time-series data of the system.

The rest of the article is organized as follows. In Section 2, we give the system description, a brief review of pre-deteriorating stage detection with HDLSS data, and the design criterion with the formal assumptions. Section 3 presents the proposed methods for minimizing the input energy and for minimizing the Frobenius norm of the feedback gain, the error evaluation of the practical design method, and the design algorithm. In Section 4, we show that the proposed method is effective through numerical simulations. Finally, we conclude the article in Section 5.

2 PROBLEM DESCRIPTION

2.1 System description

Figure 1A shows the progression of the deterioration of a complex network system. For example, it is known that deterioration of gene regulatory networks may cause organisms to fall into a disease stage.5 The progression can be divided into three stages: a normal stage, a pre-deteriorating stage, and a deteriorating stage. The deterioration of the network system of interest is not smooth but abrupt, that is, there exists a tipping point where a sudden shift from the normal stage to the deteriorating stage occurs. We define a pre-deteriorating stage as the limit of the normal stage immediately before the tipping point is reached. The process from the normal stage to the pre-deteriorating stage is reversible, while the process from the pre-deteriorating stage to the deteriorating stage is irreversible or very difficult to reverse. Mathematically, the deterioration progression of a system is regarded as a slow change of a bifurcation parameter of the system.5 The dynamical evolution of the system for some value of a bifurcation parameter $\mu$ is expressed as follows:
$$ \dot{z}(t)=f\left(z(t);\mu \right)+\omega (t), $$ (1)
where $z(t)\in \mathbb{R}^n$ denotes the state of the system, $f:\mathbb{R}^n\to \mathbb{R}^n$ is a nonlinear function, and $\omega(t)\in \mathbb{R}^n$ is white Gaussian noise with zero mean and covariance matrix $D\in \mathbb{R}^{n\times n}$. Let $z_e(\mu)$ be an equilibrium state of the noiseless autonomous system $\dot{z}=f(z;\mu)$, and assume that $f$ can be linearly approximated in the neighborhood of $z=z_e(\mu)$. Then, the linear approximation of the system around $z_e(\mu)$ is written as
$$ \dot{x}(t)= Ax(t)+\omega (t), $$ (2)
where $x(t)=z(t)-z_e(\mu)$ and $A\in \mathbb{R}^{n\times n}$ is the Jacobian matrix $\left.\frac{\partial f\left(z;\mu \right)}{\partial z}\right|_{z={z}_e\left(\mu \right)}$, which depends on the value of $\mu$. The eigenvalue of $A$ with the largest real part, called the dominant eigenvalue, determines the stability degree of the system. Let $\mathcal{M}_{\mathrm{n}}$ be an open set of the bifurcation parameter $\mu$ such that the system is in a normal stage. When $\mu=\mu_{\mathrm{n}}\in \mathcal{M}_{\mathrm{n}}$, the system is in a normal stage and the real part of the dominant eigenvalue of $A$ is negative and relatively far from zero. Thus, the equilibrium point $z_e(\mu_{\mathrm{n}})$ is stable and the fluctuation of the state caused by external noise is relatively small (see Figure 1B). Let $\partial \mathcal{M}_{\mathrm{n}}$ be the boundary of $\mathcal{M}_{\mathrm{n}}$, and let $\mu_{\mathrm{pre}}$ denote a bifurcation parameter such that $\mu_{\mathrm{pre}}\in \mathcal{M}_{\mathrm{n}}$ and $\mu_{\mathrm{pre}}\approx \mu_{\mathrm{crit}}\in \partial \mathcal{M}_{\mathrm{n}}$. When $\mu$ reaches $\mu_{\mathrm{pre}}$, the system is in a pre-deteriorating stage and the real part of the dominant eigenvalue of $A$ is very close to zero. Thus, $z_e(\mu_{\mathrm{pre}})$ is still stable, but the system is sensitive to external noise, that is, the fluctuation becomes large (see Figure 1C). Although the equilibrium point $z_e(\mu)$ moves as the bifurcation parameter $\mu$ changes, the change of $z_e(\mu)$ is continuous and relatively small while $\mu\in \mathcal{M}_{\mathrm{n}}$. On the other hand, when $\mu$ reaches $\mu_{\mathrm{crit}}\in \partial \mathcal{M}_{\mathrm{n}}$ and leaves $\partial \mathcal{M}_{\mathrm{n}}\cup \mathcal{M}_{\mathrm{n}}$, a critical transition occurs and the state can change abruptly to another equilibrium point. The system then enters a deteriorating stage, where the equilibrium point is far from $z_e(\mu_{\mathrm{n}})$ and $z_e(\mu_{\mathrm{pre}})$ (see Figure 1D).
Figure 1. Schematic illustration of the dynamical features of deterioration progression. (A) The progression from a normal stage to a deteriorating stage. (B) The potential function and the poles of the system in a normal stage: a steady state where the system is robust and resilient. (C) The potential function and the poles of the system in a pre-deteriorating stage, situated just before the bifurcation, where the system shows large fluctuations due to noise. (D) The potential function and the poles of the system in a deteriorating stage: a stable state where the system is robust and resilient, similarly to a normal stage.
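The growth of fluctuations near criticality illustrated in Figure 1B,C is easy to reproduce numerically. The following is a minimal sketch (our own illustration, not taken from the article; the eigenvalues and noise level are arbitrary) that simulates the linearized dynamics (2) with the Euler-Maruyama scheme for a small symmetric $A$ whose dominant eigenvalue $\lambda_1$ is moved toward zero; the maximal node-wise standard deviation grows sharply as $\lambda_1\to 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, T = 4, 1e-3, 50.0

def simulate_max_std(lam1):
    # Random symmetric A = V diag(lam1, -1, -1.2, -1.5) V^T.
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = V @ np.diag([lam1, -1.0, -1.2, -1.5]) @ V.T
    x = np.zeros(n)
    xs = []
    for _ in range(int(T / dt)):
        # Euler-Maruyama step of dx = A x dt + dW (unit noise covariance).
        x = x + A @ x * dt + np.sqrt(dt) * rng.standard_normal(n)
        xs.append(x)
    return np.asarray(xs).std(axis=0).max()

for lam1 in (-1.0, -0.1, -0.01):
    print(f"lambda_1 = {lam1:6.2f}: max node std = {simulate_max_std(lam1):.2f}")
```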

2.2 Pre-deteriorating stage detection with HDLSS data

A pre-deteriorating stage detection method has been proposed.5, 8, 9 Based on the theoretical analysis presented by Chen et al.,5 a system in a pre-deteriorating stage shows some generic properties: there exists a group of nodes whose standard deviations drastically increase, and the covariance between every pair of nodes within the group also drastically increases in absolute value. This group represents the dynamical features of the network system, and the nodes in the group are expected to form a subnetwork. This subnetwork is regarded as a DNM, and a node in the subnetwork is called a DNM node. The existence of DNM nodes implies that the system is in the pre-deteriorating stage.

We now provide the details of the theoretical analysis of the system properties in a pre-deteriorating stage, which is based on the Lyapunov equation. If a network system is in a pre-deteriorating stage, the following assumption holds:

Assumption 1. The system matrix $A\in \mathbb{R}^{n\times n}$ is stable and the dominant eigenvalue $\lambda_1\in \mathbb{C}$ satisfies $\operatorname{Re}(\lambda_1)\approx 0$.

Here, $\mathbb{C}$ denotes the set of complex numbers. In addition, this article focuses on undirected networks, that is, the system matrix $A$ is symmetric and all its eigenvalues are real, which is described by the following technical assumption:

Assumption 2. The system matrix $A$ is symmetric and diagonalizable, and its eigenvalues $\lambda_i\ (i=1,\dots,n)$ satisfy $\lambda_n<\lambda_{n-1}<\cdots<\lambda_2<\lambda_1$.

Since $A\in \mathbb{R}^{n\times n}$ is symmetric and diagonalizable, there exists an orthogonal matrix $V\in \mathbb{R}^{n\times n}$ that satisfies $A=V\Lambda V^T$, where $\Lambda=\operatorname{diag}\{\lambda_1,\lambda_2,\dots,\lambda_n\}$ and the column vectors of $V$ are normalized. Note that the column vectors of $V$ are the eigenvectors of $A$. Let $C\in \mathbb{R}^{n\times n}$ be the covariance matrix of $x$ at steady state. Then, for the elements of $C$, if $V(i,1)\ne 0$ and $V(j,1)\ne 0$, the following holds:21
$$ \underset{\lambda_1\to -0}{\lim}\mid C\left(i,j\right)\mid =\infty, $$ (3)
where $\lambda_1\to -0$ denotes the one-sided limit of $\lambda_1$ from below. Equation (3) implies the generic properties of a gene network system: if $V(i,1)\ne 0$, the standard deviation of node $i$ is extremely large in the pre-deteriorating stage, and if, in addition, $V(j,1)\ne 0$, the covariance of nodes $i$ and $j$ is extremely large in absolute value in the pre-deteriorating stage.
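Property (3) can be checked directly with a small example. The sketch below (our own illustration, with arbitrarily chosen eigenvalues) solves the steady-state Lyapunov equation $AC+CA^T+D=0$ for the covariance $C$ and shows the variance of a node with $V(i,1)\ne 0$ diverging as $\lambda_1\to -0$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal V
D = np.eye(3)                                      # noise covariance
i = int(np.argmax(np.abs(V[:, 0])))                # a node with V(i,1) != 0
for lam1 in (-1.0, -0.1, -0.001):
    A = V @ np.diag([lam1, -1.0, -2.0]) @ V.T
    C = solve_continuous_lyapunov(A, -D)           # A C + C A^T = -D
    print(f"lambda_1 = {lam1:7.3f}: C(i,i) = {C[i, i]:10.2f}")
```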

From the above discussion, a node $i$ such that $V(i,1)\ne 0$ shows a large fluctuation in the pre-deteriorating stage. Note that $V(i,1)$ is the $i$th element of the eigenvector of $A$ corresponding to the dominant eigenvalue $\lambda_1$. Here, we define the dominant eigenvector as follows:

Definition 1. The vector $v_1$ is called the dominant eigenvector of $A$ if $Av_1=\lambda_1 v_1$ for the dominant eigenvalue $\lambda_1$ of $A$.

Then, the nodes corresponding to the non-zero elements of the dominant eigenvector show large fluctuations in the pre-deteriorating stage. We can capture these large fluctuations and identify the dominant eigenvector using only HDLSS data, following the procedure explained in Section 3.1.

2.3 Design criterion

After detecting a pre-deteriorating stage, the next goal is to prevent the system from shifting to a deteriorating stage and to bring it back to the normal stage, which we refer to as re-stabilization in this article. When the system is in a pre-deteriorating stage, the real part of the dominant eigenvalue of the system matrix, which corresponds to the linearization around $z_e(\mu_{\mathrm{pre}})$, is close to zero. If the dominant eigenvalue is shifted away from zero, the system matrix becomes more stable; then the original nonlinear system also becomes more stable, with its equilibrium point remaining close to $z_e(\mu_{\mathrm{pre}})$, that is, re-stabilization is achieved. Therefore, we consider the problem of shifting the dominant eigenvalue away from zero based on a pole placement method.

We formulate the re-stabilization problem assuming that some parameters of the system matrix can be adjusted. For example, in gene regulatory networks, gene knockdown and gene overexpression can be implemented to adjust the system parameters.22 Naturally, two questions arise: which nodes should be targeted, and how much should the corresponding parameters be adjusted? By regarding the perturbation matrix as a product of the system input matrix and a feedback gain matrix, we address these questions as a problem of designing the two matrices.

The system (2) with an input term is written as follows:
$$ \dot{x}(t)= Ax(t)+ Bu(t)+\omega (t), $$ (4)
where $u\in \mathbb{R}^m$ is an input vector and $B\in \mathbb{R}^{n\times m}$ is an input assignment, which can be designed arbitrarily. For a system matrix $A$ that satisfies Assumptions 1 and 2, we set the requirements for the pole placement as follows:
  • The dominant eigenvalue $\lambda_1$ is shifted to $\lambda_1-\lambda_{\mathrm{shift}}$;
  • The eigenvalues $\lambda_2,\lambda_3,\dots,\lambda_n$ and the corresponding eigenvectors do not change,
where $\lambda_{\mathrm{shift}}$ is a positive real number satisfying $\lambda_1-\lambda_{\mathrm{shift}}>\lambda_2$. Let $\mathcal{K}(B,\lambda_{\mathrm{shift}})$ be the set of feedback gains $K\in \mathbb{R}^{m\times n}$ that realize a pole placement satisfying the above requirements for a given input assignment $B$ and $\lambda_{\mathrm{shift}}$. The state feedback with respect to $K\in \mathcal{K}(B,\lambda_{\mathrm{shift}})$ is written as follows:
$$ u= Kx. $$ (5)
We define two cost functions and determine the optimal input assignment and the optimal feedback gain with respect to each. One is the input energy
$$ {J}_1:= {\int}_0^{\infty }{u}^Tu\kern0.3em dt, $$ (6)
with $\omega(t)=0$, and the other is the Frobenius norm of the feedback gain
$$ {J}_2:= {\left\Vert K\right\Vert}_F. $$ (7)
Here, $J_1$ depends on the initial value of $x(t)$. Minimizing the Frobenius norm of the feedback gain corresponds to minimizing the amount of change of the components of the system matrix, which is supposed to reduce the load imposed on the system by the input. Let $B^{\ast}$ and $K^{\ast}$ be the optimal $B$ and $K$, which minimize $J_1$ for all initial states $x(0)$, or $J_2$. Note that $K^{\ast}$ depends on $B^{\ast}$, since the feedback gains that realize the above pole placement depend on $B$.

The data available for the design of $B$ and $K$ is the time-series data of all the nodes at steady state with sample size $N$, expressed as $\tilde{Z}_N=\{z(t_0),\dots,z(t_{N-1})\}$. For some systems, such as power systems or biological systems, the sample size $N$ is much smaller than the dimension $n$ of $z(t)$.10-12 Data with this property is called HDLSS data, defined as follows:

Definition 2. Data $\tilde{Z}_N=\{z(t_0),\dots,z(t_{N-1})\}$ is called HDLSS data if the sample size $N$ and the dimension $n$ of $z(t)$ satisfy $N\ll n$.

Since measured data with a sample size greater than or equal to the dimension of $z(t)$ is required for system identification, we cannot identify the system matrix $A$ from HDLSS data. On the other hand, we can obtain the HDLSS data of the state $x(t)=z(t)-z_e$, since we can estimate the equilibrium point $z_e$ from the steady-state data of $z(t)$.
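As a small illustration of this step (the function name and array shapes are our own), the equilibrium can be estimated by the sample mean of the steady-state HDLSS measurements, and the centered data then serves as the HDLSS data of $x$:

```python
import numpy as np

def center_hdlss(Z):
    """Z: (N x n) steady-state measurements with N << n."""
    z_e_hat = Z.mean(axis=0)      # estimate of the equilibrium point z_e
    return Z - z_e_hat, z_e_hat   # HDLSS data of x(t) = z(t) - z_e
```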

Summarizing the above discussion, the problem we address is to design the appropriate input assignment $B$ and feedback gain $K$ with respect to minimizing $J_1$ or $J_2$ under Assumptions 1 and 2.

3 PROPOSED METHOD

In this section, we present a method for designing the appropriate input assignment and feedback gain, which can be achieved using only the HDLSS data of $x$.

3.1 Available information

According to existing results,21, 23 we can obtain some essential information about the system matrix $A$ from the HDLSS data of $x(t)$. Let $C\in \mathbb{R}^{n\times n}$ be the covariance matrix of $x$ at steady state. Then, under Assumption 1, the following holds:21
$$ \underset{\lambda_1\to 0}{\lim}\left(-2{\lambda}_1\right)C\propto {v}_1{v}_1^T. $$ (8)
This relation implies that the dominant eigenvector of $C$ converges to the dominant eigenvector $v_1$ of $A$ as $\lambda_1$ approaches zero. Since $\lambda_1\to 0$ corresponds to $\mu\to \mu_{\mathrm{crit}}$, if the system is in a pre-deteriorating stage, that is, $\mu=\mu_{\mathrm{pre}}$, the dominant eigenvector of $C$ is approximately equal to the eigenvector $v_1$ of $A$ with $\mu=\mu_{\mathrm{pre}}$. In addition, formula (8) implies that $\eta_1\to \infty$ and $\operatorname{rank}(C)\to 1$ as $\lambda_1\to 0$, where $\eta_1$ denotes the dominant eigenvalue of $C$. Although the covariance matrix $C$ of $x$ is not available from HDLSS data of $x$, we can estimate its dominant eigenvector under certain conditions: if $\eta_1$ is sufficiently large and outstanding compared to the other eigenvalues of $C$, the dominant eigenvector of the sample covariance matrix $\tilde{C}$ of the HDLSS data of $x$ can be treated as an approximation of the dominant eigenvector of $C$.23 Therefore, we can obtain an approximation of the dominant eigenvector $v_1$ of $A$ from HDLSS data of $x$ in a pre-deteriorating stage.
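A minimal sketch of this estimation step (our own implementation, assuming the HDLSS samples are stacked in an $N\times n$ array `X`): the leading right-singular vector of the centered data equals the dominant eigenvector of the sample covariance $\tilde{C}$ and serves as the estimate $\tilde{v}_1$ of $v_1$.

```python
import numpy as np

def estimate_v1(X):
    """Estimate v_1 from HDLSS data X (N x n, N << n)."""
    Xc = X - X.mean(axis=0)
    # The top right-singular vector of Xc is the dominant eigenvector of the
    # sample covariance C~ = Xc^T Xc / (N-1); SVD avoids forming the n x n matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]
```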

3.2 Problem formulation

From the above discussion, under Assumptions 1 and 2, we assume that $v_1$ is available. For minimizing $J_1$, the problem we address is formulated as follows:

Problem 1. For the system (4), suppose Assumptions 1 and 2 hold and the dominant eigenvector $v_1$ of $A$ is available. Then, derive the optimal solution $(B^{\ast},K^{\ast})$ of the following optimization problem (P1) by using $v_1$:

$$ \left|\kern1em \begin{array}{cccc}& \mathrm{variables}& & B,K\\ {}& \operatorname{minimize}& & {J}_1\kern0.3em \mathrm{for}\kern0.3em \mathrm{all}\kern0.3em x(0)\in {\mathbb{R}}^n\\ {}& \mathrm{subject}\kern0.3em \mathrm{to}& & B\in \mathcal{B}, \\ {}& & & K\in \mathcal{K}\left(B,{\lambda}_{\mathrm{shift}}\right),\end{array}\right. $$ (P1)
where $\mathcal{B}$ is some feasible set of $B$.

Similarly, we refer to the problem obtained by replacing "$J_1$ for all $x(0)\in \mathbb{R}^n$" with "$J_2$" in Problem 1 as Problem 2.

3.3 Optimal design with the dominant eigenvector of the system matrix

For an arbitrary feasible set $\mathcal{B}$, Problems 1 and 2 can be solved by using the following theorem:

Theorem 1. For the system (4), suppose Assumptions 1 and 2 hold. Then, the optimization problem (P1) is equivalent to the optimization problem in Problem 2, and their common solution $(B^{\ast},K^{\ast})$ is given as follows:

$$ {B}^{\ast }=\underset{B\in \mathcal{B}}{\arg\;\max}\left\Vert {v}_1^TB\right\Vert, $$ (9)
$$ {K}^{\ast }=-\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^T{B}^{\ast}\right\Vert}^2}{B^{\ast}}^T{v}_1{v}_1^T, $$ (10)
where $\mathcal{B}$ is an arbitrary feasible set of $B$ and $v_1$ is the dominant eigenvector of the system matrix $A$.
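For a finite feasible set, (9) and (10) translate directly into code. The following sketch is our own (the name `Bs` for the list of candidate $n\times m$ input assignments and the argument order are assumptions, not the article's notation):

```python
import numpy as np

def optimal_design(v1, lam_shift, Bs):
    """Bs: iterable of candidate n x m input assignments (the feasible set)."""
    B_star = max(Bs, key=lambda B: np.linalg.norm(v1 @ B))        # eq. (9)
    gain = np.linalg.norm(v1 @ B_star) ** 2                       # ||v1^T B*||^2
    K_star = -(lam_shift / gain) * np.outer(B_star.T @ v1, v1)    # eq. (10)
    return B_star, K_star
```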

Proof of Theorem 1. Because $A$ is a real symmetric matrix, there exists an orthogonal matrix $V$ such that

$$ A=V\Lambda {V}^T, $$ (11)
$$ \Lambda =\operatorname{diag}\left\{{\lambda}_1,{\lambda}_2,\dots, {\lambda}_n\right\}, $$ (12)
and its column vectors are normalized. Now, let $\hat{A}$ be the system matrix of the closed-loop system with the state feedback (5). Then, the following equation holds:
$$ \hat{A}=A+ BK=V\Lambda {V}^T+ BK. $$ (13)
The condition that the dominant eigenvalue $\lambda_1$ is shifted to $\lambda_1-\lambda_{\mathrm{shift}}$ while the other eigenvalues and their corresponding eigenvectors are unchanged is equivalent to the existence of a vector $\hat{v}$ such that
$$ \hat{A}\hat{v}=\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)\hat{v}, $$ (14)
$$ \hat{A}{v}_i={\lambda}_i{v}_i\kern1em \left(i=2,\dots, n\right), $$ (15)
where $v_i\ (i=2,\dots,n)$ is the $i$th column vector of $V$. Note that $\hat{v}$ is determined by $(B,K)$ only up to a constant multiple. By using (13), these equations are combined into a single equation as follows:
$$ BK\left[\hat{v}\kern0.5em {v}_2\kern0.5em \cdots \kern0.5em {v}_n\right]=\left[\begin{array}{cccc}-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T\hat{v}& O& \cdots & O\end{array}\right]. $$ (16)

On the other hand, because the eigenvalues of $\hat{A}$ are all real and distinct, there exists a non-singular matrix $\hat{V}$ such that

$$ \hat{V}=\left[\begin{array}{cccc}\hat{v}& {v}_2& \cdots & {v}_n\end{array}\right], $$ (17)
$$ \hat{A}=\hat{V}\hat{\Lambda}{\hat{V}}^{-1}, $$ (18)
$$ \hat{\Lambda}=\operatorname{diag}\left\{{\lambda}_1-{\lambda}_{\mathrm{shift}},{\lambda}_2,\dots, {\lambda}_n\right\}. $$ (19)
Now, let ${\hat{w}}_1^T$ be the first row vector of ${\hat{V}}^{-1}$. Then, from ${\hat{V}}^{-1}\hat{V}=I$, the following equation holds:
$$ {\hat{w}}_1^T\hat{V}=\left[{\hat{w}}_1^T\hat{v}\kern0.5em {\hat{w}}_1^T{v}_2\kern0.5em \cdots \kern0.5em {\hat{w}}_1^T{v}_n\right]=\left[\begin{array}{cccc}1& 0& \cdots & 0\end{array}\right]. $$ (20)
Therefore, we obtain
$$ {\hat{w}}_1^T\hat{v}=1, $$ (21)
$$ {\hat{w}}_1^T{v}_i=0\kern1em \left(i=2,\dots, n\right). $$ (22)
Equation (22) implies that ${\hat{w}}_1$ is orthogonal to $v_i\ (i=2,\dots,n)$. Since $v_1,\dots,v_n$ are mutually orthogonal, the following holds:
$$ {\hat{w}}_1\in \mathrm{Span}\left\{{v}_1\right\}. $$ (23)
Furthermore, because $\hat{v}$ is determined only up to a constant multiple, there exists a vector $\hat{v}$ such that
$$ {v}_1^T\hat{v}=1. $$ (24)
In this case, by using (21) and (23), the following holds:
$$ {\hat{w}}_1={v}_1. $$ (25)
Then, by using (16), (17), and (25), we have
$$ BK=\left[-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T\hat{v}\kern0.5em O\kern0.5em \cdots \kern0.5em O\right]{\hat{V}}^{-1}=-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T\hat{v}{v}_1^T. $$ (26)
From the above, the following implication holds:
$$ K\in \mathcal{K}\left(B,{\lambda}_{\mathrm{shift}}\right)\kern0.5em \Rightarrow \kern0.5em \exists \hat{v};\kern0.3em BK=-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T\hat{v}{v}_1^T,\kern0.3em {v}_1^T\hat{v}=1. $$ (27)
Moreover, by substituting (26) into (13) and using (24), we recover (14) and (15). Thus, the converse also holds:
$$ K\in \mathcal{K}\left(B,{\lambda}_{\mathrm{shift}}\right)\kern0.5em \Leftarrow \kern0.5em \exists \hat{v};\kern0.3em BK=-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T\hat{v}{v}_1^T,\kern0.3em {v}_1^T\hat{v}=1. $$ (28)
Then, the optimization problem (P1) is rewritten as follows:
$$ \left|\kern1em \begin{array}{cccc}& \mathrm{variables}& & B,K\\ {}& \operatorname{minimize}& & {J}_1\kern0.3em \mathrm{for}\kern0.3em \mathrm{all}\kern0.3em x(0)\in {\mathbb{R}}^n\\ {}& \mathrm{subject}\kern0.3em \mathrm{to}& & B\in \mathcal{B}, \\ {}& & & \exists \hat{v};\kern0.3em BK=-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T\hat{v}{v}_1^T,\kern0.3em {v}_1^T\hat{v}=1.\end{array}\right. $$ (P2)
Similarly, the optimization problem for minimizing $J_2$ is written as follows:
$$ \left|\kern1em \begin{array}{cccc}& \mathrm{variables}& & B,K\\ {}& \operatorname{minimize}& & {J}_2\\ {}& \mathrm{subject}\kern0.3em \mathrm{to}& & B\in \mathcal{B}, \\ {}& & & \exists \hat{v};\kern0.3em BK=-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T\hat{v}{v}_1^T,\kern0.3em {v}_1^T\hat{v}=1.\end{array}\right. $$ (P3)

Now, for the optimization problem (P2), we fix the input assignment $B$ and consider the following subproblem:

$$ \left|\kern1em \begin{array}{cccc}& \mathrm{variables}& & K\\ {}& \operatorname{minimize}& & {J}_1\kern0.3em \mathrm{for}\kern0.3em \mathrm{all}\kern0.3em x(0)\in {\mathbb{R}}^n\\ {}& \mathrm{subject}\kern0.3em \mathrm{to}& & \exists \hat{v};\kern0.3em BK=-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T\hat{v}{v}_1^T,\kern0.3em {v}_1^T\hat{v}=1.\end{array}\right. $$ (P4)
We define the following feedback gain:
$$ {K}^{\circ}:= -\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{B}^T{v}_1{v}_1^T. $$ (29)
Then, $K^{\circ}$ is a feasible solution of the subproblem (P4) because, if we set
$$ \hat{v}={\hat{v}}^{\circ}:= \frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}V{\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right)}^{-1}{V}^TB{B}^T{v}_1, $$ (30)
we have
$$ -V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T\hat{v}{v}_1^T=-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T{\hat{v}}^{\circ }{v}_1^T=-\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}B{B}^T{v}_1{v}_1^T=B{K}^{\circ }, $$ (31)
$$ {v}_1^T\hat{v}={v}_1^T{\hat{v}}^{\circ }=\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{v}_1^TV{\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right)}^{-1}{V}^TB{B}^T{v}_1=\frac{1}{{\left\Vert {v}_1^TB\right\Vert}^2}{v}_1^TB{B}^T{v}_1=1. $$ (32)
Furthermore, any $K\in \mathcal{K}(B,\lambda_{\mathrm{shift}})$ can be expressed with a perturbation term $K_p$ as follows:
$$ K={K}^{\circ }+{K}_p. $$ (33)
In addition, the corresponding vector $\hat{v}$ is expressed as follows:
$$ \hat{v}={\hat{v}}^{\circ }+{\hat{v}}_p, $$ (34)
where ${\hat{v}}_p$ is a perturbation term. In this case, the constraints of the subproblem (P4) give the following:
$$ B\left({K}^{\circ }+{K}_p\right)=-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T\left({\hat{v}}^{\circ }+{\hat{v}}_p\right){v}_1^T, $$ (35)
$$ {v}_1^T\left({\hat{v}}^{\circ }+{\hat{v}}_p\right)=1. $$ (36)
By using (31) and (32), the left-hand sides of (35) and (36) become
$$ B\left({K}^{\circ }+{K}_p\right)=-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T{\hat{v}}^{\circ }{v}_1^T+B{K}_p, $$ (37)
$$ {v}_1^T\left({\hat{v}}^{\circ }+{\hat{v}}_p\right)=1+{v}_1^T{\hat{v}}_p. $$ (38)
Thus, we obtain
$$ B{K}_p=-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T{\hat{v}}_p{v}_1^T, $$ (39)
$$ {v}_1^T{\hat{v}}_p=0. $$ (40)
Now, the input $u$ for an initial state $x_0 := x(0)$ is written as follows:
$$ {\displaystyle \begin{array}{ll}u& = Kx\\ {}& =\left({K}^{\circ }+{K}_p\right)\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0\\ {}& =-\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{B}^T{v}_1{v}_1^T\left[\hat{v}\kern0.5em {v}_2\kern0.5em \cdots \kern0.5em {v}_n\right]\left[\begin{array}{cc}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}& O\\ {}O& \ast \end{array}\right]\left[\begin{array}{c}{\hat{w}}_1^T\\ {}\ast \end{array}\right]{x}_0+{K}_p\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0.\end{array}} $$ (41)
Since $v_1$ is orthogonal to $v_2,\dots,v_n$, by using (24) and (25), we get
$$ {\displaystyle \begin{array}{ll}u& =-\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{B}^T{v}_1\left[1\kern0.5em 0\kern0.5em \cdots \kern0.5em 0\right]\left[\begin{array}{cc}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}& O\\ {}O& \ast \end{array}\right]\left[\begin{array}{c}{v}_1^T\\ {}\ast \end{array}\right]{x}_0+{K}_p\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0\\ {}& =-\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}{B}^T{v}_1{v}_1^T{x}_0+{K}_p\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0.\end{array}} $$ (42)
Therefore, the integrand of the input energy, that is, $u^Tu$, is calculated as follows:
$$ {\displaystyle \begin{array}{ll}{u}^Tu& =\left(-\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}{x}_0^T{v}_1{v}_1^TB+{x}_0^T{\left({\hat{V}}^{-1}\right)}^T{e}^{\hat{\Lambda}t}{\hat{V}}^T{K}_p^T\right)\left(-\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}{B}^T{v}_1{v}_1^T{x}_0+{K}_p\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0\right)\\ {}& ={\left(\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}\right)}^2{x}_0^T{v}_1{\left\Vert {v}_1^TB\right\Vert}^2{v}_1^T{x}_0-2\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}{x}_0^T{v}_1{v}_1^TB{K}_p\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0+{\left\Vert {K}_p\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0\right\Vert}^2\\ {}& ={\left(\frac{\lambda_{\mathrm{shift}}{v}_1^T{x}_0}{\left\Vert {v}_1^TB\right\Vert }{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}\right)}^2-2\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}{x}_0^T{v}_1{v}_1^TB{K}_p\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0+{\left\Vert {K}_p\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0\right\Vert}^2.\end{array}} $$ (43)
By using (39) and (40), the second term of the right-hand side of (43) becomes
$$ {\displaystyle \begin{array}{ll}-2\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}{x}_0^T{v}_1{v}_1^TB{K}_p\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0& =2\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}{x}_0^T{v}_1{v}_1^TV\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T{\hat{v}}_p{v}_1^T\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0\\ {}& =2\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}{x}_0^T{v}_1{e}_1^T\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T{\hat{v}}_p{v}_1^T\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0\\ {}& =2\frac{\lambda_{\mathrm{shift}}^2}{{\left\Vert {v}_1^TB\right\Vert}^2}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}{x}_0^T{v}_1{e}_1^T{V}^T{\hat{v}}_p{v}_1^T\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0\\ {}& =2\frac{\lambda_{\mathrm{shift}}^2}{{\left\Vert {v}_1^TB\right\Vert}^2}{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}{x}_0^T{v}_1{v}_1^T{\hat{v}}_p{v}_1^T\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0\\ {}& =0.\end{array}} $$ (44)
Thus, the cost function $J_1$ is written as follows:
$$ {\displaystyle \begin{array}{ll}{J}_1& ={\int}_0^{\infty }{u}^Tu\kern0.3em dt\\ {}& ={\int}_0^{\infty}\left({\left(\frac{\lambda_{\mathrm{shift}}{v}_1^T{x}_0}{\left\Vert {v}_1^TB\right\Vert }{e}^{\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}\right)}^2+{\left\Vert {K}_p\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0\right\Vert}^2\right) dt\\ {}& ={\left(\frac{\lambda_{\mathrm{shift}}{v}_1^T{x}_0}{\left\Vert {v}_1^TB\right\Vert}\right)}^2{\int}_0^{\infty }{e}^{2\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}\kern0.3em dt+{\int}_0^{\infty }{\left\Vert {K}_p\hat{V}{e}^{\hat{\Lambda}t}{\hat{V}}^{-1}{x}_0\right\Vert}^2 dt.\end{array}} $$ (45)
Since the second term of the right-hand side of (45) is non-negative, $K_p=O$ minimizes $J_1$ for all $x_0\in \mathbb{R}^n$. Hence, $K=K^{\circ}$ is the optimal solution of the subproblem (P4), and the optimization problem (P2) is equivalent to the following optimization problem:
$$ \left|\kern1em \begin{array}{cccc}& \mathrm{variables}& & B,K\\ {}& \operatorname{minimize}& & {J}_1={\left(\frac{\lambda_{\mathrm{shift}}{v}_1^T{x}_0}{\left\Vert {v}_1^TB\right\Vert}\right)}^2{\int}_0^{\infty }{e}^{2\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)t}\kern0.3em dt\kern0.3em \mathrm{for}\kern0.3em \mathrm{all}\kern0.3em {x}_0\in {\mathbb{R}}^n\\ {}& \mathrm{subject}\kern0.3em \mathrm{to}& & B\in \mathcal{B}, \\ {}& & & K=-\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{B}^T{v}_1{v}_1^T.\end{array}\right. $$ (P5)
Obviously, the optimal solution of (P5) is written as follows:
$$ {B}^{\ast}=\underset{B\in \mathcal{B}}{\arg\;\max}\kern0.2em \left\Vert {v}_1^TB\right\Vert, $$ (46)
$$ {K}^{\ast}=-\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^T{B}^{\ast}\right\Vert}^2}{B^{\ast}}^T{v}_1{v}_1^T. $$ (47)
Hence, $(B^{\ast},K^{\ast})$ is the optimal solution of the optimization problem (P1).

On the other hand, we similarly consider the following subproblem of the optimization problem (P3) for a given $B$:

$$ \left|\kern1em \begin{array}{cccc}& \mathrm{variables}& & K\\ {}& \operatorname{minimize}& & {J}_2\\ {}& \mathrm{subject}\kern0.3em \mathrm{to}& & \exists \hat{v};\kern0.3em BK=-V\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T\hat{v}{v}_1^T,\kern0.3em {v}_1^T\hat{v}=1.\end{array}\right. $$ (P6)
Because the constraint of this problem is identical to that of the subproblem (P4), $K^{\circ}$ is also a feasible solution of this problem. Moreover, by using the same expressions (33) and (34), the square of ${\left\Vert K\right\Vert}_F$ is written as follows:
$$ {\displaystyle \begin{array}{ll}{\left\Vert K\right\Vert}_F^2& =\mathrm{tr}\left({K}^TK\right)\\ {}& =\mathrm{tr}\left(\left({K^{\circ}}^T+{K}_p^T\right)\left({K}^{\circ }+{K}_p\right)\right)\\ {}& ={\left(\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}\right)}^2\mathrm{tr}\left({v}_1{v}_1^TB{B}^T{v}_1{v}_1^T\right)-2\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}\mathrm{tr}\left({v}_1{v}_1^TB{K}_p\right)+\mathrm{tr}\left({K}_p^T{K}_p\right)\\ {}& ={\left(\frac{\lambda_{\mathrm{shift}}}{\left\Vert {v}_1^TB\right\Vert}\right)}^2-2\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}\mathrm{tr}\left({v}_1{v}_1^TB{K}_p\right)+{\left\Vert {K}_p\right\Vert}_F^2.\end{array}} $$ (48)
By using (39) and (40), the second term of the right-hand side of (48) becomes
$$ {\displaystyle \begin{array}{ll}-2\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}\mathrm{tr}\left({v}_1{v}_1^TB{K}_p\right)& =2\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}\mathrm{tr}\left({v}_1{v}_1^TV\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T{\hat{v}}_p{v}_1^T\right)\\ {}& =2\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}\mathrm{tr}\left({v}_1{e}_1^T\left(\Lambda -\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)I\right){V}^T{\hat{v}}_p{v}_1^T\right)\\ {}& =2\frac{\lambda_{\mathrm{shift}}^2}{{\left\Vert {v}_1^TB\right\Vert}^2}\mathrm{tr}\left({v}_1{e}_1^T{V}^T{\hat{v}}_p{v}_1^T\right)\\ {}& =2\frac{\lambda_{\mathrm{shift}}^2}{{\left\Vert {v}_1^TB\right\Vert}^2}\mathrm{tr}\left({v}_1{v}_1^T{\hat{v}}_p{v}_1^T\right)\\ {}& =0.\end{array}} $$ (49)
Thus, the cost function $J_2$ is written as follows:
$$ {J}_2={\left\Vert K\right\Vert}_F=\sqrt{{\left(\frac{\lambda_{\mathrm{shift}}}{\left\Vert {v}_1^TB\right\Vert}\right)}^2+{\left\Vert {K}_p\right\Vert}_F^2}. $$ (50)
Since the second term under the square root in (50) is non-negative, $K_p=O$ minimizes $J_2$. Therefore, $K=K^{\circ}$ is the optimal solution of the subproblem (P6), as well as of the subproblem (P4). Then, the optimization problem (P3) is equivalent to the following optimization problem:
$$ \left|\kern1em \begin{array}{cccc}& \mathrm{variables}& & B,K\\ {}& \operatorname{minimize}& & {J}_2\left(=\frac{\lambda_{\mathrm{shift}}}{\left\Vert {v}_1^TB\right\Vert}\right)\\ {}& \mathrm{subject}\kern0.3em \mathrm{to}& & B\in \mathcal{B}, \\ {}& & & K=-\frac{\lambda_{\mathrm{shift}}}{{\left\Vert {v}_1^TB\right\Vert}^2}{B}^T{v}_1{v}_1^T.\end{array}\right. $$ (P7)
Clearly, the optimal solution of (P7) is given by (46) and (47). From the above, $(B^{\ast},K^{\ast})$ is the optimal solution of both the optimization problem (P1) and the optimization problem in Problem 2.

Theorem 1 shows that Problems 1 and 2 can be solved in the same way with only the knowledge of the dominant eigenvector $v_1$. Note that Theorem 1 holds only in the ideal case where $v_1$ is obtained accurately. For implementation, instead of $v_1$, we use the dominant eigenvector $\tilde{v}_1$ of the sample covariance matrix $\tilde{C}$ of the HDLSS data of $x$, which is an estimate of $v_1$ when the system is in a pre-deteriorating stage. Namely, we implement the estimated optimal re-stabilization with the estimated optimal input assignment and feedback gain, which are designed by replacing $v_1$ with $\tilde{v}_1$ in (9) and (10). The effectiveness of this estimated optimal design method is supported by the numerical simulations in Section 4.
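As a usage example, the estimated optimal design can be sketched end-to-end as follows (synthetic data and all parameter values, including $\lambda_{\mathrm{shift}}=0.5$, are our own choices; `estimate_v1` and `optimal_design` are the sketches given earlier):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, m = 100, 10, 3                                # HDLSS: N << n
# Synthetic steady-state fluctuation data with a strong common mode,
# mimicking the near rank-one covariance structure implied by (8).
X = 0.1 * rng.standard_normal((N, n))
X += rng.standard_normal((N, 1)) * np.ones(n) / np.sqrt(n)
v1_tilde = estimate_v1(X)                           # estimate of v_1
Bs = [np.eye(n)[:, rng.choice(n, m, replace=False)] for _ in range(20)]
B_hat, K_hat = optimal_design(v1_tilde, 0.5, Bs)    # lambda_shift = 0.5
```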

In controlling network systems, some sparseness of the input assignment is often required. For example, consider the following feasible set $\mathcal{B}$ of input assignments $B$, in which $B$ is restricted to be sparse:
$$ \mathcal{B} =\left\{B=\left[{b}_1\kern0.5em \cdots \kern0.5em {b}_m\right]|{b}_i\in \left\{{e}_1,\dots, {e}_n\right\}\kern0.3em \Big(i=1,\dots, m\Big),\kern0.3em {b}_i\ne {b}_j\kern0.3em \left(i\ne j\right)\right\}, $$ (51)
where $e_i$ is the $i$th standard unit vector, that is, the $i$th element of $e_i$ is one and the others are zero. This constraint means that the network has only one input port for each entry of the input vector $u$, that is, the number of input ports is $m$. Then, by using Theorem 1, the following corollary holds:

Corollary 1. For the system (4), suppose Assumptions 1 and 2 hold. If the feasible set $\mathcal{B}$ is given by (51), the optimal solution $(B^{\ast},K^{\ast})$ of the optimization problem (P1) is given as follows:

$$ {B}^{\ast}=\left[\begin{array}{ccc}{e}_{k_1}& \cdots & {e}_{k_m}\end{array}\right],\kern1em {k}_i=\underset{k\in \left\{1,\dots, n\right\}\setminus {I}_{i-1}}{\arg\;\max}\mid {v}_1(k)\mid, \kern1em {I}_i=\left\{{k}_1,\dots, {k}_i\right\},\kern1em {I}_0=\varnothing \kern1em \left(i=1,\dots, n\right), $$ (52)
$$ {K}^{\ast}=-\frac{\lambda_{\mathrm{shift}}}{\sum_{k\in {I}_m}{\left({v}_1(k)\right)}^2}{B^{\ast}}^T{v}_1{v}_1^T. $$ (53)

Corollary 1 implies that, under the constraint (51), the optimal input ports are the nodes corresponding to the indices of the top $m$ elements in absolute value of the dominant eigenvector $v_1$.
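Under the feasible set (51), the search in (9) therefore reduces to picking the top-$m$ indices, so (52) and (53) admit the following direct sketch (our own implementation):

```python
import numpy as np

def optimal_design_sparse(v1, lam_shift, m):
    n = v1.size
    idx = np.argsort(-np.abs(v1))[:m]          # top-m indices k_1, ..., k_m
    B_star = np.zeros((n, m))
    B_star[idx, np.arange(m)] = 1.0            # columns e_{k_1}, ..., e_{k_m}
    gain = np.sum(v1[idx] ** 2)                # sum_{k in I_m} v_1(k)^2
    K_star = -(lam_shift / gain) * np.outer(B_star.T @ v1, v1)   # eq. (53)
    return B_star, K_star, idx
```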

Proof of Corollary 1. Let $l_i$ be the index of the input port designated by $b_i$, that is, $b_i=e_{l_i}$. Then, by using (9) in Theorem 1, the optimal input assignment is given as follows:

$$ {\displaystyle \begin{array}{ll}{B}^{\ast }& =\underset{\left\{{l}_1,\dots, {l}_m\right\}\subset \left\{1,\dots, n\right\}}{\arg\;\max}\left\Vert \left[{v}_1\left({l}_1\right)\kern0.5em \cdots \kern0.5em {v}_1\left({l}_m\right)\right]\right\Vert \\ {}& =\underset{\left\{{l}_1,\dots, {l}_m\right\}\subset \left\{1,\dots, n\right\}}{\arg\;\max}\kern0.2em \sum \limits_{i=1}^m{\left({v}_1\left({l}_i\right)\right)}^2=\underset{\left\{{l}_1,\dots, {l}_m\right\}\subset \left\{1,\dots, n\right\}}{\arg\;\max}\kern0.2em \sum \limits_{i=1}^m\left|{v}_1\left({l}_i\right)\right|.\end{array}} $$ (54)
Note that $\mid {v}_1\left({k}_1\right)\mid \ge \mid {v}_1\left({k}_2\right)\mid \ge \cdots \ge \mid {v}_1\left({k}_n\right)\mid$. Thus, we obtain
$$ {\displaystyle \begin{array}{ll}\sum \limits_{i=1}^m\left|{v}_1\left({l}_i\right)\right|& \le \mid {v}_1\left({k}_1\right)\mid +\sum \limits_{i=2}^m\mid {v}_1\left({l}_i\right)\mid \\ {}& \le \mid {v}_1\left({k}_1\right)\mid +\mid {v}_1\left({k}_2\right)\mid +\sum \limits_{i=3}^m\mid {v}_1\left({l}_i\right)\mid \\ {}& \le \cdots \le \mid {v}_1\left({k}_1\right)\mid +\cdots +\mid {v}_1\left({k}_m\right)\mid =\sum \limits_{i=1}^m\mid {v}_1\left({k}_i\right)\mid, \end{array}} $$ (55)
where equality holds when $l_i=k_i\ (i=1,\dots,m)$. Therefore, we obtain (52). Moreover, by using (10) in Theorem 1, we immediately have (53).

3.4 Error evaluation of the practical design

In practice, it is not easy to control the interactions of many nodes of a complex network system at once. Thus, not only $B$ but also $K$ should be designed to be sparse. $B^{\ast}$ can be made sparse by using an appropriate feasible set, for example (51), while $K^{\ast}$ is not always sparse because the dominant eigenvector $v_1$ is not always sparse. We therefore simplify the optimal feedback gain $K^{\ast}$ to make it sparse by setting some elements with small absolute values to zero. In this case, there is an error between $K^{\ast}$ and the simplified feedback gain $K_s^{\ast}$, which causes an error in the shift of the dominant eigenvalue.

We now analyze the error of the dominant eigenvalue of the closed-loop system caused by using $(B^{\ast},K_s^{\ast})$ under the constraint (51). Here, we define the simplified feedback gain $K_s^{\ast}$ as follows:
$$ {K}_s^{\ast}:= {K}^{\ast }M(l), $$ (56)
where $l\in \{1,\dots,n\}$ and $M(l)$ is a square matrix such that
$$ M(l){e}_k=\left\{\begin{array}{cc}{e}_k\kern1em & \left(k\in {I}_l\right)\\ {}0\kern1em & \left(k\notin {I}_l\right)\end{array}\right.\kern1em \left(k=1,\dots, n\right), $$ (57)
and $I_l$ is defined as in (52). In this expression, each row of $K_s^{\ast}$ has only $l$ non-zero elements, which correspond to the top $l$ elements in absolute value of each row of $K^{\ast}$. Then, the error is evaluated by the following theorem:

Theorem 2. Let $\hat{\lambda}_1(B^{\ast},K_s^{\ast})$ be the dominant eigenvalue of $A+B^{\ast}K_s^{\ast}$ and let $\hat{\lambda}_1^{(1)}(B^{\ast},K_s^{\ast})$ be the first-order approximation of $\hat{\lambda}_1(B^{\ast},K_s^{\ast})$ with respect to ${\left\Vert {B}^{\ast }{K}_s^{\ast }-{B}^{\ast }{K}^{\ast}\right\Vert}_F\left(=: \epsilon \right)$. Then, under the constraint (51), the following holds:

$$ {\hat{\lambda}}_1\left({B}^{\ast },{K}_s^{\ast}\right)\approx {\hat{\lambda}}_1^{(1)}\left({B}^{\ast },{K}_s^{\ast}\right)\in \left[{\lambda}_1-{\lambda}_{\mathrm{shift}}-{\lambda}_{\mathrm{error}},{\lambda}_1-{\lambda}_{\mathrm{shift}}+{\lambda}_{\mathrm{error}}\right], $$ (58)
where
$$ {\lambda}_{\mathrm{error}}={\lambda}_{\mathrm{shift}}\sqrt{\sum \limits_{i=1}^n{\left(\frac{\lambda_{\mathrm{shift}}}{\lambda_i-\left({\lambda}_1-{\lambda}_{\mathrm{shift}}\right)}\right)}^2\frac{\sum_{k\notin {I}_l}{\left({v}_1(k)\right)}^2}{\sum_{k\in {I}_m}{\left({v}_1(k)\right)}^2}}. $$ (59)

Theorem 2 implies that the difference between $\hat{\lambda}_1(B^{\ast},K_s^{\ast})$ and $\lambda_1-\lambda_{\mathrm{shift}}$ is less than or equal to $\lambda_{\mathrm{error}}$ if ${\left\Vert {B}^{\ast }{K}_s^{\ast }-{B}^{\ast }{K}^{\ast}\right\Vert}_F$ is sufficiently small. In addition, $\lambda_{\mathrm{error}}$ becomes small as $m$ or $l$ increases.
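The simplification (56)-(57) and the bound (59) can both be evaluated numerically. The sketch below is our own (it assumes `lam` holds the eigenvalues of $A$ sorted so that `lam[0]` equals $\lambda_1$); it builds $K_s^{\ast}=K^{\ast}M(l)$ and computes $\lambda_{\mathrm{error}}$:

```python
import numpy as np

def simplify_gain(K_star, v1, l):
    """K_s* = K* M(l): keep only the columns indexed by I_l (eqs. (56)-(57))."""
    I_l = np.argsort(-np.abs(v1))[:l]
    K_s = np.zeros_like(K_star)
    K_s[:, I_l] = K_star[:, I_l]          # right-multiplication by M(l)
    return K_s

def lambda_error(lam, v1, lam_shift, m, l):
    """Evaluate the bound (59); lam sorted so that lam[0] = lambda_1."""
    order = np.argsort(-np.abs(v1))
    I_m, I_l = order[:m], order[:l]
    outside = np.ones(v1.size, dtype=bool)
    outside[I_l] = False                  # indices k not in I_l
    ratio = np.sum(v1[outside] ** 2) / np.sum(v1[I_m] ** 2)
    target = lam[0] - lam_shift           # lambda_1 - lambda_shift
    s = np.sum((lam_shift / (lam - target)) ** 2)
    return lam_shift * np.sqrt(s * ratio)
```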

Proof of Theorem 2.The system matrix of the closed-loop system with ( B , K s ) $$ \left({B}^{\ast },{K}_s^{\ast}\right) $$ is written as follows:

 ( B , K s ) = A + B K s = ( A + B K ) + ( B K s B K ) =  + ϵ Q , $$ {\displaystyle \begin{array}{ll}\hat{A}\left({B}^{\ast },{K}_s^{\ast}\right)& =A+{B}^{\ast }{K}_s^{\ast}\\ {}& =\left(A+{B}^{\ast }{K}^{\ast}\right)+\left({B}^{\ast }{K}_s^{\ast }-{B}^{\ast }{K}^{\ast}\right)\\ {}& ={\hat{A}}^{\ast }+\epsilon Q,\end{array}} $$ (60)
where $\hat{A}^\ast = A + B^\ast K^\ast$, $\epsilon = \| B^\ast K_s^\ast - B^\ast K^\ast \|_F$, and $Q = (B^\ast K_s^\ast - B^\ast K^\ast)/\epsilon$. We abbreviate $\hat{A}(B^\ast, K_s^\ast)$ as $\hat{A}(\epsilon)$ and $\hat{\lambda}_1(B^\ast, K_s^\ast)$ as $\hat{\lambda}_1(\epsilon)$. There exists a vector $\hat{v}_1(\epsilon)$ such that
$$ \hat{A}(\epsilon) \hat{v}_1(\epsilon) = \hat{\lambda}_1(\epsilon) \hat{v}_1(\epsilon). $$ (61)
By differentiating both sides with respect to $\epsilon$ and substituting $\epsilon = 0$, we obtain
$$ Q \hat{v}_1^\ast + \hat{A}^\ast \left. \frac{\partial \hat{v}_1(\epsilon)}{\partial \epsilon} \right|_{\epsilon=0} = \hat{v}_1^\ast \left. \frac{\partial \hat{\lambda}_1(\epsilon)}{\partial \epsilon} \right|_{\epsilon=0} + \hat{\lambda}_1 \left. \frac{\partial \hat{v}_1(\epsilon)}{\partial \epsilon} \right|_{\epsilon=0}, $$ (62)
where $\hat{\lambda}_1 \, (= \lambda_1 - \lambda_{\mathrm{shift}})$ is the dominant eigenvalue of $\hat{A}^\ast$ and $\hat{v}_1^\ast$ is the corresponding eigenvector. By left-multiplying by the left eigenvector $\hat{w}_1^{\ast T}$ corresponding to $\hat{\lambda}_1$, we get
$$ \left. \frac{\partial \hat{\lambda}_1(\epsilon)}{\partial \epsilon} \right|_{\epsilon=0} = \hat{w}_1^{\ast T} Q \hat{v}_1^\ast. $$ (63)
Therefore, the first-order approximation $\hat{\lambda}_1^{(1)}(\epsilon)$ of $\hat{\lambda}_1(\epsilon)$ is
$$ \begin{aligned} \hat{\lambda}_1(\epsilon) &\approx \hat{\lambda}_1^{(1)}(\epsilon) = \hat{\lambda}_1 + \epsilon \left. \frac{\partial \hat{\lambda}_1(\epsilon)}{\partial \epsilon} \right|_{\epsilon=0} \\ &= \lambda_1 - \lambda_{\mathrm{shift}} + \hat{w}_1^{\ast T} \epsilon Q \hat{v}_1^\ast \\ &= \lambda_1 - \lambda_{\mathrm{shift}} + \hat{w}_1^{\ast T} (B^\ast K_s^\ast - B^\ast K^\ast) \hat{v}_1^\ast. \end{aligned} $$ (64)
According to the proof of Theorem 1, the following hold:
$$ \hat{v}_1^\ast = \frac{\lambda_{\mathrm{shift}}}{\| v_1^T B^\ast \|^2} V \left( \Lambda - (\lambda_1 - \lambda_{\mathrm{shift}}) I \right)^{-1} V^T B^\ast B^{\ast T} v_1, $$ (65)
$$ \hat{w}_1^\ast = v_1. $$ (66)
Then, by using (52), (53), and (56), we have
$$ \begin{aligned} \hat{w}_1^{\ast T} (B^\ast K_s^\ast - B^\ast K^\ast) \hat{v}_1^\ast &= v_1^T B^\ast K^\ast (M(l) - I) \frac{\lambda_{\mathrm{shift}}}{\| v_1^T B^\ast \|^2} V \left( \Lambda - (\lambda_1 - \lambda_{\mathrm{shift}}) I \right)^{-1} V^T B^\ast B^{\ast T} v_1 \\ &= \frac{\lambda_{\mathrm{shift}}^2}{\| v_1^T B^\ast \|^4} v_1^T B^\ast B^{\ast T} v_1 \, v_1^T (I - M(l)) V \left( \Lambda - (\lambda_1 - \lambda_{\mathrm{shift}}) I \right)^{-1} V^T B^\ast B^{\ast T} v_1 \\ &= \frac{\lambda_{\mathrm{shift}}^2}{\| v_1^T B^\ast \|^2} v_1^T (I - M(l)) V \left( \Lambda - (\lambda_1 - \lambda_{\mathrm{shift}}) I \right)^{-1} V^T B^\ast B^{\ast T} v_1 \\ &= \frac{\lambda_{\mathrm{shift}}^2}{\sum_{k \in I_m} (v_1(k))^2} v_1^T (I - M(l)) V \left( \Lambda - (\lambda_1 - \lambda_{\mathrm{shift}}) I \right)^{-1} V^T B^\ast B^{\ast T} v_1. \end{aligned} $$ (67)
Then, the Frobenius norm of $\hat{w}_1^{\ast T} (B^\ast K_s^\ast - B^\ast K^\ast) \hat{v}_1^\ast$ satisfies
$$ \begin{aligned} \left\| \hat{w}_1^{\ast T} (B^\ast K_s^\ast - B^\ast K^\ast) \hat{v}_1^\ast \right\|_F &= \frac{\lambda_{\mathrm{shift}}^2}{\sum_{k \in I_m} (v_1(k))^2} \left\| v_1^T (I - M(l)) V \left( \Lambda - (\lambda_1 - \lambda_{\mathrm{shift}}) I \right)^{-1} V^T B^\ast B^{\ast T} v_1 \right\|_F \\ &\le \frac{\lambda_{\mathrm{shift}}^2}{\sum_{k \in I_m} (v_1(k))^2} \left\| v_1^T (I - M(l)) \right\|_F \left\| V \left( \Lambda - (\lambda_1 - \lambda_{\mathrm{shift}}) I \right)^{-1} V^T \right\|_F \left\| B^\ast B^{\ast T} v_1 \right\|_F \\ &= \frac{\lambda_{\mathrm{shift}}^2}{\sum_{k \in I_m} (v_1(k))^2} \left\| \left( \Lambda - (\lambda_1 - \lambda_{\mathrm{shift}}) I \right)^{-1} \right\|_F \sqrt{ \sum_{k \notin I_l} (v_1(k))^2 \sum_{k \in I_m} (v_1(k))^2 } \\ &= \lambda_{\mathrm{shift}} \sqrt{ \sum_{i=1}^n \left( \frac{\lambda_{\mathrm{shift}}}{\lambda_i - (\lambda_1 - \lambda_{\mathrm{shift}})} \right)^2 \frac{\sum_{k \notin I_l} (v_1(k))^2}{\sum_{k \in I_m} (v_1(k))^2} }. \end{aligned} $$ (68)
Hence, (58) holds.
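The first-order argument in (61)-(63) can also be checked numerically: for a simple dominant eigenvalue, a finite-difference derivative of $\hat{\lambda}_1(\epsilon)$ should match $\hat{w}_1^{\ast T} Q \hat{v}_1^\ast$. The sketch below uses a random symmetric $\hat{A}^\ast$ (so that $\hat{w}_1^\ast = \hat{v}_1^\ast$); it is an illustration of the perturbation formula, not the article's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A_hat = rng.standard_normal((n, n))
A_hat = (A_hat + A_hat.T) / 2            # symmetric: left = right eigenvectors
Q = rng.standard_normal((n, n))
Q /= np.linalg.norm(Q, "fro")            # ||Q||_F = 1, as in the proof

lams, V = np.linalg.eigh(A_hat)
lam1, v1 = lams[-1], V[:, -1]            # dominant eigenpair of A_hat

eps = 1e-6
lam1_eps = np.linalg.eigvals(A_hat + eps * Q).real.max()
fd = (lam1_eps - lam1) / eps             # finite-difference derivative
analytic = v1 @ Q @ v1                   # w1^T Q v1, cf. (63)
print(fd, analytic)                      # should agree up to O(eps)
```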

3.5 Proposed algorithm

The algorithm for designing the input assignment $B$ and the feedback gain $K$ is illustrated in Figure 2, and a minimal code sketch is given after the figure. The process is summarized as follows:
  1. Obtain HDLSS data $\{x(t_1), x(t_2), \dots, x(t_N)\}$ from a network system in a pre-deteriorating stage;
  2. Compute the sample covariance matrix $\tilde{C}$ of $x$ from the HDLSS data;
  3. Obtain the dominant eigenvector $\tilde{v}_1$ of $\tilde{C}$;
  4. Design the input assignment $B$ and the feedback gain $K$ according to Theorem 1, Corollary 1, or Theorem 2, with the dominant eigenvector $v_1$ of $A$ replaced by $\tilde{v}_1$.
Figure 2. Framework of the proposed algorithm.
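The following is a minimal end-to-end sketch of steps 1-4. The rank-one gain formula at the end is our abbreviation: it reproduces the first-order eigenvalue shift of $-\lambda_{\mathrm{shift}}$ along $\tilde{v}_1$ and stands in for the exact expressions of Theorem 1 and Corollary 1, which are not repeated here.

```python
import numpy as np

def design_feedback(X, m, lam_shift):
    """Steps 1-4 of the proposed algorithm (illustrative sketch).

    X: n-by-N matrix whose columns are the HDLSS snapshots x(t_1),...,x(t_N).
    m: number of input ports (constraint (51)).
    """
    n = X.shape[0]
    C = np.cov(X)                          # step 2: sample covariance matrix
    _, V = np.linalg.eigh(C)
    v1 = V[:, -1]                          # step 3: dominant eigenvector
    idx = np.argsort(-np.abs(v1))[:m]      # top-m entries of |v1|
    B = np.eye(n)[:, idx]                  # step 4: B = [e_i1 ... e_im]
    Bv = B.T @ v1
    # Rank-one gain giving a first-order dominant-eigenvalue shift of
    # -lam_shift (our stand-in for the Corollary 1 formula).
    K = -lam_shift * np.outer(Bv, v1) / (Bv @ Bv)
    return B, K
```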

4 NUMERICAL SIMULATIONS

In this section, we present numerical simulations to examine the effectiveness of the proposed method for re-stabilizing complex network systems. We use a mathematical model whose true parameter values are known, so that we can compare the results of re-stabilization using the true dominant eigenvector with those using the dominant eigenvector estimated from the observed HDLSS data of the state.

4.1 Simulation model

We use an undirected network system, called the Network May Model, introduced by Nakagawa et al.6 and Matsumori et al.7 The dynamics are described as follows:
$$ \begin{aligned} \dot{z} &= F(z) + Bu + \omega, \\ F(z) &= \left[ f_1(z) \; \cdots \; f_n(z) \right]^T, \\ f_i(z) &= r_i z_i \left( 1 - \frac{z_i}{L} \right) - \frac{c z_i^2}{z_i^2 + h^2} + a \sum_{j=1}^n s_{ij} (z_j - z_i), \\ r_i &= r_0 - H \delta_i \quad (i = 1, \dots, n), \end{aligned} $$ (69)
where the network has $n$ nodes, $z = [z_1, \dots, z_n]^T$ is the state vector, $\omega \in \mathbb{R}^n$ is zero-mean white Gaussian noise with covariance matrix $D = \sigma^2 I$ for a standard deviation $\sigma$, and $L, h, a, r_0, H \in \mathbb{R}$ are constant parameters. Furthermore, $s_{ij}$ is an element of the adjacency matrix; that is, $s_{ij}$ is one if nodes $i$ and $j$ are connected and zero otherwise. Here, $\delta_i \in \mathbb{R}$ is a uniform random number in the interval $[-1, 1]$, so $r_i$ is determined randomly in the interval $[r_0 - H, r_0 + H]$ for each node. In addition, $c \in \mathbb{R}$ is a bifurcation parameter, which causes a saddle-node bifurcation at a certain value.
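A direct NumPy implementation of the drift $F(z)$ in (69) may look as follows; the function and argument names are ours, and $S$ denotes the adjacency matrix with elements $s_{ij}$.

```python
import numpy as np

def F(z, r, L, c, h, a, S):
    """Drift of the Network May Model (69), without input and noise.

    z: state vector (n,), r: per-node growth rates r_i = r0 - H * delta_i,
    S: 0/1 adjacency matrix with elements s_ij.
    """
    coupling = a * (S @ z - S.sum(axis=1) * z)   # a * sum_j s_ij (z_j - z_i)
    return r * z * (1.0 - z / L) - c * z**2 / (z**2 + h**2) + coupling
```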

4.2 Parameter setting

We use the Watts-Strogatz model24 for the network structure, which determines the values of $s_{ij}$, with $n = 100$ nodes, average degree $k = 4$, and edge rewiring probability $\beta = 0.04$. Figure 3 shows the resulting network structure.

Figure 3. Network structure.
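Such a network can be generated, for example, with NetworkX; the seed below is arbitrary and our own choice.

```python
import networkx as nx

# Watts-Strogatz small-world graph with the parameters of Section 4.2.
G = nx.watts_strogatz_graph(n=100, k=4, p=0.04, seed=0)
S = nx.to_numpy_array(G)   # adjacency matrix with elements s_ij
```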

The constant parameters are set to $r_0 = 1.0$, $L = 10$, $h = 1.0$, $a = 0.1$, $H = 0.1$, and $\sigma = 0.001$. We solve the model (69) by the Euler–Maruyama method.25 The time step width is $\Delta t = 0.01$, the number of steps is $T = 1.0 \times 10^5$, and the initial state $z_0$ is one of the stable equilibrium points of the system; we use the convergence value of the noiseless, input-free version of (69) as the equilibrium point. With these settings and the random parameters $\delta_i$, the bifurcation occurs when $c$ reaches $2.5418$ (see Figure 4). Thus, we set $c = 2.5417$ so that the network system is in a pre-deteriorating stage and satisfies Assumptions 1 and 2. The observation data for the sample covariance matrix are sampled from the numerical solution at intervals of $T/N$ steps, where $N$ is the sample size. This calculation is repeated $10$ times with different realizations of the random noise. The system matrix $A$ is obtained by linearly approximating the model (69) around the equilibrium point. The constraint on the input assignment $B$ is (51) with $m = 2$; that is, the number of input ports is $2$. In addition, we set $\lambda_{\mathrm{shift}} = 0.05$ for the design of the feedback gain.
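A minimal Euler–Maruyama loop for (69) under these settings might read as follows; the function name and the open-loop usage are our own sketch, with the drift $F$ from Section 4.1.

```python
import numpy as np

def euler_maruyama(z0, drift, sigma, dt, steps, rng):
    """Integrate dz = drift(z) dt + sigma dW with the Euler-Maruyama scheme."""
    traj = np.empty((steps + 1, len(z0)))
    traj[0] = z0
    for t in range(steps):
        dW = rng.standard_normal(len(z0)) * np.sqrt(dt)
        traj[t + 1] = traj[t] + drift(traj[t]) * dt + sigma * dW
    return traj

# Hypothetical open-loop usage with the settings above:
# rng = np.random.default_rng(0)
# traj = euler_maruyama(z0, lambda z: F(z, r, L, c, h, a, S), sigma=0.001,
#                       dt=0.01, steps=100_000, rng=rng)
```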

Figure 4. State trajectory of the system without state feedback when the bifurcation parameter $c$ changes. The parameter $c$ is set to $2.5417$ from $t = 0$ to $t = T\Delta t$ and to $2.5418$ from $t = (T+1)\Delta t$ to $t = 2T\Delta t$.

4.3 Simulation results

Figure 5 shows the dominant eigenvector $v_1$ of the system matrix and the dominant eigenvector $\tilde{v}_1$ of the sample covariance matrix with sample sizes $N = 5$ and $N = 100$, respectively. According to Corollary 1, the optimal input assignment is $B^\ast = [e_6 \; e_7]$, since the 6th and 7th elements are the top 2 elements in absolute value of $v_1$ (see Figure 5A). In the case of $\tilde{v}_1$ with $N = 5$, the 6th and 7th elements are the top 2 elements in absolute value for 9 out of the 10 samples, while the 6th and 4th are the top 2 for the remaining one (see Figure 5B). Thus, the estimated optimal input assignment $\tilde{B}^\ast$ is identical to $B^\ast$ for 9 out of the 10 samples. Although $\tilde{B}^\ast = [e_6 \; e_4] \, (\ne B^\ast)$ for the remaining sample, it is still considered effective, because the 4th element has the 3rd largest absolute value in $v_1$. In the case of $\tilde{v}_1$ with $N = 100$, the 6th and 7th elements are the top 2 elements in absolute value for all 10 samples (see Figure 5C), and hence $\tilde{B}^\ast = B^\ast$.

Figure 5. Plot of dominant eigenvectors. (A) The dominant eigenvector $v_1$ of the system matrix $A$. (B) The dominant eigenvector $\tilde{v}_1$ of the sample covariance matrix with sample size $N = 5$ for 10 samples. (C) The dominant eigenvector $\tilde{v}_1$ of the sample covariance matrix with sample size $N = 100$ for 10 samples.

Figure 6 shows each row of the optimal feedback gain $K^\ast$, obtained by using $v_1$ based on Corollary 1. Note that the feedback gain is a $2 \times n$ matrix since the system has 2 inputs. Figure 7 shows each row of the estimated optimal feedback gain $\tilde{K}^\ast$ obtained by using $\tilde{v}_1$ with sample size $N = 5$ for 10 samples. Figure 8 shows each row of the simplified feedback gain $\tilde{K}_s^\ast$, obtained by setting all but the top 2 elements in absolute value of each row of $\tilde{K}^\ast$ to zero, with sample size $N = 5$ for 10 samples. Hence, each row of $\tilde{K}_s^\ast$ has only two non-zero elements. For 9 out of the 10 samples, the 6th and 7th elements are the non-zero elements in both rows of $\tilde{K}_s^\ast$; that is, the indices of the non-zero elements coincide with the indices of the top 2 elements in absolute value of each row of $K^\ast$. For the remaining sample, the 4th and 6th elements are the non-zero elements in each row.

Figure 6. Plot of the optimal feedback gain $K^\ast$.
Figure 7. Plot of the estimated optimal feedback gain $\tilde{K}^\ast$ obtained by using $\tilde{v}_1$ with sample size $N = 5$ for 10 samples.
Figure 8. Plot of the simplified feedback gain $\tilde{K}_s^\ast$, where each row has only two non-zero elements based on $\tilde{K}^\ast$.

Figure 9 shows the dominant eigenvalue of the system matrix in four cases: just before the bifurcation without any input; with the optimal state feedback $(B^\ast, K^\ast)$; with the estimated optimal state feedback $(\tilde{B}^\ast, \tilde{K}^\ast)$ with sample size $N = 5$ for 10 samples; and with the simplified state feedback $(\tilde{B}^\ast, \tilde{K}_s^\ast)$ with sample size $N = 5$ for 10 samples. In every case with state feedback, the dominant eigenvalue is shifted away from zero; that is, the system is re-stabilized. Note that, in the case of the simplified state feedback, the dominant eigenvalue closest to zero corresponds to the sample with $\tilde{B}^\ast = [e_6 \; e_4]$.

Figure 9. Dominant eigenvalue of the system matrix before and after applying state feedback.
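In a simulation where $A$ is available, as it is here via linearization, this re-stabilization can be verified directly from the closed-loop spectrum; a sketch:

```python
import numpy as np

def dominant_eigenvalue(A, B=None, K=None):
    """Largest real part of the spectrum of A (open loop) or A + B K."""
    M = A if B is None else A + B @ K
    return np.linalg.eigvals(M).real.max()

# Hypothetical check: the closed-loop dominant eigenvalue should sit about
# lam_shift further into the left half-plane than the open-loop one.
# print(dominant_eigenvalue(A), dominant_eigenvalue(A, B, K))
```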

Figure 10 shows the state trajectory of the system with the simplified state feedback in the case where the dominant eigenvalue is closest to zero. Here, the bifurcation parameter $c$ changes slightly after $t = T\Delta t \, (= 1000)$. Even though $c$ changes, the equilibrium point does not change; that is, a bifurcation does not occur. Therefore, the network system is prevented from falling into a deteriorating stage.

Figure 10. State trajectory of the system with the simplified state feedback when the bifurcation parameter $c$ changes. $c$ is set to $2.5417$ from $t = 0$ to $t = T\Delta t$ and to $2.5418$ from $t = (T+1)\Delta t$ to $t = 2T\Delta t$.

These results indicate that the simplified design of the input assignment and feedback gain using the dominant eigenvector of the sample covariance matrix is practically effective, although the effect of the simplification on the eigenvalues other than the dominant one remains to be analyzed theoretically.

5 CONCLUSIONS

In this article, we have proposed a method for designing the optimal input assignment and feedback gain for re-stabilizing an undirected network system based on pole placement, under the condition that the system matrix is unknown and only HDLSS data of the state are available. An error evaluation for the simplification of the optimal design has also been presented. Furthermore, numerical simulations have shown the effectiveness of the simplified design method.

This article assumes that the network of the system is undirected and that the eigenvalues of the system matrix are distinct. Since a complex network system is not always undirected and its eigenvalues are not always distinct, we need to extend our method to directed networks and to repeated eigenvalues. In addition, a more detailed error analysis of the eigenvalue shift should be addressed.

CONFLICT OF INTEREST STATEMENT

The authors declare that they have no conflicts of interest.

FUNDING INFORMATION

This research was supported by the Moonshot Research and Development Program JPMJMS2021.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available from the corresponding author upon reasonable request.
