Linear Systems of Ordinary Differential Equations with Constant Coefficients #

Introduction #

Probably the most thoroughly studied class of mathematical models is linear systems of differential equations with constant coefficients (LSDECCs). While most of the world is nonlinear, when a linear model is applicable, we should take as much advantage of it as possible. This set of notes walks through the practical aspects of solving linear systems of differential equations with constant coefficients. The emphasis is on how to solve these problems, and in that spirit, proofs are left to the references.

The world of homogeneous LSDECCs is largely split into two categories: diagonalizable matrices and non-diagonalizable matrices. If a matrix is diagonalizable, it has a full set of linearly independent eigenvectors; the simplest case, treated first, is real, distinct eigenvalues. These problems are both easy to identify and simple to solve. Non-diagonalizable matrices are a little trickier to solve, but use the same intuition as the diagonalizable ones. However, adding a forcing function to the system can quickly make these systems impossible to solve in closed form. In this work, we will first address diagonalizable matrices. Then the methods will be extended to non-diagonalizable matrices, and finally two techniques to handle non-homogeneous LSDECCs will be discussed.

Real Distinct Eigenvalues #

The basic idea is that we want to project the system into a space that makes it easy to solve, then solve the easy problem and project back into our original space (see Figure 1). In “mathy” language, that means we want to project the system into a new basis (eigenspace) such that the dynamics lie nicely along the axes. So we turn to eigenvalues and eigenvectors. Things get a little tricky when eigenvalues have multiplicity or are complex. In those cases, we create a “subspace” within the eigenbasis such that we still have orthogonality, but for now we will assume our matrices are nicely diagonalizable.

Consider the equation

\overline{x}'(t) = \underline{A}\overline{x}(t), \;\; \overline{x}(0) = \overline{b}

where

\overline{x}(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix}

and

\underline{A} = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{bmatrix}

where a_{ji}\in\mathbb{C}. If the system of equations were a single scalar equation, the solution would be an exponential. For a system of equations, that is also the case:

\underline{X}(t) = e^{\underline{A}t}

where \underline{X}(t) is called the fundamental solution matrix. Then the solution to the IVP is,

\overline{x}(t) = \underline{X}(t)\overline{b}

Now the question is: what is the exponential of a matrix? There are many different proofs and ways of answering that question that won’t be addressed here. In the case that \underline{A} has real, distinct eigenvalues, here is what to do.

\underline{A} = \underline{S}\underline{J}\underline{S}^{-1} \Rightarrow \underline{J} = \underline{S}^{-1}\underline{A}\underline{S}

where \underline{J} = \mathrm{diag}(\lambda_1, \lambda_2, ..., \lambda_n) is a diagonal matrix of eigenvalues and \underline{S} is the related eigenbasis (i.e., each eigenvector is a column of \underline{S}). Thus,

\underline{X}(t) = e^{\underline{A}t} = \underline{S}\;e^{\underline{J}t}\;\underline{S}^{-1}

where e^{\underline{J}t} = \mathrm{diag}(e^{\lambda_1 t}, e^{\lambda_2 t}, ..., e^{\lambda_n t}). Thus, the solution to the IVP is

\overline{x}(t) = e^{\underline{A}t}\overline{b} = \underline{S}\;e^{\underline{J}t}\;\underline{S}^{-1}\overline{b}.
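To make the recipe concrete, here is a minimal numerical sketch of the diagonalization approach. It assumes NumPy and SciPy are available; the matrix \underline{A} and initial condition \overline{b} below are made up purely for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Made-up example with real, distinct eigenvalues (3 and -1).
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
b = np.array([1.0, 0.0])

lam, S = np.linalg.eig(A)      # J = diag(lam); columns of S are the eigenvectors

def x(t):
    """Solve x' = A x, x(0) = b via x(t) = S e^{Jt} S^{-1} b."""
    return S @ np.diag(np.exp(lam * t)) @ np.linalg.solve(S, b)

# Sanity check against a direct matrix exponential.
t = 0.7
print(np.allclose(x(t), expm(A * t) @ b))   # True
```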

Hermitian Matrices #

The derivation above assumes that \underline{A} is diagonalizable. Unfortunately, not every system is diagonalizable, so it is pragmatic to know when a system is guaranteed to be diagonalizable. The study of diagonalizable systems falls under spectral theory, which often focuses on self-adjoint operators. While that field is too large to cover here, a little bit of it is quite helpful.

Definition - Hermitian Adjoint The Hermitian adjoint operator ( \dagger ) is the complex conjugate transpose:

\underline{H}^\dagger = (\underline{H}^T)^* = (\underline{H}^*)^T

Definition - Hermitian Matrix A matrix \underline{H} is Hermitian if it is equal to its Hermitian adjoint:

\underline{H} = \underline{H}^\dagger.

For example,

\underline{A} = \begin{bmatrix} 1 & i2 & 3 \\ -i2 & -9 & -i3 \\ 3 & i3 & 2 \end{bmatrix}

is Hermitian, but

\underline{A} = \begin{bmatrix} i1 & i2 & 3 \\ -i2 & -9 & -i3 \\ 3 & i3 & 2 \end{bmatrix}

is not. This leads us to an important theorem.

Theorem 1.1 - If a matrix, \underline{A}, is Hermitian, then it is guaranteed to be diagonalizable and all of its eigenvalues are real (this is the spectral theorem for Hermitian matrices). Note that the converse does not hold: a matrix can be diagonalizable without being Hermitian.

Corollary 1.1 - The eigenbasis of a Hermitian matrix can be chosen to be a unitary matrix.

Definition - Unitary Matrix A unitary matrix is a matrix whose inverse is equal to its Hermitian adjoint. That is, \underline{U} is unitary if and only if

\underline{U}^{-1} = \underline{U}^\dagger

The theorem and corollary make identifying diagonalizable systems and computing their solutions much easier. For reference, a Hermitian matrix is a self-adjoint operator. Self-adjoint operators are important in the Dirac-von Neumann formulation of quantum mechanics, where observables are self-adjoint operators; in particular, the Hamiltonian operator is self-adjoint.
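As a quick aside, the Hermitian property (and the resulting real eigenvalues) is easy to check numerically. A minimal sketch, assuming NumPy, using the first example matrix from above:

```python
import numpy as np

# The Hermitian example matrix from above.
H = np.array([[1,    2j,   3],
              [-2j,  -9, -3j],
              [3,    3j,   2]], dtype=complex)

def is_hermitian(M, tol=1e-12):
    """A matrix is Hermitian if it equals its conjugate transpose."""
    return np.allclose(M, M.conj().T, atol=tol)

print(is_hermitian(H))          # True
print(np.linalg.eigvalsh(H))    # eigvalsh assumes a Hermitian input and returns real eigenvalues
```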

Alternative Formulation #

MT note: I think this format is more common among physicists and engineers than mathematicians.

Sometimes authors will move the transform, \underline{S}, around in these equations, so it can get a little tricky and requires some care. One often useful formulation (which only works well for diagonalizable matrices) starts by recognizing that there are two different projections in the solution equation. First, the exponential solution in the eigenbasis is projected back into the original basis with \underline{S}e^{\underline{J}t}. Second, the initial conditions, \overline{b}, are projected into the eigenbasis with \underline{S}^{-1}\overline{b}. So, now consider setting initial conditions in the eigenbasis as \overline{c} = \underline{S}^{-1}\overline{b}. Then the solution may be written as:

\overline{x}(t) = c_1\overline{u}_1e^{\lambda_1 t} + c_2\overline{u}_2e^{\lambda_2 t} + ... + c_n\overline{u}_ne^{\lambda_n t}

where \overline{u}_n is the eigenvector associated with eigenvalue \lambda_n.
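A minimal sketch of this modal form, assuming NumPy and reusing the made-up matrix and initial condition from the earlier sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
b = np.array([1.0, 0.0])

lam, S = np.linalg.eig(A)      # columns of S are the eigenvectors u_k
c = np.linalg.solve(S, b)      # initial conditions projected into the eigenbasis

def x(t):
    """x(t) = sum_k c_k u_k exp(lambda_k t)"""
    return sum(c[k] * S[:, k] * np.exp(lam[k] * t) for k in range(len(lam)))

print(np.allclose(x(0.0), b))  # True: the modal sum recovers the initial condition at t = 0
```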

Examples #

Example 1 #

Todo

Example 2 #

Use a zero eigenvalue.

Real Eigenvalues with Multiplicity #

Now suppose we have real eigenvalues, but they are repeated. If a repeated eigenvalue does not come with a distinct eigenvector for each repetition, the matrix is not diagonalizable. But never fear, we can reduce the matrix to Jordan canonical form. With distinct eigenvalues, the idea is to project into an orthogonal space that decouples the dimensions. Here the idea is the same, except that with repeated eigenvalues some dimensions are coupled, so the idea is to create a subspace for each set of coupled dimensions such that, within the subspace, the projections of the coupled dimensions are orthogonal. Before we discuss the particulars, we must build a little background.

Matrix Exponential #

In the previous discussion, the matrix exponential of the diagonal matrix \underline{J} was given as e^{\underline{J}t} = \mathrm{diag}(e^{\lambda_1 t}, e^{\lambda_2 t}, ..., e^{\lambda_n t}). Here, we would like to give some intuition about where that comes from. Consider the scalar a and the exponential e^a. The exponential can be expanded with a Maclaurin series as

e^a = \sum_{k=0}^\infty\frac{a^k}{k!}

Notice, this definition only depends upon multiplication which is well defined for square matrices. Thus, we can use the Maclaurin series to define the exponential.

Definition - Matrix Exponential Given the matrix \underline{A} \in \mathbb{C}^{n\times n} (see note 1), the exponential of \underline{A} is defined as

e^{\underline{A}} = \sum_{k=0}^\infty\frac{\underline{A}^k}{k!}.

Notice that when \underline{A} is diagonal, the series becomes

\begin{align*}
e^{\underline{A}} &= \sum_{k=0}^\infty\frac{\underline{A}^k}{k!} \\
&= \sum_{k=0}^\infty\frac{1}{k!}\begin{bmatrix} a_1 & 0 & \dots & 0 \\ 0 & a_2 & \dots & 0 \\ & & \ddots & \\ 0 & 0 & \dots & a_n \end{bmatrix}^k \\
&= \begin{bmatrix} \sum_{k=0}^\infty\frac{a_1^k}{k!} & 0 & \dots & 0 \\ 0 & \sum_{k=0}^\infty\frac{a_2^k}{k!} & \dots & 0 \\ & & \ddots & \\ 0 & 0 & \dots & \sum_{k=0}^\infty\frac{a_n^k}{k!} \end{bmatrix} \\
&= \mathrm{diag}(e^{a_1}, e^{a_2}, ..., e^{a_n})
\end{align*}

as given before. However, in most cases, the summation is infinite and nasty, but there are a few properties to help out.

The matrix exponential has the following properties

  1. e^{\underline{A}t} is defined \forall t\in\mathbb{R} and ||e^{\underline{A}t}|| \leq e^{||\underline{A}||\,|t|}.
  2. Given \underline{A}\underline{B} = \underline{B}\underline{A}, e^{(\underline{A}+\underline{B})} = e^{\underline{A}}e^{\underline{B}} = e^{\underline{B}}e^{\underline{A}}.
  3. e^{\underline{A}(t+s)} = e^{\underline{A}t}e^{\underline{A}s} = e^{\underline{A}s}e^{\underline{A}t}\;\forall t,s\in\mathbb{R}.
  4. e^{-\underline{A}t} exists for all t\in\mathbb{R}.
  5. \frac{d}{dt}e^{\underline{A}t} = \underline{A}e^{\underline{A}t} = e^{\underline{A}t}\underline{A}.

While most of these properties follow from the definition of the scalar exponential, the biggest difference is property (2), which requires \underline{A} and \underline{B} to commute. Forgetting that requirement can easily lead to the wrong answer.
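A minimal numerical sketch of the definition and of property (2), assuming NumPy and SciPy; the matrices are made up for illustration:

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, terms=30):
    """Truncated Maclaurin series e^A ~ sum_k A^k / k! (illustrative only, not production-grade)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # builds A^k / k! incrementally
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])

print(np.allclose(expm_series(A), expm(A)))          # True: the series converges quickly here
print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # False here: A and B do not commute
```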

Jordan Canonical Form #

Consider the equation

\overline{x}'(t) = \underline{A}\overline{x}(t), \;\; \overline{x}(0) = \overline{b}

where \underline{A} \in \mathbb{R}^{n\times n}, \overline{x}(t): \mathbb{R} \rightarrow \mathbb{R}^n and \overline{b} is the initial condition. Suppose \underline{A} has m unique eigenvalues, \lambda_j, such that \lambda_j is repeated n_j times. Thus, n = \sum_{j=1}^m n_j. Repeated eigenvalues mean that n_j dimensions are coupled and cannot be separated through projection. We can, however, create a subspace to project those n_j dimensions into. First, we must find the generalized eigenvectors. The set of n_j generalized eigenvectors associated with eigenvalue \lambda_j is given by

(\underline{A}-\lambda_j\underline{I})\overline{u}_{j,k} = \overline{u}_{j,k-1}

where \overline{u}_{j,0} = 0, j = 1,2,...,m and k = 1,2,...,n_j. The trivial case is ignored, so \overline{u}_{j,k}\neq 0\;\forall\;k=1,2,...,n_j. Let \underline{P}_j: n\times n_j be the projection from the system basis into the basis formed by the generalized eigenvectors associated with eigenvalue \lambda_j, defined by

\underline{P}_j = \big[\overline{u}_{j,1}, \overline{u}_{j,2}, ..., \overline{u}_{j,n_j} \big].

Thus, we can form a generalized eigenbasis with the \underline{P}_j as

\underline{S} = \big[\underline{P}_1, \underline{P}_2, ..., \underline{P}_m\big]

where \underline{S} is an n\times n matrix. Using the generalized eigenbasis, the matrix \underline{A} can be decomposed as

\underline{A} = \underline{S}\underline{J}\underline{S}^{-1} \Rightarrow \underline{J} = \underline{S}^{-1}\underline{A}\underline{S}

where \underline{J} = \mathrm{diag}(\underline{B}_1, \underline{B}_2, ..., \underline{B}_m) is the n \times n Jordan matrix with Jordan blocks \underline{B}_j: n_j\times n_j defined by

\underline{B}_j = \lambda_j\underline{I}+\underline{N} = \begin{bmatrix} \lambda_j & 1 & 0 & \dots & 0 \\ 0 & \lambda_j & 1 & \dots & 0 \\ & & \ddots & \ddots & \\ 0 & 0 & \dots & \lambda_j & 1 \\ 0 & 0 & \dots & 0 & \lambda_j \end{bmatrix}

where \underline{I} is the n_j \times n_j identity matrix and \underline{N} is the n_j \times n_j nilpotent matrix given by

\underline{N} = \begin{bmatrix} 0 & 1 & 0 & \dots & 0 & 0 \\ 0 & 0 & 1 & \dots & 0 & 0 \\ & & & \ddots & & \\ 0 & 0 & 0 & \dots & 0 & 1 \\ 0 & 0 & 0 & \dots & 0 & 0 \end{bmatrix}

Before continuing, let’s clarify the dimensions and indices of everything.

  • n - the dimension of \underline{A} and therefore of \underline{J} and \underline{S}.
  • m - the number of unique eigenvalues of \underline{A}.
  • n_j - the multiplicity of the j^{th} eigenvalue of \underline{A}.
  • And n = \sum_{j=1}^m n_j.

Further notice

  • \underline{P}_j is the set of generalized eigenvectors associated with \lambda_j.
  • \underline{B}_j is the Jordan block associated with \lambda_j.
  • If n_j=1 for all j=1,2,...,m then the Jordan matrix has the same form as when \underline{A} is diagonalizable.

By applying the decomposition of \underline{A}, the fundamental solution matrix is given by

\underline{X}(t) = e^{\underline{A}t} = \underline{S}\;e^{\underline{J}t}\;\underline{S}^{-1}

and the solution to the IVP is

\overline{x}(t) = e^{\underline{A}t}\overline{b} = \underline{S}\;e^{\underline{J}t}\;\underline{S}^{-1}\overline{b}.

The exponential e^{\underline{J}t} = \mathrm{diag}(e^{\underline{B}_1t}, e^{\underline{B}_2t}, ..., e^{\underline{B}_mt}) where

\begin{align*}
e^{\underline{B}_jt} &= e^{(\lambda_j\underline{I}+\underline{N})t} \\
&= e^{\lambda_j t\underline{I}}e^{\underline{N}t} \\
&= e^{\lambda_jt}\sum_{l=0}^{n_j-1}\frac{t^l\underline{N}^l}{l!} \\
&= e^{\lambda_jt}\begin{bmatrix} 1 & t & \frac{t^2}{2} & \dots & \frac{t^{n_j-1}}{(n_j-1)!} \\ 0 & 1 & t & \dots & \frac{t^{n_j-2}}{(n_j-2)!} \\ & & & \ddots & \\ 0 & 0 & 0 & \dots & 1 \end{bmatrix}
\end{align*}

since \lambda_j\underline{I} and \underline{N} commute. Notice that the exponential summation is finite because \underline{N} is nilpotent.
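A minimal sketch verifying that the series terminates for a single Jordan block, assuming NumPy and SciPy; the eigenvalue, block size, and time below are chosen arbitrarily for illustration:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

lam, n, t = 3.0, 3, 0.5
N = np.diag(np.ones(n - 1), k=1)       # nilpotent matrix: ones on the superdiagonal
B = lam * np.eye(n) + N                # Jordan block B = lambda*I + N

# e^{Bt} = e^{lambda t} * sum_{l=0}^{n-1} (tN)^l / l!  -- the sum stops because N^n = 0
series = sum(np.linalg.matrix_power(t * N, l) / factorial(l) for l in range(n))
print(np.allclose(np.exp(lam * t) * series, expm(B * t)))   # True
```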

Whew, that is a lot. So what have we done? The matrix \underline{A} cannot be diagonalized, so we have created a subspace for each set of coupled dimensions (in which the coupling shows up as polynomial terms in t), which all together form a generalized eigenbasis. We then use the generalized eigenvectors to project into the generalized eigenbasis, solve the system there, and project back into the original basis.

Complex Eigenvalues - Real Jordan Canonical Form #

If \underline{A} is real, then \underline{J} and \underline{S} may have complex entries, but any complex eigenvalues will occur in conjugate pairs; we can use this fact to simplify the treatment of complex eigenvalues with multiplicity using the real Jordan canonical form.

Consider the complex eigenvalue pair \lambda_{ja}, \lambda_{jb}\in\mathbb{C} such that \lambda_{ja} = \lambda_{jb}^* = \alpha_j + i\beta_j, each with multiplicity n_j (e.g., for n_j=3, \lambda_{ja} appears three times and \lambda_{jb} appears three times). Rather than using two Jordan blocks, \underline{B}_{ja} and \underline{B}_{jb}, use one block, \underline{B}_j^{Re} \in \mathbb{R}^{2n_j \times 2n_j}, defined as

\underline{B}_j^{Re} = \begin{bmatrix} \underline\Gamma_j & \underline{I} & 0 & \dots & 0 & 0 \\ 0 & \underline\Gamma_j & \underline{I} & \dots & 0 & 0 \\ & & & \ddots & & \\ 0 & 0 & 0 & \dots & \underline\Gamma_j & \underline{I} \\ 0 & 0 & 0 & \dots & 0 & \underline{\Gamma}_j \end{bmatrix}

where \underline{I} is the 2\times2 identity matrix and

\underline\Gamma_j = \begin{bmatrix} \alpha_j & -\beta_j \\ \beta_j & \alpha_j \end{bmatrix}

Then the Jordan matrix uses \underline{B}_j for real eigenvalues and \underline{B}_j^{Re} for complex eigenvalue pairs.
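The 2\times2 block \underline\Gamma_j has a nice interpretation: its exponential is a decaying (or growing) rotation. A minimal sketch, assuming SciPy, with \alpha and \beta chosen arbitrarily:

```python
import numpy as np
from scipy.linalg import expm

alpha, beta, t = -0.5, 2.0, 1.3
Gamma = np.array([[alpha, -beta],
                  [beta,   alpha]])

rotation = np.array([[np.cos(beta * t), -np.sin(beta * t)],
                     [np.sin(beta * t),  np.cos(beta * t)]])

# e^{Gamma t} = e^{alpha t} times a rotation by the angle beta*t
print(np.allclose(expm(Gamma * t), np.exp(alpha * t) * rotation))   # True
```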

Examples #

Example 1 #

Solve the homogeneous systems of differential equations

\begin{cases} x_1' &= 5x_1-2x_2 \\ x_2' &= 2x_1+x_2 \end{cases}

for initial conditions

\overline{x}(0) = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}.

Solution From the system of differential equations, we find

\underline{A} = \begin{bmatrix} 5 & -2 \\ 2 & 1 \end{bmatrix}

Then find the eigenvalues:

\begin{align*}
\det(\underline{A}-\lambda\underline{I}) &= \begin{vmatrix} 5-\lambda & -2 \\ 2 & 1-\lambda \end{vmatrix} \\
&= \lambda^2-6\lambda+9 \\
&= (\lambda-3)^2 \\
&= 0
\end{align*}

Thus, \lambda = 3 with multiplicity 2. Next, find the generalized eigenvectors:

\begin{align*}
&& (\underline{A}-\lambda\underline{I})\overline{u}_1 &= 0 \\
&\Rightarrow& \begin{bmatrix} 2 & -2 \\ 2 & -2 \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} &= 0 \\
&\Rightarrow& v_1 &= v_2 \\
&\Rightarrow& \overline{u}_1 &= \begin{bmatrix} 1 \\ 1 \end{bmatrix}
\end{align*}

and

\begin{align*}
&& (\underline{A}-\lambda\underline{I})\overline{u}_2 &= \overline{u}_1 \\
&\Rightarrow& \begin{bmatrix} 2 & -2 \\ 2 & -2 \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} &= \begin{bmatrix} 1 \\ 1 \end{bmatrix} \\
&\Rightarrow& v_1 &= \frac{1}{2} + v_2 \\
&\Rightarrow& \overline{u}_2 &= \begin{bmatrix} \frac{1}{2} \\ 0 \end{bmatrix}
\end{align*}

Then the generalized eigenbasis is

\underline{S} = \begin{bmatrix} 1 & \frac{1}{2} \\ 1 & 0 \end{bmatrix}

and the inverse (using Gaussian elimination)

\underline{S}^{-1} = \begin{bmatrix} 0 & 1 \\ 2 & -2 \end{bmatrix}.

From theory, we know the Jordan matrix should have one block and be

\underline{J} = \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix}

but if we forget (or wish to verify), we can simply use the generalized eigenbasis to find

\underline{J} = \underline{S}^{-1}\underline{A}\;\underline{S}.

Thus, the fundamental solution matrix is

\begin{align*}
\underline{X}(t) &= \underline{S}e^{\underline{J}t}\underline{S}^{-1} \\
&= e^{3t}\begin{bmatrix} 2t+1 & -2t \\ 2t & 1-2t \end{bmatrix}
\end{align*}

and the solution to the IVP is

\begin{align*}
\overline{x}(t) &= \underline{X}(t)\overline{b} \\
&= e^{3t}\begin{bmatrix} 2(b_1-b_2)t+b_1 \\ 2(b_1-b_2)t+b_2 \end{bmatrix}
\end{align*}
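The hand computation is easy to double-check numerically; a minimal sketch, assuming SciPy, with an arbitrary initial condition:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[5.0, -2.0],
              [2.0,  1.0]])
b = np.array([2.0, -1.0])      # arbitrary initial condition for the check
t = 0.4

numeric = expm(A * t) @ b

# Closed-form solution derived above: x(t) = e^{3t} [2(b1 - b2)t + b1, 2(b1 - b2)t + b2]
closed = np.exp(3 * t) * np.array([2 * (b[0] - b[1]) * t + b[0],
                                   2 * (b[0] - b[1]) * t + b[1]])

print(np.allclose(numeric, closed))   # True
```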

Example 2 #

Multiplicity of three

Example 3 #

Complex eigenvalues and the real form

Non-Homogeneous Linear Systems of ODEs #

Now we can make things more fun by adding in forcing functions. Consider the non-homogeneous system of differential equations

\overline{x}' = \underline{A}\overline{x} + \overline{g}(t)

where \underline{A}\in\mathbb{R}^{n\times n}, \overline{x}: \mathbb{R} \rightarrow \mathbb{R}^n and \overline{g}: \mathbb{R} \rightarrow \mathbb{R}^n. The function \overline{g}(t) is a forcing function. Just as for scalar ODEs, there are largely two methods for handling forcing functions: the method of undetermined coefficients (i.e., guessing) and the method of variation of parameters. Unfortunately, the easier method of undetermined coefficients is far more limited, and sometimes the difficult (occasionally computationally intractable) method of variation of parameters is the only option.

Method of Undetermined Coefficients #

The method of undetermined coefficients (i.e., guessing) can be quite effective for scalar differential equations, but is rather limited for systems of differential equations. The only guesses that work out nicely are for constants, positive integer powers of t, exponential functions and \sin or \cos.

First, (as always) solve the homogeneous system. This is especially important for exponential functions and \sin or \cos because one or more of the eigenvalues of \underline{A} may conflict with the forcing function. Then the form of the particular solution can be “guessed”, now with vectors for coefficients, and substituted back into the equation to find the coefficients. Just as for scalar ODEs, the general solution is the sum of the homogeneous and particular solutions

\overline{x} = \overline{x}^{(h)} + \overline{x}^{(p)}.

Constants #

The easiest guess is when \overline{g}(t) is constant. The appropriate guess is

\overline{x}^{(p)} = \overline{a}

where \overline{a}\in\mathbb{R}^n. Then substitute the particular solution into the non-homogeneous equation to find the constants \overline{a}.
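Substituting the constant guess gives 0 = \underline{A}\overline{a} + \overline{g}, so when \underline{A} is invertible, \overline{a} = -\underline{A}^{-1}\overline{g}. A minimal sketch, assuming NumPy, with made-up numbers:

```python
import numpy as np

A = np.array([[5.0, -2.0],
              [2.0,  1.0]])
g = np.array([1.0, 3.0])       # constant forcing, made up for illustration

a = -np.linalg.solve(A, g)     # particular solution x_p(t) = a, a constant vector
print(A @ a + g)               # ~ [0, 0], confirming x_p' = 0 = A x_p + g
```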

Positive Powers of t #

Positive integer powers of t are also not too tricky. If \overline{g}(t) is a polynomial of order r, then the appropriate guess for the form of the particular solution is

\overline{x}^{(p)} = \sum_{k=0}^r\overline{a}_kt^k

where \overline{a}_k\in\mathbb{R}^n are the constants to solve for by substitution.
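As a concrete illustration (a minimal sketch, assuming NumPy; the matrix and forcing coefficients are made up), for a first-order forcing \overline{g}(t) = \overline{g}_0 + \overline{g}_1 t, substituting the guess \overline{x}^{(p)} = \overline{a}_0 + \overline{a}_1 t and matching powers of t gives the coefficient equations solved below:

```python
import numpy as np

# Matching powers of t in a1 = A(a0 + a1*t) + g0 + g1*t gives
#   t^1:  0 = A a1 + g1     ->  a1 = -A^{-1} g1
#   t^0:  a1 = A a0 + g0    ->  a0 = A^{-1} (a1 - g0)
A = np.array([[5.0, -2.0],
              [2.0,  1.0]])
g0 = np.array([1.0, 0.0])
g1 = np.array([0.0, 2.0])

a1 = -np.linalg.solve(A, g1)
a0 = np.linalg.solve(A, a1 - g0)

t = 1.7                                        # spot check at an arbitrary time
x_p = a0 + a1 * t
print(np.allclose(a1, A @ x_p + g0 + g1 * t))  # True: x_p' = a1 equals A x_p + g(t)
```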

Exponential and Sinusoid Functions #

It’s usually easiest to use Euler’s formula to make \sin and \cos into complex exponential functions, so we will consider exponentials and sinusoids together. For an exponential forcing function of the form \overline{g}(t) = \overline{k}e^{\alpha t}, the appropriate guess for the particular solution is

\overline{x}^{(p)} = \overline{a}t^re^{\alpha t}

where \overline{a}\in\mathbb{R}^n are the coefficients to back-solve for, and r=n_j, where n_j is the multiplicity of the eigenvalue of the system such that \lambda_j = \alpha. The basic idea is that we must ensure the particular solution is linearly independent of the homogeneous solution, and we use powers of t to do that. Let’s look at a few cases to understand this better. In the case that \alpha is not an eigenvalue of the system (i.e., n_j=0), then r=0 and

\overline{x}^{(p)} = \overline{a}e^{\alpha t}

In the case that \alpha is an eigenvalue of the system with multiplicity n_j=1, then

\overline{x}^{(p)} = \overline{a}te^{\alpha t}

Notice that multiplying by t ensures the particular solution is linearly independent of the homogeneous solution and does not interfere with it. When \alpha conflicts with an eigenvalue of the homogeneous solution of higher multiplicity, we simply use a polynomial power one higher than that of the associated Jordan block to ensure independence.

Method of Variation of Parameters #

Variation of parameters is guaranteed to be valid; however, the integral often cannot be evaluated in closed form. Let \underline{A}\in\mathbb{R}^{n\times n} and \overline{g}(t): \mathbb{R} \rightarrow \mathbb{R}^n be a continuous map from time into \mathbb{R}^n. Then for every (t_0, \overline{x}_0)\in\mathbb{R}^{n+1}, the initial value problem

\overline{x}'(t) = \underline{A}\overline{x}(t) + \overline{g}(t), \;\; \overline{x}(t_0) = \overline{x}_0

has a unique global solution \overline{x}(t, t_0, \overline{x}_0) given by the formula

\overline{x}(t, t_0, \overline{x}_0) = e^{\underline{A}(t-t_0)}\overline{x}_0 + \int_{t_0}^te^{\underline{A}(t-s)}\overline{g}(s)ds.
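When the integral has no closed form, it can still be evaluated numerically. A minimal sketch, assuming SciPy; the matrix, forcing function, and initial data are made up for illustration:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec, solve_ivp

A = np.array([[0.0,  1.0],
              [-2.0, -3.0]])
g = lambda t: np.array([np.sin(t), 1.0 / (1.0 + t**2)])
t0, x0 = 0.0, np.array([1.0, 0.0])

def x(t):
    """Variation of parameters: x(t) = e^{A(t-t0)} x0 + int_{t0}^{t} e^{A(t-s)} g(s) ds."""
    integral, _ = quad_vec(lambda s: expm(A * (t - s)) @ g(s), t0, t)
    return expm(A * (t - t0)) @ x0 + integral

# Cross-check against a general-purpose ODE solver.
t_end = 2.0
sol = solve_ivp(lambda t, y: A @ y + g(t), (t0, t_end), x0, rtol=1e-9, atol=1e-12)
print(x(t_end), sol.y[:, -1])   # the two results should agree to solver tolerance
```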

Examples #

TODO

Notes #

  1. Actually, \underline{A} can be any linear operator on a finite-dimensional normed vector space.

Ref #

[1] Sideris 2013

[2] Kreyszig 2011

[3] Schaeffer 2016

[4] Hirsch 2013