ODE Coupled

This post is adapted from "ODE-Coupled" by the lovely Prof. J. Kazdan for UPenn MTH 312 in Spring 2013.

intro

The given matrix \(A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\) represents an orthogonal reflection across the line \(x_1 = x_2\) in a two-dimensional space.

When we apply this reflection, the points that lie on the line \(x_1 = x_2\) remain unchanged, while the points on the line \(x_2 = -x_1\) are sent to their negatives.

To understand the effect of this reflection on specific vectors, let's consider two vectors: \(\vec{v}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}\) and \(\vec{v}_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}\).

When we apply the matrix \(A\) to \(\vec{v}_1\), we find that \(A\vec{v}_1\) equals \(\vec{v}_1\) itself. Similarly, \(A\vec{v}_2\) results in \(-\vec{v}_2\), which is the negation of \(\vec{v}_2\).

These observations reveal that \(\vec{v}_1\) and \(\vec{v}_2\) are eigenvectors of the matrix \(A\), each associated with a corresponding eigenvalue. For \(\vec{v}_1\), the eigenvalue is \(\lambda_1 = 1\), while for \(\vec{v}_2\), the eigenvalue is \(\lambda_2 = -1\).

These eigenvectors form a basis for the two-dimensional vector space \(\mathbb{R}^2\), which means that any vector in \(\mathbb{R}^2\) can be expressed as a linear combination of \(\vec{v}_1\) and \(\vec{v}_2\). This basis provides a useful framework for solving problems and analyzing transformations associated with the matrix \(A\).
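As a quick sanity check, here is a small numerical sketch (assuming numpy is available) that applies \(A\) to \(\vec{v}_1\) and \(\vec{v}_2\) and confirms the eigenvalues \(\pm 1\):

```python
import numpy as np

# Reflection across the line x1 = x2.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

v1 = np.array([1.0, 1.0])    # expected eigenvalue +1
v2 = np.array([1.0, -1.0])   # expected eigenvalue -1

print(A @ v1)   # [ 1.  1.]  -> A v1 =  v1
print(A @ v2)   # [-1.  1.]  -> A v2 = -v2

# numpy's own eigendecomposition agrees: eigenvalues 1 and -1
# (possibly in a different order, with normalized eigenvectors).
w, V = np.linalg.eig(A)
print(w)
```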

To illustrate the concepts, let's solve the system of differential equations:

\[ \begin{aligned} \frac{dx_{1}}{dt} &= x_{2} \\ \frac{dx_{2}}{dt} &= x_{1} \end{aligned} \]

We can rewrite this system in vector form as:

\[ \frac{d\vec{x}}{dt} = A\vec{x} \tag{1} \]

where

\[ \vec{x}(t) = \begin{pmatrix} x_{1}(t) \\ x_{2}(t) \end{pmatrix} \]

and

\[ A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \]

The system represents a set of coupled differential equations because the rate of change of \(x_{1}\) depends on \(x_{2}\), and vice versa.

To solve this system, we need to find the function \(\vec{x}(t)\) that satisfies the given equations. We are also given initial conditions: \(x_{1}(0) = 4\) and \(x_{2}(0) = 0\), which specify the values of \(x_{1}\) and \(x_{2}\) at the initial time \(t = 0\).

By solving the system of equations, we can determine how \(x_{1}\) and \(x_{2}\) evolve over time.
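Before solving anything by hand, a quick numerical integration gives a feel for how \(x_1\) and \(x_2\) grow. This is only an illustrative sketch, assuming numpy and scipy are available:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def rhs(t, x):
    # dx1/dt = x2, dx2/dt = x1, i.e. dx/dt = A x.
    return A @ x

# Integrate from t = 0 to t = 2 with x1(0) = 4, x2(0) = 0.
sol = solve_ivp(rhs, (0.0, 2.0), [4.0, 0.0])
print(sol.t[-1], sol.y[:, -1])   # numerical (x1, x2) at t = 2
```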

method one

We can solve the system of differential equations using the eigenvectors of matrix \(A\) as our new basis. Let's express the solution as:

\[ \vec{x}(t) = u_1(t) \vec{v}_1 + u_2(t) \vec{v}_2 \tag{2} \]

Here, \(u_1(t)\) and \(u_2(t)\) are coefficients that we need to find. Substituting this expression into both sides of equation (1), the left-hand side becomes:

\(\frac{d\vec{x}(t)}{dt} = \frac{du_1(t)}{dt} \vec{v}_1 + \frac{du_2(t)}{dt} \vec{v}_2\)

Since the eigenvectors \(\vec{v}_1\) and \(\vec{v}_2\) do not depend on \(t\), only the coefficients \(u_1(t)\) and \(u_2(t)\) need to be differentiated. For the right-hand side, using \(A\vec{v}_1 = \vec{v}_1\) and \(A\vec{v}_2 = -\vec{v}_2\), we have:

\(A\vec{x} = u_1(t) A\vec{v}_1 + u_2(t) A\vec{v}_2 = u_1(t) \vec{v}_1 - u_2(t) \vec{v}_2\)

Therefore, since equation (1) says \(\frac{d\vec{x}}{dt} = A\vec{x}\), subtracting gives:

\(0 = \frac{d\vec{x}(t)}{dt} - A\vec{x}(t) = \left(\frac{du_1(t)}{dt} - u_1(t)\right) \vec{v}_1 + \left(\frac{du_2(t)}{dt} + u_2(t)\right) \vec{v}_2\)

Since the eigenvectors \(\vec{v}_1\) and \(\vec{v}_2\) are linearly independent, the coefficients of \(\vec{v}_1\) and \(\vec{v}_2\) must each be zero:

\[ \frac{du_1(t)}{dt} - u_1(t) = 0, \qquad \frac{du_2(t)}{dt} + u_2(t) = 0 \tag{3} \]

These equations are uncoupled and can be solved separately. The solutions are:

\(u_1(t) = c_1e^t\) and \(u_2(t) = c_2e^{-t}\), where \(c_1\) and \(c_2\) are constants to be determined by the initial conditions.
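As an aside, a computer algebra system reproduces these general solutions; a quick sketch, assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t')
u1, u2 = sp.Function('u1'), sp.Function('u2')

# du1/dt - u1 = 0  and  du2/dt + u2 = 0 are uncoupled first-order ODEs.
print(sp.dsolve(sp.Eq(u1(t).diff(t) - u1(t), 0)))   # u1(t) = C1*exp(t)
print(sp.dsolve(sp.Eq(u2(t).diff(t) + u2(t), 0)))   # u2(t) = C1*exp(-t)
```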

Substituting these solutions back into equation (2), we find:

\(\vec{x}(t) = c_1e^t\begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2e^{-t}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} c_1e^t + c_2e^{-t} \\ c_1e^t - c_2e^{-t} \end{pmatrix}\)

To determine the constants \(c_1\) and \(c_2\), we use the initial condition:

\(\vec{x}(0) = \begin{pmatrix} 4 \\ 0 \end{pmatrix} = c_1\begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2\begin{pmatrix} 1 \\ -1 \end{pmatrix}\)

Solving this equation, we find \(c_1 = c_2 = 2\), so the desired solution is:

\(\vec{x}(t) = \begin{pmatrix} 2e^t + 2e^{-t} \\ 2e^t - 2e^{-t} \end{pmatrix}\)

This can be further expressed as:

\(x_1(t) = 2e^t + 2e^{-t}\) and \(x_2(t) = 2e^t - 2e^{-t}\)
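The equations for \(c_1\) and \(c_2\) are just a \(2 \times 2\) linear system, so they can also be solved numerically; a small sketch, assuming numpy:

```python
import numpy as np

# Columns are the eigenvectors v1 = (1, 1) and v2 = (1, -1);
# solve c1*v1 + c2*v2 = (4, 0) for (c1, c2).
S = np.array([[1.0, 1.0],
              [1.0, -1.0]])
c = np.linalg.solve(S, np.array([4.0, 0.0]))
print(c)   # [2. 2.], i.e. c1 = c2 = 2
```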

To summarize: we solved the system of differential equations by using the eigenvectors of matrix \(A\) as a basis. We wrote the solution as \(\vec{x}(t) = u_1(t) \vec{v}_1 + u_2(t) \vec{v}_2\), where \(u_1(t)\) and \(u_2(t)\) are coefficients to be determined. Substituting this expression into the differential equation produced separate equations for \(u_1(t)\) and \(u_2(t)\) that could be solved independently, giving \(u_1(t) = c_1e^t\) and \(u_2(t) = c_2e^{-t}\), with \(c_1\) and \(c_2\) fixed by the initial conditions. Substituting these back into the expression for \(\vec{x}(t)\) gave the desired solution \(\vec{x}(t) = \begin{pmatrix} 2e^t + 2e^{-t} \\ 2e^t - 2e^{-t} \end{pmatrix}\), that is, \(x_1(t) = 2e^t + 2e^{-t}\) and \(x_2(t) = 2e^t - 2e^{-t}\).
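It is easy to confirm that this formula satisfies both the differential equation and the initial condition; a symbolic check, assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[0, 1],
               [1, 0]])

# The Method 1 solution.
x = sp.Matrix([2*sp.exp(t) + 2*sp.exp(-t),
               2*sp.exp(t) - 2*sp.exp(-t)])

print(sp.simplify(x.diff(t) - A * x))   # Matrix([[0], [0]]): dx/dt = A x holds
print(x.subs(t, 0))                     # Matrix([[4], [0]]): x(0) = (4, 0)
```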

method two

Method 2 is essentially identical, but here we explicitly introduce the change of coordinates \(S\) from the standard basis to the new basis consisting of the eigenvectors of \(A\). We want \(S^{-1} A S = D\), where \(D\) is the diagonal matrix of the eigenvalues of \(A\). Specifically, in this case, \(D\) is:

\[ \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \]

which is equivalent to:

\[ \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \]

The columns of \(S\) are the eigenvectors of \(A\). Multiplying \(S\) on the right by \(D\) scales each column of \(S\) by the corresponding eigenvalue \(\lambda_j\), which is exactly what applying \(A\) to that column does. Therefore \(AS = SD\), and hence \(A = SDS^{-1}\).

By substituting \(A = SDS^{-1}\) into equation (1), we have:

\[ \frac{d\vec{x}}{dt} = SDS^{-1}\vec{x} \]

To simplify this expression, we introduce the change of variable \(\vec{u} = S^{-1}\vec{x}\). Since \(S\) does not depend on \(t\), we have \(\frac{d\vec{u}}{dt} = S^{-1}\frac{d\vec{x}}{dt}\), and the equation becomes:

\[ \frac{d\vec{u}}{dt} = D\vec{u} \]

These are exactly the uncoupled equations (3) we found previously.

By solving these uncoupled equations, we find:

\[ \vec{u}(t) = \begin{pmatrix} c_1e^t \\ c_2e^{-t} \end{pmatrix} \]

Transforming back to the original variables via \(\vec{x}(t) = S\vec{u}(t)\), we obtain the same solution as before:

\[ \vec{x}(t) = \begin{pmatrix} c_1e^t + c_2e^{-t} \\ c_1e^t - c_2e^{-t} \end{pmatrix} \]

We can determine the constants \(c_1\) and \(c_2\) using the initial condition, just as we did in Method 1.
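Method 2 translates directly into code: build \(S\) from the eigenvectors, check that \(S^{-1}AS\) is diagonal, and use \(\vec{x}(t) = S e^{Dt} S^{-1}\vec{x}(0)\). A sketch, assuming numpy and scipy are available:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Columns of S are the eigenvectors v1 = (1, 1) and v2 = (1, -1).
S = np.array([[1.0, 1.0],
              [1.0, -1.0]])
D = np.linalg.inv(S) @ A @ S
print(np.round(D, 10))                    # diag(1, -1)

# In the u = S^{-1} x coordinates the system decouples,
# so x(t) = S exp(D t) S^{-1} x(0).
x0 = np.array([4.0, 0.0])
t = 1.0
print(S @ expm(D * t) @ np.linalg.inv(S) @ x0)
print([2*np.exp(t) + 2*np.exp(-t),        # same numbers as the
       2*np.exp(t) - 2*np.exp(-t)])       # closed-form solution
```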

exercise

In this scenario, we have a sequence of vectors \(\vec{x}_0, \vec{x}_1, \vec{x}_2, \ldots\) that follows the rule \(\vec{x}_{k+1} = A \vec{x}_k\), where \(A\) is the given \(2 \times 2\) matrix, and we are given the initial vector \(\vec{x}_0 = \begin{pmatrix} 3 \\ 2 \end{pmatrix}\). Our goal is to compute \(\vec{x}_k\) using a basis consisting of the eigenvectors of \(A\), i.e., \(\vec{x}_k = a_k \vec{v}_1 + b_k \vec{v}_2\).

Since \(A\) is an orthogonal reflection, the solution can be determined without explicitly computing the eigenvectors. The answer is straightforward: if \(k\) is even, then \(\vec{x}_k = \vec{x}_0\), and if \(k\) is odd, then \(\vec{x}_k = \vec{x}_1\) (which is the reflected vector). This result holds because the matrix \(A\) only performs a reflection, and the behavior of the sequence is predictable based on the parity of \(k\).

The purpose of this problem is to demonstrate that this simple computation using the eigenvectors works for any \(n \times n\) matrix \(A\) that can be diagonalized, not just the specific \(2 \times 2\) matrix in this example.
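For concreteness, here is a small sketch (assuming numpy) that iterates \(\vec{x}_{k+1} = A\vec{x}_k\) directly and also through the eigenvector basis, where \(A^k = S D^k S^{-1}\) and \(D^k\) simply raises each eigenvalue to the \(k\)-th power:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
x0 = np.array([3.0, 2.0])

# Direct iteration x_{k+1} = A x_k: the values alternate with the parity of k.
x = x0.copy()
for k in range(1, 5):
    x = A @ x
    print(k, x)        # (2, 3) for odd k, back to (3, 2) for even k

# Same result via the eigenvector basis: A^k = S D^k S^{-1}.
S = np.array([[1.0, 1.0],
              [1.0, -1.0]])
k = 5
Dk = np.diag([1.0**k, (-1.0)**k])          # eigenvalues 1 and -1 raised to k
print(S @ Dk @ np.linalg.inv(S) @ x0)      # equals x_5, i.e. the reflected (2, 3)
```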