6. Analytical Solutions I#

Some special classes of ODEs admit pencil-and-paper solutions.

Let’s work through a sample of these that commonly appear in economics.

6.1. Linear, first-order ODEs#

We have already seen some of these in the examples above.

Linear ODEs can be further classified as ones with

  • constant, or,

  • time-dependent

coefficients.

6.1.1. Scalar case#

Constant coefficients. ODEs of this type have the form

(6.1)#\[ \dot{x} + ax = b, \]

where \(a\) and \(b\) are constants.

Shall we make a guess? Here’s how we may begin.

If a differential equation describes the instantaneous rate of change of a state at a given point in the system’s state space, then its solution must be some integral of the ODE.

In the case of Equation (6.1), the instantaneous rate of change \(\dot{x} = b - ax\) depends linearly on the state \(x(t)\), so we may guess that the solution involves some exponential form.

What is the derivative of the function \(x(t)e^{at}\) with respect to \(t\)?

Let’s check:

\[\begin{split} \frac{d\left(x(t)e^{at}\right)}{dt} &= \left[\frac{dx(t)}{dt} + ax(t)\right]e^{at} \\ &\equiv \left[\dot{x} + ax\right]e^{at}. \end{split}\]
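We can confirm this product-rule identity symbolically, for instance with SymPy:

```python
import sympy as sp

t, a = sp.symbols("t a")
x = sp.Function("x")

# Differentiate x(t) * exp(a*t) with respect to t ...
expr = sp.diff(x(t) * sp.exp(a * t), t)

# ... and compare with [x'(t) + a x(t)] e^{at}
target = (sp.diff(x(t), t) + a * x(t)) * sp.exp(a * t)

print(sp.simplify(expr - target))  # → 0
```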

So we could re-write Equation (6.1) as:

(6.2)#\[ \left[\dot{x} + ax\right]e^{at} = be^{at}. \]

All we did was multiply (6.1) on both sides by \(e^{at}\). (This function is also known as the integrating factor for this class of ODEs.)

Also, note that \(d\left(e^{at}\right)/dt = ae^{at}\).

Now, (6.2) is just

(6.3)#\[ \frac{d\left(x(t)e^{at}\right)}{dt} = \frac{b}{a}\frac{d\left(e^{at}\right)}{dt} . \]

Notice how we re-wrote the RHS?

If we apply an indefinite integral on both sides of (6.3), we have

(6.4)#\[ x(t)e^{at} = \frac{b}{a}e^{at} + C, \]

where \(C\) is an arbitrary constant of integration.

Re-write (6.4) for the general solution:

(6.5)#\[ x^{\ast}(t) = \frac{b}{a}+ Ce^{-at}. \]

If we are supplied with an initial value, say, \(x(t_{0}) = x_{0}\), then we may derive:

\[ x^{\ast}(t_{0}) = x(t_{0}) = x_{0} = \frac{b}{a}+ Ce^{-at_{0}}. \]

That is, we can back out the number

\[ C = \left(x_{0}-\frac{b}{a}\right)e^{at_{0}}. \]

This gives us a particular solution,

(6.6)#\[ x^{\ast}(t) = \frac{b}{a}+ \left(x_{0}-\frac{b}{a}\right)e^{-a(t-t_{0})}, \]

which depends on the given value of \(x_{0}\), and on when we start off the solution, \(t_{0}\).
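As a sanity check, we can verify symbolically that (6.6) satisfies both the ODE (6.1) and the initial condition:

```python
import sympy as sp

t, t0, a, b, x0 = sp.symbols("t t0 a b x0")

# Candidate particular solution from (6.6)
x_star = b / a + (x0 - b / a) * sp.exp(-a * (t - t0))

# It should satisfy x' + a x = b ...
residual = sp.simplify(sp.diff(x_star, t) + a * x_star - b)
print(residual)  # → 0

# ... and the initial condition x(t0) = x0
print(sp.simplify(x_star.subs(t, t0) - x0))  # → 0
```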

6.1.2. Homogeneous case#

If \(b = 0\), Equation (6.1) reduces to the homogeneous equation

(6.7)#\[ \dot{x} + ax = 0. \]

Setting \(b = 0\) in the general solution (6.5) gives \(x^{\ast}(t) = Ce^{-at}\).

6.1.3. Time-varying parameters#

What if \(a\) and \(b\) are functions of time themselves? That is, Equation (6.1) is now the more general case of

(6.8)#\[ \dot{x} + a(t)x = b(t), \]

where \(a(t)\) and \(b(t)\) are given by known functions of time.

Let’s tweak our intuitive guesswork earlier.

The general solution becomes

(6.9)#\[ x^{\ast}(t) = \frac{\int I(t) b(t) dt + C}{I(t)}, \]

with \(I(t) = e^{\int a(t)dt}\) being the integrating factor and \(C\) some other arbitrary constant of integration. (Note that \(C\) sits inside the fraction: the term \(C/I(t)\) is precisely the general solution of the associated homogeneous equation.)
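A quick symbolic check of this integrating-factor recipe, using the illustrative choices \(a(t) = b(t) = 2t\):

```python
import sympy as sp

t, C = sp.symbols("t C")
a = 2 * t  # illustrative a(t); any continuous function works
b = 2 * t  # illustrative b(t)

# Integrating factor I(t) = e^{∫ a(t) dt}
I = sp.exp(sp.integrate(a, t))

# General solution; the constant of integration enters inside the fraction
x_star = (sp.integrate(I * b, t) + C) / I

# Verify it solves x' + a(t) x = b(t)
residual = sp.simplify(sp.diff(x_star, t) + a * x_star - b)
print(residual)  # → 0
```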

6.2. Multiple-variable case#

The insights from scalar, first-order ODEs also apply to their higher-dimensional cousins.

6.2.1. Autonomous ODEs#

First, let’s restrict attention to linear first-order ODEs that are autonomous (i.e., ones with constant coefficients).

Compare Equation (6.1) with its \(k\)-dimensional variant now:

(6.10)#\[ \dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{b}. \]

Recall Example 3.2. This was a homogeneous special case of (6.10) where \(\mathbf{b} = \mathbf{0}\), so that

(6.11)#\[ \dot{\mathbf{x}} = \mathbf{A}\mathbf{x}. \]
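For constant \(\mathbf{A}\), the homogeneous system (6.11) is solved by the matrix exponential, \(\mathbf{x}(t) = e^{\mathbf{A}t}\mathbf{x}(0)\). A numerical cross-check (the matrix and initial condition below are hypothetical):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Hypothetical 2x2 system x' = A x with initial condition x0
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
T = 2.0

# Closed-form solution of the homogeneous system: x(T) = e^{AT} x0
x_closed = expm(A * T) @ x0

# Cross-check with a numerical integrator
sol = solve_ivp(lambda t, x: A @ x, (0.0, T), x0, rtol=1e-10, atol=1e-12)
x_numeric = sol.y[:, -1]

print(np.allclose(x_closed, x_numeric, atol=1e-6))  # → True
```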

Let’s first consider the linear, homogeneous case in Equation (6.11) that is associated with its non-homogeneous sibling in Equation (6.10).

Proposition 6.1 (Principle of superposition)

If \(\mathbf{x}_{1}^{\ast}(t)\) and \(\mathbf{x}_{2}^{\ast}(t)\) are solutions to Equation (6.11) then \(c_{1}\mathbf{x}_{1}^{\ast}(t) + c_{2}\mathbf{x}_{2}^{\ast}(t)\), where \(c_{1},c_{2} \in \mathbb{R}\), is also a solution to Equation (6.11).

This result is a consequence of linearity, and the proof is easy!

In fact, the following insight, extending from Proposition 6.1, tells us that we may focus on solving just the homogeneous system:

Corollary 6.1

If \(\mathbf{x}_{1}^{\ast}(t)\) is a solution to Equation (6.10) and \(\mathbf{x}_{2}^{\ast}(t)\) is a solution to Equation (6.11), then \(\mathbf{x}_{1}^{\ast}(t) + \mathbf{x}_{2}^{\ast}(t)\) is also a solution to Equation (6.10).

Once we have the solution of the associated homogeneous system in Equation (6.11), we can just “add back” the particular solution to the non-homogeneous system in Equation (6.10) to define its general solution.
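To make this concrete: if \(\mathbf{A}\) is invertible, one particular solution of (6.10) is the steady state \(\bar{\mathbf{x}} = -\mathbf{A}^{-1}\mathbf{b}\), and adding the homogeneous solution \(e^{\mathbf{A}t}(\mathbf{x}_{0} - \bar{\mathbf{x}})\) recovers the general solution. A numerical sketch, with hypothetical \(\mathbf{A}\), \(\mathbf{b}\), and \(\mathbf{x}_{0}\):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Hypothetical non-homogeneous system x' = A x + b, with A invertible
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([1.0, 1.0])
x0 = np.array([0.5, -0.5])
T = 1.5

# Particular solution: the steady state solving A x + b = 0
x_bar = np.linalg.solve(A, -b)

# General solution = homogeneous part + particular part
x_closed = expm(A * T) @ (x0 - x_bar) + x_bar

# Compare against direct numerical integration of (6.10)
sol = solve_ivp(lambda t, x: A @ x + b, (0.0, T), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(x_closed, sol.y[:, -1], atol=1e-6))  # → True
```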

Why? The answer lies in the next question.

6.2.2. Multiple-variable and non-autonomous case#

More generally, we have the same principle applying to the setting where the linear ODE has known time-dependent coefficients.

Now, compare Equation (6.10) with its non-autonomous auntie:

(6.12)#\[ \dot{\mathbf{x}} = \mathbf{A}(t)\mathbf{x} + \mathbf{b}(t), \]

where \(t \mapsto \mathbf{A}(t)\) and \(t \mapsto \mathbf{b}(t)\) are known continuous functions of time.

Equation (6.12) has an associated homogeneous sister:

(6.13)#\[ \dot{\mathbf{x}} = \mathbf{A}(t)\mathbf{x}. \]

Theorem 6.1 (Principle of superposition, reloaded)

  1. If \(\mathbf{x}_{1}^{\ast}(t)\) and \(\mathbf{x}_{2}^{\ast}(t)\) solve (6.13), then \(c_{1}\mathbf{x}_{1}^{\ast}(t) + c_{2}\mathbf{x}_{2}^{\ast}(t)\) also solves (6.13), for any \(c_{1}, c_{2} \in \mathbb{R}\).

  2. If \(\mathbf{x}_{1}^{\ast}(t)\) and \(\mathbf{x}_{2}^{\ast}(t)\) solve (6.12), then their difference solves (6.13).

  3. Any solution to (6.12) can be constructed as a sum of a fixed or particular solution to (6.12) and some solution to (6.13).
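Part 2 of Theorem 6.1 can be illustrated numerically: the difference of two solutions of (6.12) should track the solution of (6.13) started at the difference of their initial conditions. The time-varying coefficients below are hypothetical choices for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical time-varying coefficients
def A(t):
    return np.array([[np.sin(t), 1.0], [0.0, -np.cos(t)]])

def b(t):
    return np.array([np.exp(-t), 1.0])

def f_nonhom(t, x):  # RHS of (6.12)
    return A(t) @ x + b(t)

def f_hom(t, x):     # RHS of (6.13)
    return A(t) @ x

T = 1.0
opts = dict(rtol=1e-10, atol=1e-12)

# Two solutions of the non-homogeneous system from different initial conditions
x1 = solve_ivp(f_nonhom, (0.0, T), [1.0, 0.0], **opts).y[:, -1]
x2 = solve_ivp(f_nonhom, (0.0, T), [0.0, 1.0], **opts).y[:, -1]

# Their difference should match the homogeneous solution started at the
# difference of the two initial conditions
d = solve_ivp(f_hom, (0.0, T), [1.0, -1.0], **opts).y[:, -1]

print(np.allclose(x1 - x2, d, atol=1e-6))  # → True
```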