Homework #

ODE solvers (or time-stepping methods - numerical solution of IVPs) 2 #

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# the following allows us to plot triangles indicating convergence order
from mpltools import annotation

from scipy.integrate import odeint
from scipy.integrate import solve_ivp

# font settings for plots
plt.rcParams['font.size'] = 12
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.sans-serif'] = ['Arial', 'DejaVu Sans']

Homework - SciPy methods comparison#

Consider the ODE

\[ y' = y - t^2 +1, \;\;\;\;\; y(0) = \frac{1}{2}, \]

with exact solution

\[ y(t) = (t + 1)^2 - \frac{e^{t}}{2}. \]

Read the docs for some of the SciPy ODE solvers, e.g. scipy.integrate.odeint and scipy.integrate.solve_ivp.

Use a range of methods to integrate this problem from \(t=0\) to \(t=4\), varying the rtol and atol error tolerance parameters (for simplicity, set them equal to the same value in each experiment), and plot the error for each method at the end of the simulation period against the tolerance value.
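For example, a minimal sketch of one way to set up this experiment (the tolerance range and list of methods here are just suggestions; the imports from the cell above are assumed):

f = lambda t, y: y - t**2 + 1.
y_exact = lambda t: (t + 1.)**2 - np.exp(t)/2.

tols = 10.0**np.arange(-10, -2)  # tolerance values to sweep (a suggestion)
for method in ['RK45', 'RK23', 'BDF', 'Radau', 'LSODA']:
    errors = []
    for tol in tols:
        sol = solve_ivp(f, (0., 4.), [0.5], method=method, rtol=tol, atol=tol)
        errors.append(abs(sol.y[0, -1] - y_exact(4.)))
    plt.loglog(tols, errors, 'o-', label=method)
plt.xlabel('tolerance (rtol = atol)')
plt.ylabel('error at $t = 4$')
plt.legend()
plt.show()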

Homework - The van der Pol problem#

Consider now the van der Pol oscillator problem, which can be used to model electrical circuits.

For background see:

https://en.wikipedia.org/wiki/Van_der_Pol_oscillator

http://mathworld.wolfram.com/vanderPolEquation.html

https://archimede.dm.uniba.it/~testset/problems/vdpol.php

This second-order problem can be written as the first-order system

\[ y_1' = y_2, \;\;\;\; y_2' = \mu (1 - y_1^2)y_2 - y_1,\]

where \(\mu\) is a parameter we can vary to change the characteristics of the problem. Start with a value of \(\mu = 100\); you can then try making this smaller or larger (but be careful about making it much larger: the problem becomes stiff, and our solvers that are not designed to handle stiff problems start taking a very long time to complete! Refer to the final questions, which consider errors vs CPU times for non-stiff and then stiff problems).

Consider a case with initial condition \(y_1 = 2\) and \(y_2=0\), and integrate up to time \(t=2\mu\).

Repeat the error vs tolerance analysis from the previous question.

To compute the error, use a Radau solver with a very tight tolerance as a reference solution (this problem has no simple closed-form exact solution).
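A sketch of the set-up, with the reference solution computed as suggested (the tight tolerance value is an assumption):

mu = 100.

def vdp(t, y):
    # van der Pol as a first-order system: y1' = y2, y2' = mu*(1 - y1^2)*y2 - y1
    return [y[1], mu*(1. - y[0]**2)*y[1] - y[0]]

# reference ('exact') solution: Radau with a very tight tolerance
ref = solve_ivp(vdp, (0., 2.*mu), [2., 0.], method='Radau',
                rtol=1e-12, atol=1e-12)
y_ref_end = ref.y[:, -1]  # use in place of the exact solution at t = 2*mu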

Homework - L-stability [\(\star\)]#

From the lecture (the discussion on L-stability), consider the problem

\[y'(t) = \lambda(y - \cos(t)) - \sin(t), \;\;\;\; y(0) = y_0.\]

The exact solution to this problem is

\[y(t) = \text{e}^{\lambda t}(y_0 - 1) + \cos(t).\]

Write some code that time steps this problem with the initial condition \(y_0=1.5\) using both the backward Euler and the trapezoidal schemes.

Note that since this is a scalar linear problem, we can simply rearrange our implicit schemes to arrive at updates of the form y[n+1] = ..., where the RHS of the expression contains only quantities we already know, meaning you do not have to call a nonlinear solver.
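For example, a sketch of the two rearranged updates (the example \(\lambda\), \(\Delta t\) values in the usage line are placeholders to experiment with):

def backward_euler(lam, dt, t_end, y0):
    t_all = np.arange(0., t_end + dt, dt)
    y_all = np.empty_like(t_all)
    y_all[0] = y0
    for n in range(len(t_all) - 1):
        tnp1 = t_all[n + 1]
        # y[n+1] = y[n] + dt*(lam*(y[n+1] - cos(t[n+1])) - sin(t[n+1])), rearranged:
        y_all[n + 1] = (y_all[n] - dt*(lam*np.cos(tnp1) + np.sin(tnp1))) / (1. - lam*dt)
    return t_all, y_all

def trapezoidal(lam, dt, t_end, y0):
    t_all = np.arange(0., t_end + dt, dt)
    y_all = np.empty_like(t_all)
    y_all[0] = y0
    for n in range(len(t_all) - 1):
        tn, tnp1 = t_all[n], t_all[n + 1]
        fn = lam*(y_all[n] - np.cos(tn)) - np.sin(tn)  # explicit part of the average
        # implicit part moved to the LHS and the update rearranged:
        y_all[n + 1] = (y_all[n] + 0.5*dt*(fn - lam*np.cos(tnp1) - np.sin(tnp1))) / (1. - 0.5*lam*dt)
    return t_all, y_all

t, y = backward_euler(-1e6, 0.1, 3., 1.5)  # example values; experiment as suggested below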

Try to choose values of \(\lambda\) that give the behaviour for the two schemes as presented in the image in the L-stability section of the lecture.

Verify the claims we made in the lecture: “Note that reducing the time step does not help. But this problem with the trapezoidal scheme does not manifest if we start with the initial condition \(y_0=1\).”

Homework - Implementing Runge-Kutta 4 stage method (RK4)#

Write a general Python function that implements the classical RK4 method for a given RHS function, and apply it to the problem we used previously to compare the errors between forward Euler and improved Euler (recalling that we can interpret IE as a predictor-corrector LMS pair, or as a Runge-Kutta method RK2(\(\alpha=1\))):

\[y'(t)=y,\;\;\; y(0)=1,\]

and where we evaluate the error at the time \(t = 2\pi\).
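A sketch of what such a function might look like (the interface, with a fixed number of steps, is just one possible design):

def rk4(f, y0, t0, t_end, n_steps):
    # classical 4-stage Runge-Kutta with a fixed step size
    dt = (t_end - t0) / n_steps
    t_all = np.empty(n_steps + 1)
    y_all = np.empty((n_steps + 1, np.atleast_1d(y0).size))
    t_all[0], y_all[0] = t0, y0
    for n in range(n_steps):
        t, y = t_all[n], y_all[n]
        k1 = np.atleast_1d(f(t, y))
        k2 = np.atleast_1d(f(t + dt/2., y + dt*k1/2.))
        k3 = np.atleast_1d(f(t + dt/2., y + dt*k2/2.))
        k4 = np.atleast_1d(f(t + dt, y + dt*k3))
        t_all[n + 1] = t + dt
        y_all[n + 1] = y + (dt/6.)*(k1 + 2.*k2 + 2.*k3 + k4)
    return t_all, y_all

# apply to y' = y, y(0) = 1, and evaluate the error at t = 2*pi
t, y = rk4(lambda t, y: y, 1., 0., 2.*np.pi, 100)
print('RK4 error at t = 2*pi:', abs(y[-1, 0] - np.exp(2.*np.pi)))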

Homework - Implementing Adams-Bashforth 4-step method (AB4)#

In the lecture we derived AB2 by choosing our free parameters (\(\beta_0\,\) and \(\,\beta_1\)) such that we could integrate exactly the monomials \(f(t) = 1\) and \(f(t)=t\). This led to two simultaneous equations we could trivially solve.

Try extending this to the case of AB4 - now we have four free parameters (\(\beta_0, \ldots, \beta_3\)) and we need to integrate polynomials up to degree 3 exactly.

Derive the corresponding system of 4 equations for 4 unknowns.

[Hint: your life will be easier when computing the integrals if you consider a polynomial basis of the form \(P_{N+1}(t) = t (t + \Delta t) \dots (t + N\Delta t)\)

(note that this is consistent with what we did for AB2, but now we aren’t using monomials any more for \(N>1\), i.e. additionally consider the cases \(f(t)=t(t+\Delta t)\) and \(f(t)=t(t+\Delta t)(t+2\Delta t)\)).

Can you see where this hint came from - why is this a convenient choice, if again as per the AB2 derivation in the lecture we assume \(t_{n+1} = \Delta t,\; t_n = 0\), \(t_{n-1} = -\Delta t\), \(t_{n-2} = -2\Delta t\), …?

Write a script which forms and solves the resulting linear system for the AB coefficients, for \(k=4\).

Check your coefficients agree with those given in the lecture, e.g. with something like

AB4_beta = np.linalg.solve(LHS_A, RHS_b)  
print('Our calculated AB4 coefficients: ',AB4_beta)
print('Our coefficients agree with what we wrote in the lecture: ', np.allclose(AB4_beta, np.array([ 55./24., -59./24., 37./24., -9./24.])))

]

and then implement and verify the method, following the implementation and convergence approach taken in the homework exercise from the previous lecture (Homework: Improved Euler and accuracy comparison with forward Euler).

As the scheme isn’t self-starting use the appropriate number of time steps from an appropriate order RK method to start things off.

Homework - Adaptive time stepping (implementing RK45 - the Dormand-Prince embedded RK pair) [\(\star\star\)]#

Try implementing the Dormand and Prince embedded RK pair (https://en.wikipedia.org/wiki/Dormand–Prince_method) with adaptive time stepping.

[Given 4th and 5th order approximations to the solution at the new time level (y4 and y5), the error control component of the method can be written something like

        # error control
        e = np.linalg.norm(y5 - y4)/np.sqrt(y.size)  # local error estimate
        # standard controller: exponent 1/(order+1), with order 4 for the estimate
        dt_next = 0.9 * (tol/e)**(1./5.) * dt
        if e < tol:
            # accept the step; otherwise retry from the same t with the smaller dt
            y = y5
            y_all.append(y)
            t = t + dt
            t_all.append(t)
        dt = dt_next

where \(e\) is an estimate of the local error obtained as the norm of the difference between the two numerical solutions, and 0.9 is a safety factor. Note that additional factors can easily be incorporated to limit how rapidly the time step size is allowed to increase or decrease. We haven't done this here, which is why our dt's don't quite match those obtained with SciPy.

For more information on this if interested see Chapter II.4 (Practical Error Estimation and Step Size Selection) of Solving Ordinary Differential Equations I: Nonstiff Problems by Hairer, Nørsett and Wanner.]

To make your life easier the entries of the Butcher tableau for the scheme are given in the following cell.

Test the solver on one of the problems we have seen before.

You should also test that your implementation works for both scalar and vector problems.

Homework - The Lorenz system (with RK4 and RK45)#

Repeat the Lorenz system exercise from the previous lecture’s homework using our new RK4 and RK45 solvers.
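If helpful, a sketch of the Lorenz RHS in the f(t, y) form our solvers expect (the standard chaotic parameter values are assumed, as in the previous lecture):

def lorenz(t, state, sigma=10., rho=28., beta=8./3.):
    # the classic Lorenz system
    x, y, z = state
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])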

Homework - Perturbing initial conditions (in the Lorenz problem) [\(\star\)]#

In order to think about how long we can reasonably expect to follow a complex trajectory such as the Lorenz system's before we start to see divergence, see what happens if you start two simulations with slightly different initial conditions, i.e. perturb your initial condition using a small random vector and track how the trajectories diverge.

I suggest you try this with RK4.
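A sketch of one way to do this, reusing the rk4 function and lorenz RHS sketched above (the perturbation size, base initial condition and integration window are assumptions to experiment with):

rng = np.random.default_rng(42)
ic = np.array([1., 1., 1.])
ic_pert = ic + 1e-8*rng.standard_normal(3)  # small random perturbation

t, y1 = rk4(lorenz, ic, 0., 40., 4000)
_, y2 = rk4(lorenz, ic_pert, 0., 40., 4000)

# on a semilog plot the separation should initially grow roughly exponentially
plt.semilogy(t, np.linalg.norm(y1 - y2, axis=1))
plt.xlabel('$t$')
plt.ylabel('trajectory separation')
plt.show()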

Homework - The Kepler problem with adaptive time stepping [\(\star\)]#

Repeat the Kepler problem exercise from Lecture 5's homework using our own RK45 method (or alternatively SciPy's solve_ivp with the option RK45).

Plot how the time step size varies in time - does it make sense when you have larger vs smaller time steps?

See if you get agreement in the time step sizes between our RK45 implementation and SciPy's.
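A sketch of extracting and plotting SciPy's step sizes (the Kepler RHS in normalised units and the eccentric initial condition here are assumptions; substitute your Lecture 5 set-up):

def kepler_rhs(t, z):
    # z = [qx, qy, px, py]; two-body Kepler problem in normalised units (an assumption)
    q, p = z[:2], z[2:]
    return np.concatenate([p, -q/np.linalg.norm(q)**3])

e = 0.6  # example eccentricity (an assumption)
ic = [1. - e, 0., 0., np.sqrt((1. + e)/(1. - e))]

sol = solve_ivp(kepler_rhs, (0., 20.), ic, method='RK45', rtol=1e-6, atol=1e-6)
dts = np.diff(sol.t)  # the step sizes the adaptive solver actually took
plt.plot(sol.t[:-1], dts)
plt.xlabel('$t$')
plt.ylabel(r'$\Delta t$')
plt.show()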

Homework - Structure preservation (geometric integrators applied to the Kepler problem) [\(\star\star\)]#

Implement the symplectic Euler method and compare it to the three other methods we have applied to the Kepler problem.

For details of the method see https://en.wikipedia.org/wiki/Semi-implicit_Euler_method.

As the name implies, this is an example of a symplectic time stepping method. We don't have time to go into what this means; we simply state that methods such as this (a class of geometric integrator) are well suited to problems where it is important to try and preserve conservation laws (as well as related "geometric" structures).
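A sketch of the method applied to the Kepler problem (normalised units; the eccentric initial condition is an example, not necessarily the one used in Lecture 5). Note the key difference from forward Euler: the position update uses the already-updated momentum.

def kepler_accel(q):
    # acceleration -q/|q|^3 for the Kepler problem in normalised units
    return -q / np.linalg.norm(q)**3

def symplectic_euler(q0, p0, dt, n_steps):
    q, p = np.asarray(q0, dtype=float), np.asarray(p0, dtype=float)
    q_all, p_all = [q.copy()], [p.copy()]
    for _ in range(n_steps):
        p = p + dt*kepler_accel(q)  # momentum update, explicit in q
        q = q + dt*p                # position update uses the *new* p
        q_all.append(q.copy())
        p_all.append(p.copy())
    return np.array(q_all), np.array(p_all)

e = 0.6  # example eccentricity (an assumption)
q, p = symplectic_euler([1. - e, 0.], [0., np.sqrt((1. + e)/(1. - e))], 1e-3, 100000)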

Homework - ODE solver timings (non-stiff problems) [\(\star\star\)]#

In an earlier homework question we compared a variety of SciPy’s ODE solver methods on the problem

\[ y' = y - t^2 +1, \;\;\;\;\; y(0) = \frac{1}{2},\]

which has the exact solution

\[ y(t) = (t + 1)^2 - \frac{e^{t}}{2}. \]

We integrated from \(t=0\) and evaluated the error at \(t=4\).

We varied the rtol and atol error tolerance parameters to generate plots of the ‘error’ vs the ‘user-specified error tolerance’.

This was interesting, but an analysis of errors vs run times would be of far more value.

Extend the homework exercise by recording solution timings (e.g. look back at Lecture 3's use of %timeit to provide estimates of run times).
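Besides %timeit, a minimal sketch using time.perf_counter (assuming the f and y_exact definitions from the first homework's sketch):

import time

t0 = time.perf_counter()
sol = solve_ivp(f, (0., 4.), [0.5], method='RK45', rtol=1e-8, atol=1e-8)
run_time = time.perf_counter() - t0
error = abs(sol.y[0, -1] - y_exact(4.))
print(f'RK45, tol = 1e-8: error = {error:.2e}, run time = {run_time:.4f} s')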

What do you observe from these results, and the differences observed between the ‘error’ vs ‘tolerance’ and ‘error’ vs ‘run time’ plots?

Finally, see how our own implementations of improved Euler (IE), AM3, RK4 and RK45 (i.e. our Dormand & Prince adaptive time stepping solver) compare. What conclusions can you draw from this comparison?

Homework - ODE solver timings (stiff problems) [\(\star\star\)]#

Consider now a vector problem demonstrating stiff behaviour.

Recall from the lecture the problem of the form

\[ y'' + (\mu + 1) y' + \mu y = 0 , \]

or equivalently

\[\begin{split} \boldsymbol{z}'=A\boldsymbol{z}\;\;\;\text{where} \;\;\;\; \boldsymbol{z} = \begin{bmatrix} y\\ y' \end{bmatrix} \;\;\;\text{and} \;\;\;\; A = \begin{bmatrix} 0 & 1\\ -\mu & -(\mu + 1) \end{bmatrix},\end{split}\]

with parameter \(\mu\).

The general solution to this problem is given by

\[ y(t) = C_1 \text{e}^{\lambda_1 t} + C_2 \text{e}^{\lambda_2 t}, \]

where \(\lambda_1\) and \(\lambda_2\) are the eigenvalues of the matrix \(A\), and with the constants \(C_1\) and \(C_2\) defined by the initial conditions specified for \(y\) and \(y'\).

What are the eigenvalues for this problem in terms of \(\mu\)?

Choose values of \(\mu\) that lead to problems that can be characterised as stiff and as non-stiff.

Perform the same analysis of error vs run times as in the previous question, but focus on SciPy’s solve_ivp with the RK23, RK45, BDF and Radau options.

I suggest you use an initial condition of \(y(0)=1\) and \(y'(0) = \mu - 2\) and integrate up to a time of \(t=1\).
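A sketch of the set-up (the value of \(\mu\) and the tolerance are suggestions; counting accepted steps gives a quick feel for stiffness before the full timing analysis):

mu = 1000.  # try e.g. mu = 10 (non-stiff) vs mu = 1000 (stiff)
A = np.array([[0., 1.], [-mu, -(mu + 1.)]])

def rhs(t, z):
    return A @ z

z0 = [1., mu - 2.]  # the suggested initial condition
for method in ['RK23', 'RK45', 'BDF', 'Radau']:
    sol = solve_ivp(rhs, (0., 1.), z0, method=method, rtol=1e-8, atol=1e-8)
    print(method, ': number of steps taken =', sol.t.size - 1)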

Comment on what you observe, and on whether (and how) your advice on the best-performing approach changes as you change the stiffness of the problem.