MA40177: Scientific Computing Assignment 1
1 Turing instability in reaction-diffusion systems (a.k.a. stripes & spots)
This assignment is about practical aspects of solving nonlinear differential equations
$$\frac{dy}{dt}(t) = Ay(t) + N(y(t)), \quad \text{on } t \in [0, T], \tag{1}$$
$$y(0) = y_0,$$
using operator splitting time schemes, where $A \in \mathbb{R}^{2n\times 2n}$ is a given stiff square matrix, $N(y): \mathbb{R}^{2n} \to \mathbb{R}^{2n}$ is a non-stiff but nonlinear function, $y_0 \in \mathbb{R}^{2n}$ is a given initial state at time $t = 0$, and $n \in \mathbb{N}$ is a discretisation size in space to be defined in Section 1.1 below. Equations of the form (1) arise, for example, in the numerical solution of reaction-diffusion equations, that is, systems of Partial Differential Equations (PDEs) that combine linear diffusion of substances with nonlinear chemical reactions between them. Such systems may demonstrate a surprising variety of complex patterns. These can be observed both in nature (spots and stripes on butterflies, fish or zebras) and in the lab (diffusive auto-catalysis in open gel reactors). The precise pattern may change significantly even after a small variation of the coefficients in (1). This kind of instability is called the Turing instability, after Alan Turing's pioneering study of morphogenesis [1].
A particular model we consider here was formulated originally by Gray and Scott, and analysed extensively by Pearson [2]. It consists of two reactions
U + 2V → 3V
V → 0
between two substances U and V. The substance U is fed to the reactor at a constant rate $F > 0$ and morphed into the substance V by the first reaction, catalysed by V itself (hence the name auto-catalysis). In contrast, the substance V is removed from the reactor at a constant "kill" rate $k > 0$. Moreover, both U and V can diffuse through the reactor at rates $D_u > 0$ and $D_v > 0$, respectively. In the first assignment, we start with a simple one-dimensional circular reactor, such that $u(x,t) \ge 0$, the concentration of U, and $v(x,t) \ge 0$, the concentration of V, become periodic functions of the spatial position $x \in [0,1]$ in the reactor. Putting these assumptions together, we arrive at the Gray-Scott system of PDEs defining the concentrations:
$$\frac{\partial u}{\partial t} = D_u\frac{\partial^2 u}{\partial x^2} - uv^2 + F(1-u), \tag{2}$$
$$\frac{\partial v}{\partial t} = D_v\frac{\partial^2 v}{\partial x^2} + uv^2 - (F+k)v, \tag{3}$$
on $x\in[0,1]$, $t\in[0,T]$, with
$$u(1,t) = u(0,t), \qquad v(1,t) = v(0,t), \qquad \text{(Boundary conditions)} \tag{4}$$
$$u(x,0) = u_0(x), \qquad v(x,0) = v_0(x). \qquad \text{(Initial conditions)} \tag{5}$$
In our assignment, the diffusion coefficients are fixed to $D_u = 2\cdot 10^{-5}$ and $D_v = 10^{-5}$.
1.1 Discretisation in space

To get from (2)–(5) to (1) we need to discretise the spatial variable $x$ and, correspondingly, the functions $u$ and $v$. We divide the interval $[0,1]$ into $n \in \mathbb{N}$ subintervals of length $h = 1/n$. Then we can define the discrete fields
$$ u_i(t) = u(ih, t), \qquad v_i(t) = v(ih, t), \tag{6}$$
where $i = 1, \ldots, n$. The derivatives of $u$, $v$ at the midpoints $x = (i-1/2)h$ are approximated by the central finite differences
$$ \frac{\partial u}{\partial x}\Big|_{x=(i-1/2)h} \approx \frac{u_i - u_{i-1}}{h}, \qquad \frac{\partial v}{\partial x}\Big|_{x=(i-1/2)h} \approx \frac{v_i - v_{i-1}}{h}, $$
with the periodic boundary conditions (4) implying that
$$ u_{n+1} = u_1, \quad u_0 = u_n, \quad v_{n+1} = v_1, \quad v_0 = v_n. \tag{7}$$
Applying the central difference scheme again to get the second derivatives, we obtain an approximation at the whole points,
$$ \frac{\partial^2 u}{\partial x^2}\Big|_{x=ih} \approx \frac{u_{i+1} - 2u_i + u_{i-1}}{h^2}, \qquad \frac{\partial^2 v}{\partial x^2}\Big|_{x=ih} \approx \frac{v_{i+1} - 2v_i + v_{i-1}}{h^2}, \tag{8}$$
subject to the discrete boundary conditions (7). Collecting the discrete values $u_i$, $v_i$ into vectors
$$ u(t) = [u_1\ u_2\ \ldots\ u_n]^T, \qquad v(t) = [v_1\ v_2\ \ldots\ v_n]^T, $$
we can write the second central differences (8) as matrix-vector products $\Delta u$, $\Delta v$ with the matrix
$$ \Delta = \frac{1}{h^2}\begin{pmatrix} -2 & 1 & & & 1\\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1\\ 1 & & & 1 & -2 \end{pmatrix} \in \mathbb{R}^{n\times n}. $$
In turn, the nonlinear reaction terms in (2),(3) can be written via the vector-functions
$$ N_u(u,v) = -u\circ v\circ v + [F\ \ldots\ F]^T - Fu, \qquad N_v(u,v) = u\circ v\circ v - (F+k)v, $$
where "$\circ$" denotes elementwise multiplication of vectors.
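To see the spatial discretisation in action, here is a minimal NumPy sketch (for illustration only; the assignment itself is written in Fortran) that builds $\Delta$ and the reaction terms, and checks the second-order accuracy of $\Delta$ on a smooth periodic function:

```python
import numpy as np

def laplacian(n):
    """Periodic second-difference matrix Delta of (8), including the 1/h^2 factor."""
    h = 1.0 / n
    D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    D[0, -1] = D[-1, 0] = 1.0      # wrap-around entries from the periodic BCs (7)
    return D / h**2

def N_u(u, v, F):
    """Reaction term N_u(u,v) = -u.v.v + [F...F]^T - F*u (elementwise products)."""
    return -u * v * v + F - F * u

def N_v(u, v, F, k):
    """Reaction term N_v(u,v) = u.v.v - (F+k)*v."""
    return u * v * v - (F + k) * v

# sanity check: Delta applied to sin(2*pi*x) should approximate -(2*pi)^2 sin(2*pi*x)
n = 200
x = np.arange(1, n + 1) / n        # grid points x_i = i*h, i = 1..n
err = np.max(np.abs(laplacian(n) @ np.sin(2 * np.pi * x)
                    + (2 * np.pi) ** 2 * np.sin(2 * np.pi * x)))
print(err)                          # O(h^2): shrinks by a factor ~4 when n doubles
```

Doubling $n$ and watching the error drop by a factor of four is a quick way to convince yourself that the stencil and the periodic wrap-around entries are correct.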
1.2 Discretisation in time and Strang splitting
Finally, introducing block vectors
y(t) = ┐ , y0 = ┐ , N(y) = ┐ ,
and a block matrix
A = ┌Du∆ Dv∆┐ ,
(9)
(10)
we can write the spatially-discrete equations in the differential equation form (1). However, $Ay$ and $N(y)$ call for different time integration schemes:
● the first term (linear and stiff) can be tackled most efficiently with an implicit scheme (such as Crank-Nicolson), which is stable for a wide range of time steps, whereas
● the second term warrants an explicit scheme (such as Runge-Kutta) to avoid solving nonlinear equations; since the reaction coefficients are bounded, stability is not an issue here.
Those can be merged together by using so-called splitting schemes. The motivation stems from the solution of a linear equation $dy/dt = (A+B)y$ (where $B$ is another matrix) in the form of the matrix exponential $y = \exp(t(A+B))y_0$. In contrast to real numbers, for matrices $\exp(tA+tB) \neq \exp(tA)\exp(tB)$ in general, but for small $t$ the product of exponentials approximates the exponential of the sum with an error proportional to $t^2$. A more accurate approximation is provided by the Strang splitting [3], $\exp(tA/2)\exp(tB)\exp(tA/2)$, whose error is proportional to $t^3$. In turn, the action of each of those matrix exponentials on a vector can be approximated by a more traditional time scheme (Crank-Nicolson or Runge-Kutta). Lifting this idea to nonlinear terms gives us the Strang splitting for (1), outlined in Algorithm 1.
Algorithm 1 Strang splitting method
1: Input: initial vector $y_0 = y(0)$, time step size $\tau > 0$, final time $T > 0$.
2: for $\ell = 0, \ldots, \lceil T/\tau\rceil - 1$ do
3:   Let $t_\ell = \ell\tau$.
4:   Solve $dz/dt = Az$ for $t \in [0, \tau/2]$ starting from $z_0 = z(0) = y_\ell$.   ▷ cf. (11)
5:   Solve $dw/dt = N(w)$ for $t \in [0, \tau]$ starting from $w_0 = w(0) = z(\tau/2)$.   ▷ cf. (12)
6:   Solve $dq/dt = Aq$ for $t \in [0, \tau/2]$ starting from $q_0 = q(0) = w(\tau)$.   ▷ cf. (11)
7:   Let $y_{\ell+1} = q(\tau/2) \approx y(t_\ell + \tau)$.
8: end for
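The accuracy of the splitting can be checked on a small linear example, where each substep is an (almost) exact matrix exponential. The sketch below is Python for illustration; `expm` is a simple Taylor-series exponential (adequate for small matrices), and the test matrices are arbitrary non-commuting examples. It verifies that halving $\tau$ reduces the Strang splitting error by a factor of about four:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential by truncated Taylor series (fine for small ||M||)."""
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for j in range(1, terms):
        T = T @ M / j
        E = E + T
    return E

def strang_error(A, B, y0, T, m):
    """Error of m Strang steps exp(tau*A/2) exp(tau*B) exp(tau*A/2) vs exp(T(A+B))."""
    tau = T / m
    S = expm(0.5 * tau * A) @ expm(tau * B) @ expm(0.5 * tau * A)
    y = y0.copy()
    for _ in range(m):
        y = S @ y
    return np.linalg.norm(y - expm(T * (A + B)) @ y0)

A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # two non-commuting test matrices
B = np.array([[-1.0, 0.0], [0.0, -2.0]])
y0 = np.array([1.0, 1.0])
e1 = strang_error(A, B, y0, 1.0, 20)
e2 = strang_error(A, B, y0, 1.0, 40)
print(e1 / e2)    # close to 4: second order in tau
```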
To approximate the solution in Lines 4 and 6 we use the Crank-Nicolson scheme, which can be implemented by solving the linear systems
$$ \Bigl(I - \frac{\tau}{4}A\Bigr) z_1 = \Bigl(I + \frac{\tau}{4}A\Bigr) z_0, \qquad \Bigl(I - \frac{\tau}{4}A\Bigr) q_1 = \Bigl(I + \frac{\tau}{4}A\Bigr) q_0 \tag{11}$$
with respect to the vectors $z_1 \approx z(\tau/2)$ and $q_1 \approx q(\tau/2)$, where $I$ is the identity matrix. For Line 5 we use the order-2 Runge-Kutta method, also known as the Heun method:
$$ w_{1/2} = w_0 + \tau N(w_0), \qquad w_1 = w_0 + \frac{\tau}{2}\bigl(N(w_0) + N(w_{1/2})\bigr), \tag{12}$$
where $w_1 \approx w(\tau)$. Both (11) and (12) have second order of consistency, that is,
$$ \|z(\tau/2) - z_1\|_2 = \mathcal{O}(\tau^3), \qquad \|q(\tau/2) - q_1\|_2 = \mathcal{O}(\tau^3), \qquad \|w(\tau) - w_1\|_2 = \mathcal{O}(\tau^3). $$
Here $\|y\|_2$ is the usual 2-norm defined for any vector $y = [y_1, \ldots, y_{2n}]^T$ as $\|y\|_2 = \sqrt{\sum_{i=1}^{2n} y_i^2}$. It can be shown (see Q6 in Part III of the assignment) that the entire Strang splitting approximation $y_\ell \approx y(\ell\tau)$ has second order of consistency as well.
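Putting (11) and (12) together, one Strang step can be prototyped as follows. This is a dense NumPy sketch for illustration only (the function names and parameter values are our own; the banded structure exploited in Section 1.3 is ignored). Starting from the spatially constant steady state $u \equiv 1$, $v \equiv 0$, the step should reproduce it exactly:

```python
import numpy as np

def laplacian(n):
    """Periodic second-difference matrix Delta of (8)."""
    h = 1.0 / n
    D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    D[0, -1] = D[-1, 0] = 1.0
    return D / h**2

def cn_half_step(u, v, Mlu, Mlv, Mru, Mrv):
    """Crank-Nicolson step of length tau/2, eq. (11), applied blockwise."""
    return np.linalg.solve(Mlu, Mru @ u), np.linalg.solve(Mlv, Mrv @ v)

def heun_step(u, v, tau, F, k):
    """Order-2 Runge-Kutta (Heun) step of length tau, eq. (12)."""
    Nu = lambda u, v: -u * v * v + F - F * u
    Nv = lambda u, v: u * v * v - (F + k) * v
    uh, vh = u + tau * Nu(u, v), v + tau * Nv(u, v)
    return (u + 0.5 * tau * (Nu(u, v) + Nu(uh, vh)),
            v + 0.5 * tau * (Nv(u, v) + Nv(uh, vh)))

def strang_step(u, v, tau, Ms, F, k):
    """Lines 4-6 of Algorithm 1 for one time step."""
    u, v = cn_half_step(u, v, *Ms)
    u, v = heun_step(u, v, tau, F, k)
    return cn_half_step(u, v, *Ms)

n, tau, F, k = 64, 0.5, 0.04, 0.06     # example parameter values
Du, Dv = 2e-5, 1e-5
D, I = laplacian(n), np.eye(n)
Ms = (I - 0.25 * tau * Du * D, I - 0.25 * tau * Dv * D,   # Ml_u, Ml_v
      I + 0.25 * tau * Du * D, I + 0.25 * tau * Dv * D)   # Mr_u, Mr_v

u1, v1 = strang_step(np.ones(n), np.zeros(n), tau, Ms, F, k)
print(np.max(np.abs(u1 - 1.0)), np.max(np.abs(v1)))   # both ~0: steady state preserved
```

The steady-state check works because $\Delta$ annihilates constant vectors and $N_u(1,0) = N_v(1,0) = 0$, so every substep leaves $(u,v) = (1,0)$ unchanged.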
1.3 Improvements
Since $A$ is a block-diagonal matrix, we can keep the original vectors $u$ and $v$ throughout the computations in Algorithm 1. We also need to store only the current time step, so only a few extra vectors are actually needed for intermediate steps.
The Crank-Nicolson scheme (11) depends on four matrices, which we can denote
$$ M_l^u = I - \frac{\tau}{4}D_u\Delta, \quad M_r^u = I + \frac{\tau}{4}D_u\Delta, \quad M_l^v = I - \frac{\tau}{4}D_v\Delta, \quad M_r^v = I + \frac{\tau}{4}D_v\Delta. \tag{13}$$
For brevity, we can define the Courant numbers
$$ \tau_u = \frac{\tau D_u}{4h^2} \quad\text{and}\quad \tau_v = \frac{\tau D_v}{4h^2}. $$
It can be shown (see Q7 in Part III) that $M_l^*$ and $M_r^*$ (where $*$ stands for $u$ or $v$) can be written as a sum of a banded matrix and a rank-1 matrix,
$$ M_l^* = B_l^* - \tau_* cc^T, \qquad M_r^* = B_r^* + \tau_* cc^T, \qquad \text{where } c = [1\ 0\ \ldots\ 0\ 1]^T, \tag{14}$$
and
$$ B_l^* = \begin{pmatrix} 1+3\tau_* & -\tau_* & & \\ -\tau_* & 1+2\tau_* & \ddots & \\ & \ddots & \ddots & -\tau_* \\ & & -\tau_* & 1+3\tau_* \end{pmatrix}, \qquad B_r^* = \begin{pmatrix} 1-3\tau_* & \tau_* & & \\ \tau_* & 1-2\tau_* & \ddots & \\ & \ddots & \ddots & \tau_* \\ & & \tau_* & 1-3\tau_* \end{pmatrix} $$
are matrices with bandwidth 1. Both multiplication by a vector and the solution of a linear system can be computed much faster if the matrix is stored in an appropriate banded form. To solve linear systems with $M_l^*$ we can show (via the Sherman-Morrison formula) that these matrices can be inverted as
$$ (M_l^*)^{-1} = (B_l^* - \tau_* cc^T)^{-1} = (I + h_* c^T)(B_l^*)^{-1}, \tag{15}$$
where
$$ g_* = (B_l^*)^{-1} c, \qquad \rho_* = 1 - \tau_*\, c^T g_*, \qquad h_* = \frac{\tau_*}{\rho_*}\, g_*. \tag{16}$$
Hence, to solve a system $M_l^u u = r_u$ (and similarly for $v$), we can proceed as follows: in a setup phase (outside the timestepping loop),
1. calculate $g_u$, $g_v$ by solving the banded systems $B_l^u g_u = c$, $B_l^v g_v = c$,
2. calculate $\rho_u$, $\rho_v$ and $h_u$, $h_v$ as shown in (16).
Then, given the precomputed $B_l^*$ and $h_*$, the equations $M_l^u u = r_u$, $M_l^v v = r_v$ can be solved at every time step as follows:
1. calculate $\hat u$ by solving $B_l^u \hat u = r_u$ and $\hat v$ by solving $B_l^v \hat v = r_v$,
2. calculate $u = \hat u + h_u(c^T \hat u)$ and $v = \hat v + h_v(c^T \hat v)$.
Note that only solves with the banded, symmetric positive definite matrices $B_l^*$ are needed.
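The setup/solve split above amounts to the Sherman-Morrison formula. The following NumPy sketch (dense `np.linalg.solve` stands in for the banded solves one would use in practice, e.g. LAPACK's banded Cholesky routines; the value of $\tau_*$ is arbitrary) checks that the two-step procedure reproduces a direct solve with $M_l^*$:

```python
import numpy as np

n, ts = 8, 0.3                      # ts plays the role of tau_*
# banded part B_l^*: tridiagonal, with 1 + 3*ts in the two corner entries
Bl = (1 + 2 * ts) * np.eye(n) - ts * np.eye(n, k=1) - ts * np.eye(n, k=-1)
Bl[0, 0] = Bl[-1, -1] = 1 + 3 * ts
c = np.zeros(n); c[0] = c[-1] = 1.0
Ml = Bl - ts * np.outer(c, c)       # rank-1 correction, eq. (14)

# setup phase (once, outside the timestepping loop)
g = np.linalg.solve(Bl, c)          # a banded solve in practice
rho = 1.0 - ts * (c @ g)
h = (ts / rho) * g

# per time step: one banded solve plus a rank-1 update
r = np.arange(1.0, n + 1)           # some right-hand side
uhat = np.linalg.solve(Bl, r)
u = uhat + h * (c @ uhat)
print(np.max(np.abs(Ml @ u - r)))   # residual at roundoff level
```

The payoff is that the expensive factorisation work involves only the tridiagonal $B_l^*$; the rank-1 part costs just two dot products and a vector update per step.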
2 The Assignment
Part I: Basic Implementation
Copy over the files from the directory $MA40177_DIR/assignment1. They are designed to solve numerically the model problem defined above. In particular, the matrices $M_l^u$, $M_l^v$, $M_r^u$, $M_r^v$ defined in (13) are created in the subroutine in create_matrices.f90. However, the subroutines in crank_nicolson.f90, heun.f90 and timestepping.f90 needed for Algorithm 1 are currently empty. It will be your task to write them.
Q1: Crank-Nicolson step
Implement a subroutine crank_nicolson(n,Mlu,Mlv,Mru,Mrv,u,v) in the file crank_nicolson.f90 which gets passed the grid size n, the matrices $M_l^u$, $M_l^v$, $M_r^u$, $M_r^v$, and the vectors u and v, in this order. On input to the subroutine, the vectors u and v should contain the initial conditions of the corresponding components (that is, the top and bottom parts of $z_0$ or $q_0$ in Algorithm 1). On completion of the subroutine, the vectors u and v should contain the parts of $z_1$ or $q_1$, approximating the exact solution at time $\tau/2$. That is, this subroutine must compute the matrix-vector products $r_u = M_r^u u$ and $r_v = M_r^v v$, and solve the linear systems $M_l^u u = r_u$, $M_l^v v = r_v$.
Write a subroutine test_crank(n,Mlu,Mlv,Mru,Mrv) in the file test_crank.f90 which gets passed the problem size n and the matrices $M_l^u$, $M_l^v$, $M_r^u$, $M_r^v$, and tests that the solution computed by crank_nicolson is accurate. There are several possible tests you can design. The most straightforward one is to create vectors $u_{test}$, $v_{test}$ filled with random values (you can use the Fortran subroutine random_number(u_test)), and reverse the Crank-Nicolson scheme, computing $u_{in} = (M_r^u)^{-1}(M_l^u u_{test})$ and $v_{in} = (M_r^v)^{-1}(M_l^v v_{test})$. Then we call crank_nicolson with $u_{in}$ and $v_{in}$ as inputs, returning some output vectors $u$, $v$. Finally, we calculate and print the errors $\|u - u_{test}\|_2$, $\|v - v_{test}\|_2$. There is a simpler (but less general) test based on the properties of the matrix $\Delta$. You need only one test implemented in the final submission, but you should comment on the assumptions and outcomes of this test in your code.
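For orientation, the reversed-scheme test can be prototyped in a few lines of Python (an illustration of the idea only, not a substitute for the required Fortran subroutine; only the u component is shown and the parameter values are arbitrary):

```python
import numpy as np

n, tau, Du = 64, 1.0, 2e-5
h = 1.0 / n
D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
D[0, -1] = D[-1, 0] = 1.0
D /= h**2                                 # periodic Laplacian Delta
Ml = np.eye(n) - 0.25 * tau * Du * D
Mr = np.eye(n) + 0.25 * tau * Du * D

rng = np.random.default_rng(0)
u_test = rng.random(n)                    # random target solution
u_in = np.linalg.solve(Mr, Ml @ u_test)   # reversed Crank-Nicolson
u_out = np.linalg.solve(Ml, Mr @ u_in)    # the scheme under test
print(np.linalg.norm(u_out - u_test))     # should be at roundoff level
```

The test is self-validating: whatever the step does, composing it with its algebraic inverse must return the random input up to rounding error, so any sign, ordering, or solve bug shows up as an O(1) error.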
Run the test(s) by calling the test_crank subroutine once in the main program grayscott.f90 before the call to timestepping. Use suitable optimised BLAS/LAPACK library calls wherever this is possible. Only us
2023-02-11