MATH0033 Numerical Methods 2018-2019
1. Consider the numerical solution of f(x) = 0 where f(x) = e^(x/2) − x − 1, which has a unique non-zero root α satisfying α ∈ [2, 3].
(a) Show that it is possible to approximate the root α using the bisection method with initial interval [2, 3], and calculate the number of iterations required for the absolute error to be smaller than 10^(−5).
You may assume without proof that 2^(4/3) < e < 3.
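As an illustrative check of part (a), a short Python sketch, assuming the usual a priori bound |xk − α| ≤ (b − a)/2^(k+1) for the midpoint after k bisections (the exam's counting convention may differ by one):

```python
import math

def bisection(f, a, b, tol):
    """Halve [a, b] while the guaranteed midpoint error exceeds tol."""
    k = 0
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # sign change in [a, m]
            b = m
        else:
            a = m
        k += 1
    return (a + b) / 2, k

f = lambda x: math.exp(x / 2) - x - 1   # f(2) = e - 3 < 0, f(3) = e^1.5 - 4 > 0

root, iters = bisection(f, 2.0, 3.0, 1e-5)

# The bound (b - a)/2^(k+1) < 1e-5 with b - a = 1 predicts the iteration
# count without running the method:
k_required = math.ceil(math.log2(1e5) - 1)
print(root, iters, k_required)
```

With this stopping rule the observed and predicted iteration counts agree.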
(b) Let φ(x) = 2 log (x + 1) for x > -1.
Show that x = φ(x) is consistent with f(x) = 0, and, using an appropriate theorem from lectures, prove that the fixed point iteration xk+1 = φ(xk) converges linearly to α for all initial guesses x0 ∈ [2, 3], and satisfies an error bound
|xk − α| ≤ Ck |x0 − α|, k = 0, 1, 2, . . . ,
where 0 < Ck < 1 is a k-dependent constant you should specify.
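For part (b), note that φ′(x) = 2/(x + 1), so |φ′| ≤ 2/3 on [2, 3]; a minimal sketch of the resulting contraction:

```python
import math

phi = lambda x: 2 * math.log(x + 1)

# On [2, 3], |phi'(x)| = 2/(x + 1) <= 2/3 < 1, so the fixed point iteration
# contracts with factor 2/3 per step: |x_k - alpha| <= (2/3)**k * |x_0 - alpha|.
x = 2.0
for _ in range(80):          # (2/3)**80 is below double precision
    x = phi(x)

alpha = x
print(alpha, math.exp(alpha / 2) - alpha - 1)   # residual of f at the limit
```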
(c) Let ϕ : R → R be continuously differentiable and let β ∈ R be a fixed point of ϕ s.t. |ϕ′(β)| > 1. Prove that the iteration xk+1 = ϕ(xk), k = 0, 1, 2, . . ., does not converge to β for any initial guess x0 ∈ R, unless xk = β for some k. Hint: argue by contradiction, using the mean value theorem.
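Part (c) can be seen concretely with, say, ϕ(x) = x^2, whose fixed point β = 1 has ϕ′(1) = 2 (an illustrative choice, not taken from the paper):

```python
# phi(x) = x**2 fixes beta = 1 with |phi'(1)| = 2 > 1: iterates started
# arbitrarily close to (but not at) beta are driven away from it, as the
# mean-value-theorem argument in (c) predicts.
phi = lambda x: x * x
x = 1.0 + 1e-8
for _ in range(30):
    x = phi(x)
print(x)   # of order 1e4: far from the fixed point
```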
2. Consider the stationary iterative method
xk+1 = Bxk + c, k = 0, 1, 2, . . . ,
for the numerical solution of the linear system
Ax = b.
Here A, B ∈ Rn×n and x, b, c ∈ Rn for some n ∈ N.
(a) Given a vector norm ‖·‖ : Rn → R, write down the definition of the induced matrix norm ‖·‖ : Rn×n → R.
(b) Assuming the method is consistent, prove that if ‖B‖ < 1 then ‖x − xk‖ → 0 as k → ∞.
(c) What does it mean for the matrix A to be strictly diagonally dominant (SDD) by rows?
(d) Write down the definition of the iteration matrix B for the Jacobi method. Prove that if A is SDD then the Jacobi method converges.
You may use without proof the fact that the matrix norm induced by the vector norm ‖v‖∞ := max_(i=1,...,n) |vi| satisfies

‖M‖∞ = max_(i=1,...,n) Σ_(j=1)^n |mij|,  M = (mij) ∈ Rn×n.
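A numerical sanity check of (d) on a small SDD example (the matrix below is illustrative, not from the paper):

```python
import numpy as np

# For an SDD matrix, the Jacobi iteration matrix B_J = I - D^{-1} A has
# ||B_J||_inf < 1 (max row sum of |entries|), hence the method converges.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])   # strictly diagonally dominant by rows
b = np.array([1.0, 2.0, 3.0])

D = np.diag(np.diag(A))
B_J = np.eye(3) - np.linalg.solve(D, A)      # Jacobi iteration matrix
norm_inf = np.abs(B_J).sum(axis=1).max()     # induced infinity norm

x = np.zeros(3)
for _ in range(100):
    x = B_J @ x + np.linalg.solve(D, b)      # x_{k+1} = B_J x_k + D^{-1} b

print(norm_inf, np.linalg.norm(A @ x - b))
```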
(e) Now consider the system of nonlinear equations
f1(x1, x2) = x1^2 − x2 = 0,
f2(x1, x2) = x2^2 + x1 − 1 = 0,
which can be written as f(x) = 0 with f = (f1, f2)^T and x = (x1, x2)^T. Write down the Newton iteration for the solution of the system f(x) = 0. Give a sufficient condition on the initial guess x^(0) guaranteeing that the first Newton step can be computed using a Jacobi iteration.
One root of the system lies in the quadrant x1 > 0, x2 > 0. Does the Newton iteration converge linearly or quadratically to this root (for a sufficiently good initial guess)?
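A sketch of the Newton iteration for this system, with Jacobian Jf(x) = [[2 x1, −1], [1, 2 x2]], started in the positive quadrant:

```python
import numpy as np

def F(x):
    return np.array([x[0]**2 - x[1], x[1]**2 + x[0] - 1.0])

def JF(x):
    # Jacobian of f: rows are the gradients of f1 and f2
    return np.array([[2 * x[0], -1.0],
                     [1.0, 2 * x[1]]])

x = np.array([1.0, 1.0])    # initial guess in the quadrant x1, x2 > 0
for _ in range(20):
    x = x - np.linalg.solve(JF(x), F(x))   # Newton step: solve JF d = F

print(x, F(x))
```

The Jacobian is nonsingular at the root in the positive quadrant (its determinant 4 x1 x2 + 1 is positive there), so for a good enough initial guess Newton converges quadratically to a simple root.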
3. (a) Consider the linear system Mx = b where

M = ( 3  2
      1  µ ),

µ ≠ 0 is a real parameter, and b ∈ R^2.
Write down the Jacobi and the Gauss-Seidel methods for the iterative solution of the linear system in the form Pxk+1 = Nxk + b where P and N are matrices you should specify. Without computing the iteration matrices, give a sufficient condition on the parameter µ so that the Jacobi and the Gauss-Seidel methods are both convergent.
(b) Compute the iteration matrices BJ and BGS for the Jacobi and the Gauss-Seidel methods respectively. Using information from the iteration matrices, determine a necessary and sufficient condition on µ for the Jacobi and Gauss-Seidel methods to be convergent. Which method is expected to converge faster?
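A numerical cross-check of (b), assuming the matrix in part (a) reads M = (3 2; 1 µ) and taking the sample value µ = 1:

```python
import numpy as np

mu = 1.0                                  # sample value; any mu != 0 works here
M = np.array([[3.0, 2.0], [1.0, mu]])     # assumed form of the matrix in (a)

D = np.diag(np.diag(M))
L = np.tril(M, -1)
U = np.triu(M, 1)

B_J = -np.linalg.solve(D, L + U)          # Jacobi iteration matrix
B_GS = -np.linalg.solve(D + L, U)         # Gauss-Seidel iteration matrix

rho_J = np.abs(np.linalg.eigvals(B_J)).max()
rho_GS = np.abs(np.linalg.eigvals(B_GS)).max()
print(rho_J, rho_GS)   # for a 2x2 (tridiagonal) M: rho_GS = rho_J**2
```

Both spectral radii are below 1 for this µ, and the Gauss-Seidel radius is the square of the Jacobi one, so Gauss-Seidel is expected to converge roughly twice as fast.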
(c) Let A be a symmetric positive definite matrix. Consider the gradient method for the solution of Ax = b, defined by xk+1 = xk + αk rk, where rk = b − Axk and αk is the step length. Prove that the optimal choice of αk, minimising ‖ek+1‖A, where ek = x − xk and ‖v‖A^2 := v^T Av = (Av, v), is
αk = ‖rk‖2^2 / (rk^T Ark).

Hint: First show that Aek = rk and rk+1 = rk − αk Ark.
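A sketch of the gradient method with this optimal step (the SPD matrix is illustrative, not from the paper):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])

x = np.zeros(2)
for _ in range(100):
    r = b - A @ x                        # r_k = b - A x_k = A e_k
    if r @ r < 1e-30:
        break                            # converged to machine precision
    alpha = (r @ r) / (r @ (A @ r))      # optimal step: minimises ||e_{k+1}||_A
    x = x + alpha * r

print(x, np.linalg.norm(b - A @ x))
```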
4. For the solution of the Cauchy problem

y′(t) = f(t, y(t)), t > 0,
y(0) = y0,

consider the following family of numerical methods, parametrized by 0 ≤ θ ≤ 1:

un+1 = un + h [θ f(tn+1, un+1) + (1 − θ) f(tn, un)], n = 0, 1, 2, . . . , (*)

where h > 0 is the time step, tn = nh, and un denotes the approximation of y(tn).
(a) For which θ is the method explicit?
(b) State the values of θ for which (*) coincides with the forward Euler, backward
Euler and Crank-Nicolson methods respectively.
(c) State the definition of truncation error for a general one-step method. Show that the method (*) is of second order if θ = 1/2, and of first order otherwise. State clearly the smoothness assumptions under which your analysis is valid.
(d) State the definition of absolute stability for a general one-step method. Prove that the method (*) is unconditionally absolutely stable if θ ≥ 1/2 and conditionally absolutely stable if θ < 1/2. In the latter case, determine a necessary and sufficient condition on h for the method to be absolutely stable.
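The order claims in (c) can be checked numerically. The sketch below assumes (*) is the standard θ-method un+1 = un + h[θ fn+1 + (1 − θ) fn], applied to the linear test problem y′ = λy, for which the implicit stage has a closed form:

```python
import math

def theta_method(lam, theta, h, T):
    """Advance u' = lam*u from u(0) = 1 to t = T with the theta-method:
    u_{n+1} = u_n * (1 + (1 - theta)*h*lam) / (1 - theta*h*lam)."""
    u = 1.0
    for _ in range(round(T / h)):
        u = u * (1 + (1 - theta) * h * lam) / (1 - theta * h * lam)
    return u

lam, T = -1.0, 1.0
exact = math.exp(lam * T)

# Halving h should divide the error by ~4 when theta = 1/2 (second order)
# and by ~2 otherwise (first order).
ratios = {}
for theta in (0.0, 0.5, 1.0):
    e1 = abs(theta_method(lam, theta, 0.1, T) - exact)
    e2 = abs(theta_method(lam, theta, 0.05, T) - exact)
    ratios[theta] = e1 / e2
print(ratios)
```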
5. (a) Define the 2-norm condition number K2(A) of an invertible matrix A ∈ Rn×n. Show that if Ax = b with b ≠ 0 then for any x̂ ∈ Rn,

‖x̂ − x‖2 / ‖x‖2 ≤ K2(A) ‖Ax̂ − b‖2 / ‖b‖2.
(b) Write down a definition involving eigenvalues of what it means for a matrix A ∈ Rn×n to be symmetric positive definite (SPD).
Prove that if A is SPD then A is invertible, A^(−1) is SPD, and

K2(A) = λmax / λmin,

where λmax and λmin are the maximum and minimum eigenvalues of A.
(c) Consider the second order boundary value problem

−y″(x) + µ y(x) = f(x), 0 < x < 1,  y(0) = y(1) = 0,

where µ > 0 and f is continuous on [0, 1], and the finite difference method

−(un+1 − 2un + un−1)/h^2 + µ un = fn,  n = 1, . . . , N − 1,  u0 = uN = 0,

where N ∈ N, h = 1/N, xn = nh, fn = f(xn) and un is the approximation of y(xn), for each n = 0, . . . , N.
Write the method as an (N - 1)-by-(N - 1) linear system of the form Au = f , specifying the system matrix A and the vectors u and f .
(d) In the special case µ = 0 the eigenvalues of A are given by the formula

λn = (2/h^2)(1 + cos(nπ/N)),  n = 1, . . . , N − 1.
Use this fact to prove that

K2(A) = 4N^2/π^2 + O(1),  N → ∞.
In light of (a), what does this tell you about the usefulness of a residual-based stopping criterion when solving the system Au = f using an iterative method, when N is large?
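Numerically (assuming, as in (c), that A = h^(−2) tridiag(−1, 2, −1) when µ = 0), the predicted growth of the condition number is visible already for moderate N:

```python
import numpy as np

# K_2(A) grows like 4 N^2 / pi^2, so for large N a small relative residual
# no longer guarantees a small relative error in the computed solution.
for N in (10, 50, 200):
    h = 1.0 / N
    A = (2.0 * np.eye(N - 1)
         - np.eye(N - 1, k=1)
         - np.eye(N - 1, k=-1)) / h**2
    K2 = np.linalg.cond(A, 2)
    ratio = K2 / (4 * N**2 / np.pi**2)
    print(N, K2, ratio)     # ratio tends to 1 as N grows
```

So the bound of (a) degrades like N^2: a residual-based stopping criterion can understate the relative error by roughly that factor.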