
02610 Optimization and Data Fitting

Homework assignment 2

Hand in on DTU Learn before 6 November, 11pm. The overall page limit, excluding the Appendix, is 6 pages.

1    One-page report on the exercises for week 9 (40%)

In this one-page report, you should discuss the project that you did on Powell's problem. In your discussion, you should focus on the following questions:

1.  (5%) What makes Powell's problem difficult to solve?

2. Comparison of Newton's method, the Gauss-Newton method, and the Levenberg-Marquardt method.

(a)  (10%) What are the differences and similarities among these three methods?

(b)  (5%) What are their convergence rates?

(c)  (10%) What are the advantages and limitations of each method?

3.  (10%) In the last method, we simply changed the variables. What happened when we then called the L-M method? Why?

For the report, you do not need introduction and conclusion sections. You can illustrate your points with figures, formulas, tables, etc. But the page limit is ONE.

2    Solving a linear least squares problem (20%)

In this exercise, we compare solving a linear least squares problem via the normal equations and via QR factorization.

Try to fit f(t) = sin(t) on the interval [0, π] with polynomials of degree n, for n = 1, 2, 3, ..., 10. Here, we use equally spaced nodes.

1.  (10%) Set up the least squares problem

min_{x ∈ R^{n+1}} ∥Ax − b∥_2^2 .

What are A, x, and b, respectively? You can explain using their mathematical formulas or by including your Matlab code.
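As a sketch of what setting up the system can look like (the assignment asks for Matlab; this is an equivalent plain-Python illustration, and the number of nodes m = 30 is an assumption, since the assignment does not fix it):

```python
import math

def build_lsq_system(n, m=30):
    """Build A (m x (n+1)) and b (length m) for fitting sin(t) on [0, pi]
    with a degree-n polynomial at m equally spaced nodes.
    Column j of A holds t_i^j, so x = [x_0, ..., x_n] are the coefficients
    in the model sum_j x_j * t^j."""
    t = [i * math.pi / (m - 1) for i in range(m)]   # equally spaced nodes
    A = [[ti ** j for j in range(n + 1)] for ti in t]
    b = [math.sin(ti) for ti in t]
    return A, b
```

The columns of A form a Vandermonde-type matrix, which is exactly what makes cond(A^T A) grow quickly with n.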

2. Solve the normal equations with backslash in Matlab. Save the norms of the residuals and the condition numbers of A^T A (the system matrix in the normal equations) for n = 1, 2, 3, ..., 10.

3. Solve the least squares problem with the QR factorization introduced in the course. Save the norms of the residuals and the condition numbers of R (the system matrix in Rx = Q^T b that you need to solve) for n = 1, 2, 3, ..., 10.
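The two solution paths in items 2 and 3 can be sketched as follows. This is a plain-Python illustration rather than the Matlab code the assignment asks for, and modified Gram-Schmidt here stands in for whichever QR variant was introduced in the course:

```python
import math

def gauss_solve(M, v):
    # Solve the square system M x = v by Gaussian elimination with partial pivoting.
    n = len(M)
    T = [M[i][:] + [v[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(T[r][c]))
        T[c], T[p] = T[p], T[c]
        for r in range(c + 1, n):
            f = T[r][c] / T[c][c]
            for j in range(c, n + 1):
                T[r][j] -= f * T[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (T[r][n] - sum(T[r][j] * x[j] for j in range(r + 1, n))) / T[r][r]
    return x

def solve_normal_eq(A, b):
    # Item 2: form A^T A x = A^T b and solve the square system.
    m, k = len(A), len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(k)]
           for i in range(k)]
    Atb = [sum(A[r][i] * b[r] for r in range(m)) for i in range(k)]
    return gauss_solve(AtA, Atb)

def solve_qr(A, b):
    # Item 3: thin QR by modified Gram-Schmidt, then back-substitute R x = Q^T b.
    m, k = len(A), len(A[0])
    Q, R = [], [[0.0] * k for _ in range(k)]
    for j in range(k):
        v = [A[r][j] for r in range(m)]
        for i in range(j):
            R[i][j] = sum(Q[i][r] * v[r] for r in range(m))
            v = [v[r] - R[i][j] * Q[i][r] for r in range(m)]
        R[j][j] = math.sqrt(sum(t * t for t in v))
        Q.append([t / R[j][j] for t in v])
    Qtb = [sum(Q[i][r] * b[r] for r in range(m)) for i in range(k)]
    x = [0.0] * k
    for r in range(k - 1, -1, -1):
        x[r] = (Qtb[r] - sum(R[r][j] * x[j] for j in range(r + 1, k))) / R[r][r]
    return x
```

The point of the comparison is that cond(A^T A) = cond(A)^2 while cond(R) = cond(A), which you should see directly in the saved condition numbers.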

4.  (10%) Make two figures with semilogy in Matlab, plotting the norms of the residuals and the condition numbers as functions of n, respectively. Note that each figure should contain two curves, one for each method. Include the figures in your answer, and state your conclusions.

3    Linear least squares with weights (15%)

In this exercise, we determine the parameters in the fit function

ϕ(t) = x_1 e^{−27t} + x_2 e^{−8t} + x_3

to fit an NMR signal. We know that the exact parameters are x* = [1.27, 2.04, 0.3]^T. In addition, we know that the first 10 data points had Gaussian noise with mean 0 and standard deviation 0.5 added, while the remaining data points had Gaussian noise with mean 0 and standard deviation 0.1 added.

1. Load the data in data_exe3_2022.mat. Compute the least squares fit without taking the difference in noise levels into account. What are the solutions for all three parameters? What is the 2-norm of the absolute error ∥e∥_2 = ∥x − x*∥_2?

2. Now we take the difference in noise levels into account and apply a weighted linear least squares fit. According to the standard deviations of the noise, how should we add weights to the problem, i.e., what is the weight matrix? Compute the weighted least squares solution and the 2-norm of the absolute error.
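A common convention, assumed here, is to weight each row by w_i = 1/σ_i, i.e. W = diag(1/σ_1, ..., 1/σ_m), so that every scaled residual has unit-variance noise. A minimal Python sketch of applying such weights to a generic system (the assignment itself expects Matlab) is:

```python
def weighted_rows(A, b, sigma):
    """Scale row i of the least squares system by w_i = 1/sigma_i,
    equivalent to multiplying by W = diag(1/sigma_1, ..., 1/sigma_m).
    Solving min ||W(Ax - b)||_2 then trusts low-noise rows more."""
    Aw = [[a / s for a in row] for row, s in zip(A, sigma)]
    bw = [bi / si for bi, si in zip(b, sigma)]
    return Aw, bw

# Noise levels from this exercise (m = number of data points):
# sigma = [0.5] * 10 + [0.1] * (m - 10)
```

After this row scaling, the weighted solution is obtained by solving the ordinary least squares problem on (Aw, bw).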

3. Compare the solutions without the weights and with the weights. Which one is more accurate?

4    Meyer's problem (25%)

This exercise illustrates that the algorithms for solving least squares problems normally work best when the problem is scaled so that all the nonzero components of x are of the same order of magnitude.

Consider Meyer’s problem:

r_i(x) = y_i − x_1 exp( x_2 / (t_i + x_3) ),        i = 1, ..., 16,

with t_i = 45 + 5i and

  i     y_i        i     y_i        i     y_i
  1   34780        7   11540       12    5147
  2   28610        8    9744       13    4427
  3   23650        9    8261       14    3820
  4   19630       10    7030       15    3307
  5   16370       11    6005       16    2972
  6   13720

1.  (10%) Calculate the Jacobian J, and implement a Matlab function that returns the residual r and J for given x, t, and y. Please include this Matlab function in your answers.
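For illustration only (the hand-in must be a Matlab function), a Python sketch of the residual and Jacobian, assuming the residual convention r_i = y_i − x_1 exp(x_2/(t_i + x_3)), is:

```python
import math

def meyer_res_jac(x, t, y):
    """Residual r_i = y_i - x1*exp(x2/(t_i + x3)) and Jacobian J for
    Meyer's problem; row i of J holds the partial derivatives
    [dr_i/dx1, dr_i/dx2, dr_i/dx3]."""
    x1, x2, x3 = x
    r, J = [], []
    for ti, yi in zip(t, y):
        d = ti + x3
        E = math.exp(x2 / d)
        r.append(yi - x1 * E)
        # d r_i / d x1 = -E
        # d r_i / d x2 = -x1 * E / d
        # d r_i / d x3 = +x1 * E * x2 / d^2   (chain rule on x2/(t_i + x3))
        J.append([-E, -x1 * E / d, x1 * E * x2 / d ** 2])
    return r, J
```

A quick finite-difference check of J against r is a good habit before handing the function to an optimizer.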

2.  (5%) Call the Matlab function Levenberg_Marquardt_yq.m to apply the Levenberg-Marquardt method to solve the minimization problem

min_x f(x) = (1/2) ∥r(x)∥_2^2 .

Set the starting point to x_0 = [0.02, 4000, 250]^T and the initial λ to ∥J(x_0)^T J(x_0)∥_2. Plot f(x_k) and ∥∇f(x_k)∥_2 as functions of the iteration number. How many iterations did the method need to meet the default stopping criteria?
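For intuition about what a Levenberg-Marquardt iteration does internally, here is a sketch of a single damped step in Python. This is not the code of the provided Levenberg_Marquardt_yq.m, whose λ-update strategy and stopping criteria may differ; it only shows the linear system (J^T J + λI)h = −J^T r that defines the step:

```python
def lm_step(J, r, lam):
    """One Levenberg-Marquardt step: solve (J^T J + lam*I) h = -J^T r.
    Large lam gives a short, gradient-like step; lam -> 0 recovers
    the Gauss-Newton step."""
    m, n = len(J), len(J[0])
    A = [[sum(J[k][i] * J[k][j] for k in range(m)) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    g = [-sum(J[k][i] * r[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting on the small n x n system.
    M = [A[i][:] + [g[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda rr: abs(M[rr][c]))
        M[c], M[p] = M[p], M[c]
        for rr in range(c + 1, n):
            f = M[rr][c] / M[c][c]
            for j in range(c, n + 1):
                M[rr][j] -= f * M[c][j]
    h = [0.0] * n
    for i in range(n - 1, -1, -1):
        h[i] = (M[i][n] - sum(M[i][j] * h[j] for j in range(i + 1, n))) / M[i][i]
    return h
```

In a full method, λ is decreased after successful steps and increased after rejected ones, which is exactly what you observe in the iteration plots.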

Now we use an alternative formulation, which is

ρ_i(z) = 10^{−3} y_i − z_1 exp( 10 z_2 / (u_i + z_3) − 13 ),        i = 1, ..., 16,

with ui = 0.45 + 0.05i. The formulation corresponds to

z = [10^{−3} e^{13} x_1, 10^{−3} x_2, 10^{−2} x_3]^T .
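As a quick sanity check (a sketch, assuming the standard scaled form of Meyer's problem with u_i = t_i/100), the change of variables can be verified by substitution:

```latex
% With t_i = 100\,u_i, \; x_1 = 10^{3} e^{-13} z_1, \; x_2 = 10^{3} z_2, \; x_3 = 10^{2} z_3:
\frac{x_2}{t_i + x_3} = \frac{10^{3} z_2}{100\,(u_i + z_3)} = \frac{10\, z_2}{u_i + z_3},
\qquad
10^{-3}\, r_i(x) = 10^{-3} y_i - z_1 \exp\!\left(\frac{10\, z_2}{u_i + z_3} - 13\right) = \rho_i(z).
```

So each ρ_i is just r_i scaled by 10^{−3}, and the minimizer is unchanged up to the variable transformation; only the conditioning of the problem improves.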

3.  (5%) Calculate the Jacobian J_s, and implement a Matlab function that returns the residual ρ and J_s for given z, u, and y. Please include this Matlab function in your answers.

4.  (5%) Call the Matlab function Levenberg_Marquardt_yq.m to apply the Levenberg-Marquardt method to solve the minimization problem

min_z f(z) = (1/2) ∥ρ(z)∥_2^2 .

Set the starting point to z_0 = [8.85, 4, 2.5]^T and the initial λ to ∥J_s(z_0)^T J_s(z_0)∥_2. Plot f(z_k) and ∥∇f(z_k)∥_2 as functions of the iteration number. How many iterations did the method need to meet the default stopping criteria now?