ECON61001 Final Exam 2021-22
SECTION A
1.(a) Consider the linear regression model
y = Xβ0 + u, (1)
where y is a 25×1 vector of observations on the dependent variable, X is a 25×5 data matrix of observations on the explanatory variables, β0 is a 5×1 vector of unknown regression coefficients and u is a 25×1 vector containing the error term. Suppose that X = (ι25, X2) where ι25 is a 25×1 vector of ones and X2 is a 25×4 matrix, and β0 = (β0,1, β0,2′)′ where β0,1 is a scalar and β0,2 is a 4×1 vector. Let F be the F-statistic for testing H0: β0,2 = 04 versus H1: β0,2 ≠ 04, where 04 is the 4×1 null vector. If the p-value for this test is 0.05 then what is the adjusted R², R̄², for the estimated version of (1)? Be sure to justify your answer carefully. [5 marks]
1.(b) Define: (i) the size of a test; (ii) the critical region of a test. [2 marks]
1.(c) If a test is unbiased then what property does it possess? [1 mark]
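As an illustrative aside (not part of the exam), the calculation question 1(a) asks for can be sketched numerically: a p-value of 0.05 pins down F at the 95th percentile of the F(4, 20) distribution, and the identity F = [R²/q]/[(1 − R²)/(n − k)] then recovers R² and hence R̄². The sketch below assumes scipy is available.

```python
# Back out the adjusted R^2 implied by an F-test p-value of 0.05.
# n = 25 observations, k = 5 regressors, q = 4 restrictions tested.
from scipy import stats

n, k, q = 25, 5, 4
F = stats.f.ppf(0.95, q, n - k)         # p-value 0.05 => F is the 95th percentile
R2 = q * F / (q * F + (n - k))          # invert F = [R^2/q] / [(1-R^2)/(n-k)]
adj_R2 = 1 - (1 - R2) * (n - 1) / (n - k)
print(round(adj_R2, 3))
```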
2.(a) Define x = (1, w)′ where w is a random variable with mean μw and variance σw². Show that if E[xx′] = Q, a finite positive definite matrix of constants, then σw² is a finite positive constant. [3 marks]
2.(b) Define X to be a T×k matrix of constants with rank(X) = k, Z to be a T×q matrix of constants with rank(Z) = q and rank(X′Z) = k. Define Pz = Z(Z′Z)⁻¹Z′ and

D = (X′PzX)⁻¹ − (X′X)⁻¹.

Show that D is a positive semi-definite matrix. [5 marks]
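The claimed property of D can be sanity-checked numerically (this is an illustration, not the requested proof): for randomly drawn X and Z satisfying the rank conditions, every eigenvalue of D should be non-negative up to rounding error.

```python
# Numeric check that D = (X'PzX)^{-1} - (X'X)^{-1} is positive semi-definite
# for random full-rank X (T x k) and Z (T x q) with q >= k.
import numpy as np

rng = np.random.default_rng(0)
T, k, q = 50, 3, 5
X = rng.standard_normal((T, k))
Z = rng.standard_normal((T, q))
Pz = Z @ np.linalg.inv(Z.T @ Z) @ Z.T          # projection onto the column space of Z
D = np.linalg.inv(X.T @ Pz @ X) - np.linalg.inv(X.T @ X)
eigs = np.linalg.eigvalsh(D)                   # D is symmetric, so eigvalsh applies
print(eigs.min())
```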
3. In their paper entitled “Does compulsory school attendance affect schooling and earnings?” published in the Quarterly Journal of Economics, Angrist & Krueger report estimates of an earnings function of the form
log(wi) = edi β0,1 + mi′β0,2 + ui, (2)
based on 1980 US Census data for men born in years 1940-49 where: wi is the weekly wage earned by individual i; edi is the number of years of education of individual i; mi = (1, Ti,1, Ti,2, . . . , Ti,9)′ is a 10×1 vector with Ti,j a dummy variable that equals one if individual i is born in year 194j; and ui represents the error term. Their estimations are based on 486,875 observations. In this question, you consider Instrumental Variables (IV) estimation of (2) based on the population moment condition
E[ziui (β0 )] = 0, (3)
where zi′ = (ni′, mi′), ni′ = (Di,1·mi′, Di,2·mi′, Di,3·mi′) and Di,s is a dummy variable that takes the value one if individual i is born in the sth quarter of the year, and ui(β) = log(wi) − xi′β, xi′ = (edi, mi′) and β = (β1, β2′)′.
(a) If zi satisfies the orthogonality condition and the relevance condition then what does this imply about the relationship between log(wi) and the quarter of birth of individual i, that is, {Di,s; s = 1, . . . , 4}? Word limit: 100 words [2 marks]
(b) Angrist & Krueger (1991) report βˆ1 = 0.0553 as the IV estimator of β0,1. Using this result, calculate the estimated impact of an extra year of education on the weekly wage of an individual holding all other factors constant. [2 marks]
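As an illustrative note on the semi-elasticity calculation in part (b): in a log-linear model the coefficient is only an approximation to the percentage effect, and the exact effect is 100·(exp(βˆ1) − 1). A minimal sketch:

```python
# Percentage effect of one extra year of education on the weekly wage,
# given the reported IV estimate beta1 = 0.0553 on log(w).
import math

beta1 = 0.0553
approx = 100 * beta1                      # log-approximation, in percent
exact = 100 * (math.exp(beta1) - 1)      # exact semi-elasticity, in percent
print(approx, exact)
```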
(c) Write down the first-stage regression associated with the IV estimation of this model and explain how it can be used to assess whether or not zi is a valid instrument. [4 marks]
4. Consider the linear regression model
y = Xβ0 + u,
where y is a T×1 vector of observations on a time series, X is a T×k matrix of explanatory variables that are fixed in repeated samples with rank(X) = k, and u is a T×1 error vector with E[u] = 0 and Var[u] = Σ with (i,j)th element σi,j. Suppose that

ut = at + θat−1, (4)

where θ is a constant and {at} is a white noise sequence with variance σa². Using (4) and the properties of a white noise sequence, derive σi,j as a function of σa² and θ for all i, j = 1, 2, . . . , T. [8 marks]
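The MA(1) structure in (4) can be checked deterministically (an illustration, not the requested derivation): stacking the errors as u = Ma for a banded matrix M gives Var[u] = σa²MM′, whose diagonal should equal σa²(1 + θ²), first off-diagonal θσa², and all remaining entries zero.

```python
# Deterministic check of the MA(1) error covariances via u = M a,
# so that Var[u] = sigma_a^2 * M M' when Var[a] = sigma_a^2 * I.
import numpy as np

theta, sigma2, T = 0.7, 2.0, 6
M = np.zeros((T, T + 1))       # columns index a_0, ..., a_T
for t in range(T):
    M[t, t + 1] = 1.0          # u_{t+1} loads a_{t+1} with weight 1
    M[t, t] = theta            # ... and a_t with weight theta
Sigma = sigma2 * M @ M.T
print(Sigma[0, 0], Sigma[0, 1], Sigma[0, 2])
```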
5. Consider the linear regression model
y = Xβ0 + u,
where y and u are T×1 vectors, X is a T×k matrix, and β0 is a k×1 vector of unknown regression coefficients. Assume that X is fixed in repeated samples with rank(X) = k, and u ∼ N(0, σ0²Ω) where σ0² is an unknown positive constant and Ω is a specified - that is, known - positive definite matrix of constants. Show that the Maximum Likelihood estimator of β0 is the Generalized Least Squares estimator of β0. [8 marks]
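The equivalence this question asks you to prove can be illustrated numerically (not a proof): for fixed Ω, maximizing the Gaussian log-likelihood over β is equivalent to minimizing the GLS criterion (y − Xβ)′Ω⁻¹(y − Xβ), so a numerical optimizer applied to that criterion should recover the closed-form GLS estimator. The sketch assumes scipy is available and uses a diagonal Ω for simplicity.

```python
# Numeric check: Gaussian MLE of beta (with Omega known) equals GLS.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, k = 40, 2
X = rng.standard_normal((T, k))
Omega = np.diag(rng.uniform(0.5, 2.0, T))   # known positive definite matrix
Omega_inv = np.linalg.inv(Omega)
beta0 = np.array([1.0, -0.5])
y = X @ beta0 + np.linalg.cholesky(Omega) @ rng.standard_normal(T)

# Closed-form GLS estimator
b_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)

# Maximizing the log-likelihood over beta <=> minimizing the GLS criterion
obj = lambda b: (y - X @ b) @ Omega_inv @ (y - X @ b)
b_mle = minimize(obj, np.zeros(k)).x
print(b_gls, b_mle)
```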
SECTION B
6.(a) Consider the linear regression model
y = Xβ0 + u, (5)
where y is a T×1 vector with tth element yt, X is a T×k matrix with tth row xt′, β0 is a vector of regression coefficients with lth element β0,l, and u is the T×1 error vector with tth element ut. Assume that the following conditions hold: y is generated via (5); X is fixed in repeated samples with rank(X) = k; and u has a normal distribution with mean 0T, a T×1 vector of zeros, and variance σ0²IT. Let βˆT be the OLS estimator of β0, that is, βˆT = (X′X)⁻¹X′y; and let βˆT,l be the lth element of βˆT.
(i) Show that
βˆT,2 − βˆT,3 ∼ N( β0,2 − β0,3, σ0²(m2,2 + m3,3 − 2m2,3) ),

where mi,j is the (i,j)th element of (X′X)⁻¹. [2 marks]
(ii) Under these assumptions, the null hypothesis H0: β0,2 = β0,3 versus the alternative H1: β0,2 ≠ β0,3 can be tested using the following decision rule:

reject H0 at the 100α% significance level if SN > τN−k(1 − α/2),

where

SN = |βˆN,2 − βˆN,3| / dˆN,

where dˆN² is the OLS-based estimator of Var[βˆN,2 − βˆN,3] and τN−k(1 − α/2) is the 100(1 − α/2)th percentile of the Student's t distribution with N − k degrees of freedom.
Given that k = 3, βˆN = (1.136, 0.913, 0.952)′, y′y = 328.071, N = 30, and the values of (X′X)⁻¹ (a 3×3 matrix) and X′y (a 3×1 vector) [entries not legible in this copy], perform the test using a 5% significance level. Be sure to explain your calculations clearly. [8 marks]
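Since the printed matrices do not reproduce cleanly here, the following sketch runs the mechanics of the test with clearly hypothetical values for (X′X)⁻¹ and s²; only βˆN, N and k are taken from the question. It assumes scipy is available for the t critical value.

```python
# Illustrative t-test for H0: beta_2 = beta_3 with HYPOTHETICAL inputs
# where the exam's printed matrices are not reproduced.
import numpy as np
from scipy import stats

N, k = 30, 3
beta_hat = np.array([1.136, 0.913, 0.952])
XtX_inv = np.array([[0.05, -0.01, 0.00],     # hypothetical (X'X)^{-1}
                    [-0.01, 0.04, -0.01],
                    [0.00, -0.01, 0.05]])
s2 = 1.2                                      # hypothetical s^2 = RSS/(N - k)

var_diff = s2 * (XtX_inv[1, 1] + XtX_inv[2, 2] - 2 * XtX_inv[1, 2])
S = abs(beta_hat[1] - beta_hat[2]) / np.sqrt(var_diff)
crit = stats.t.ppf(0.975, N - k)              # two-sided 5% critical value
reject = S > crit
print(S, crit, reject)
```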
(iii) Sketch the power curve for the test in part (a)(ii). Make sure you label the axes carefully. [5 marks]
6.(b) Let A be an m×m nonsingular matrix, b be an m×1 vector and c be a scalar such that 1 + cb′A⁻¹b ≠ 0. Show that

[A + cbb′]⁻¹ = A⁻¹ − ( c / (1 + cb′A⁻¹b) ) A⁻¹bb′A⁻¹.

[6 marks]
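This rank-one update identity (the Sherman-Morrison formula) is easy to verify numerically; the check below is illustrative only, with an arbitrary well-conditioned A.

```python
# Numeric check of [A + c*b*b']^{-1} = A^{-1} - c*A^{-1}bb'A^{-1}/(1 + c*b'A^{-1}b).
import numpy as np

rng = np.random.default_rng(2)
m = 4
A = rng.standard_normal((m, m)) + m * np.eye(m)   # diagonally dominant => nonsingular
b = rng.standard_normal(m)
c = 0.8

Ainv = np.linalg.inv(A)
rhs = Ainv - (c / (1 + c * b @ Ainv @ b)) * np.outer(Ainv @ b, b @ Ainv)
lhs = np.linalg.inv(A + c * np.outer(b, b))
print(np.abs(lhs - rhs).max())
```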
6.(c) Consider the linear regression model in part (a). Let yT+1 and xT+1 be the (T+1)th observation on the dependent variable and the vector of explanatory variables respectively and define

y˜ = ( y′, yT+1 )′, and X˜ = ( X′, xT+1 )′.
Use part (b) to show that
βˆT+1 = βˆT + (X˜′X˜)⁻¹xT+1(yT+1 − xT+1′βˆT),
where βˆT is the OLS estimator of β0 based on (5) and βˆT+1 is the OLS estimator
of β0 based on
y˜ = X˜β0 + u˜,

where u˜ is the error vector given by u˜ = (u′, uT+1)′. [9 marks]
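The recursive updating formula in part (c) can be verified numerically (an illustration, not the requested derivation): OLS on the augmented sample should coincide with the one-step update of βˆT.

```python
# Check: OLS on (T+1) observations equals the recursive update of the
# T-observation OLS estimator.
import numpy as np

rng = np.random.default_rng(3)
T, k = 20, 3
X = rng.standard_normal((T, k))
y = rng.standard_normal(T)
x_new = rng.standard_normal(k)
y_new = rng.standard_normal()

beta_T = np.linalg.solve(X.T @ X, X.T @ y)
Xt = np.vstack([X, x_new])                 # augmented regressor matrix X~
yt = np.append(y, y_new)                   # augmented dependent variable y~

beta_T1_direct = np.linalg.solve(Xt.T @ Xt, Xt.T @ yt)
update = np.linalg.inv(Xt.T @ Xt) @ x_new * (y_new - x_new @ beta_T)
beta_T1_rec = beta_T + update
print(np.abs(beta_T1_direct - beta_T1_rec).max())
```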
7. Consider the linear model
yi = xi′β0 + ui, (6)
where yi is a scalar observable variable, xi = (1, x2,i′)′ is a k×1 vector of observable variables, and ui is an unobservable scalar error term. In addition, let zi denote a q×1 vector of observable variables. Assume that {(ui, xi′, zi′)′; i = 1, 2, . . . , N} is a sequence of independently and identically distributed (i.i.d.) random vectors that satisfy: (i) equation (6) is the true model for yi; (ii) E[ui|zi] = 0; (iii) Var[ui|zi] = σ0², a positive, finite constant; (iv) E[zixi′] = Qzx, a finite matrix of constants with rank equal to k; and (v) E[zizi′] = Qzz, a nonsingular finite matrix of constants. Let β˜ be the IV estimator based on the population moment condition
E[ziui (β0 )] = 0, (7)
where ui(β) = yi − xi′β; that is,

β˜ = argminβ∈B QIV(β), where QIV(β) = u(β)′Z(Z′Z)⁻¹Z′u(β),

where B is the parameter space, u(β) is the N×1 vector with ith element ui(β), and Z is the N×q matrix with ith row zi′.
Suppose that a researcher believes that β0 satisfies the following set of linear restrictions: Rβ0 = r, where R is an nr×k matrix of specified constants with rank(R) = nr and r is an nr×1 vector of specified constants. As a consequence, she obtains the restricted IV estimator defined as

β˜R = argminβ∈BR QIV(β),

where BR = {β; Rβ = r, β ∈ B}.
(a) Show that

β˜R = β˜ − (X′PzX)⁻¹R′( R(X′PzX)⁻¹R′ )⁻¹(Rβ˜ − r),

where Pz = Z(Z′Z)⁻¹Z′, and you can assume that rank{X′Z} = k and (Z′Z)⁻¹ exists. Hint: you may quote the formula for β˜ from the Lecture Notes without proof. [12 marks]
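One useful sanity check on the restricted-estimator formula in part (a) (illustrative only, with hypothetical data and a hypothetical restriction) is that the resulting estimator satisfies the restriction exactly, Rβ˜R = r, by construction.

```python
# Check that the restricted IV estimator satisfies R @ beta_r == r.
import numpy as np

rng = np.random.default_rng(4)
N, k, q = 200, 3, 4
Z = rng.standard_normal((N, q))
X = Z @ rng.standard_normal((q, k)) + 0.1 * rng.standard_normal((N, k))
y = X @ np.array([1.0, 0.5, -0.5]) + rng.standard_normal(N)
R = np.array([[1.0, -1.0, 0.0]])     # hypothetical restriction: beta_1 = beta_2
r = np.array([0.0])

Pz = Z @ np.linalg.inv(Z.T @ Z) @ Z.T
A = np.linalg.inv(X.T @ Pz @ X)
beta_iv = A @ X.T @ Pz @ y                                   # unrestricted IV
adj = A @ R.T @ np.linalg.inv(R @ A @ R.T) @ (R @ beta_iv - r)
beta_r = beta_iv - adj                                       # restricted IV
print(R @ beta_r)
```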
(b) Derive the probability limit of β˜R. Hint: for any sequence of i.i.d. random vectors/matrices {wi}, you may quote the generic form of the Weak Law of Large Numbers, N⁻¹Σwi →p μw, but must verify μw for the specific choices of wi relevant to your answer; you may also quote the generic form of the Central Limit Theorem, N⁻¹/²Σ(wi − μw) →d N(0, Ωw), but must verify μw and Ωw for the specific choices of wi relevant to your answer. [12 marks]
(c) Demonstrate that β˜R is only consistent for β0 if Rβ0 = r. [6 marks]
8. Consider the weakly stationary and weakly dependent process {vt } generated by the process
vt = a0vt — 1 + ut , (8)
where |a0| < 1 and {ut; −∞ < t < ∞} is an independently and identically distributed series with mean zero and variance σ0². Let γs denote the sth autocovariance of vt for s = 0, 1, 2, . . . .
(a) Let aˆT denote the OLS estimator of a0 based on the model in (8) and a sample of observations {vt; t = 1, . . . , T}. Show that

T1/2(aˆT − a0) →d N( 0, 1 − a0² ).
Hint: for any time series {wt} you may quote the generic form of the Weak Law of Large Numbers, T⁻¹Σwt →p μw, but must verify μw for the specific choices of wt relevant to your answer; you may also quote the generic form of the Central Limit Theorem, T⁻¹/²Σ(wt − μw) →d N(0, Ωw) where Ωw = Γ0 + Σj≥1(Γj + Γj′) and Γj = Cov[wt, wt−j] for j = 0, 1, 2, . . ., but must verify μw and {Γj} for the specific choices of wt relevant to your answer. You may also quote the formulae for the first two moments of vt and the MA(∞) representation for vt without proof. [15 marks]
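The limiting result in part (a) can be illustrated by a small Monte Carlo (not the requested derivation): across replications, the sample variance of T^{1/2}(aˆT − a0) should be close to 1 − a0². The sketch assumes scipy's `lfilter` for the AR recursion.

```python
# Monte Carlo illustration: sqrt(T)*(a_hat - a0) has variance near 1 - a0^2.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(5)
a0, T, burn, reps = 0.5, 2000, 100, 500
draws = []
for _ in range(reps):
    u = rng.standard_normal(T + burn)
    v = lfilter([1.0], [1.0, -a0], u)[burn:]   # v_t = a0*v_{t-1} + u_t, burn-in dropped
    a_hat = (v[:-1] * v[1:]).sum() / (v[:-1] ** 2).sum()
    draws.append(np.sqrt(T) * (a_hat - a0))
draws = np.array(draws)
print(draws.mean(), draws.var())               # variance should be near 0.75
```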
(b) Suppose that it is desired to test H0: a0 = ā versus H1: a0 ≠ ā. Consider the decision rule:
Reject H0 at the (approximate) 5% significance level if |ST| > 1.96,
where

ST = T1/2(aˆT − ā) / √(1 − ā²).
Show that in the limit as T → ∞ the probability of a Type I error with this
decision rule is 0.05 under the conditions above. [3 marks]
(c) Suppose now that vt is a strongly stationary and weakly dependent series generated by (8) with |a0| < 1 and ut = htwt, where ht = √(α0 + α1vt−1²) and {wt; −∞ < t < ∞} is an independently and identically distributed series with mean equal to zero and variance equal to one. Is the decision rule in part (b) still valid in the sense that, in the limit as T → ∞, the probability of a Type I error with this decision rule is 0.05? If not, then explain briefly why and how you would modify the decision rule. No formal derivations are required. Word limit: 100 words [6 marks]
8.(d) Suppose now that vt is generated by (8) with |a0| < 1 and

ut = wt + θwt−1,

where {wt; −∞ < t < ∞} is an independently and identically distributed series with mean zero and variance σw², and θ is a constant. Is the decision rule in part (b) still valid in the sense that, in the limit as T → ∞, the probability of a Type I error with this decision rule is 0.05? If not, then explain briefly why and how you would modify the decision rule. No formal derivations are required. Word limit: 100 words [6 marks]
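As an illustrative aside on part (d) (not part of the exam, and not an answer to it): with MA(1) errors the regressor vt−1 is correlated with ut, so the OLS estimator aˆT is no longer consistent for a0, which a short simulation makes visible. The sketch assumes scipy's `lfilter`.

```python
# Illustration: with MA(1) errors, OLS on v_t = a0*v_{t-1} + u_t is
# inconsistent because Cov(v_{t-1}, u_t) = theta*sigma_w^2 != 0.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(6)
a0, theta, T, burn = 0.5, 0.8, 200_000, 100
w = rng.standard_normal(T + burn + 1)
u = w[1:] + theta * w[:-1]                     # u_t = w_t + theta*w_{t-1}
v = lfilter([1.0], [1.0, -a0], u)[burn:]       # AR(1) recursion, burn-in dropped
a_hat = (v[:-1] * v[1:]).sum() / (v[:-1] ** 2).sum()
print(a_hat)                                   # noticeably above a0 = 0.5
```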
9. Let {(yi, xi′)′} be a sequence of independently and identically distributed (i.i.d.) random vectors. Suppose that yi is a dummy variable and so has a sample space of {0, 1}, and that P(yi = 1|xi) = Φ(xi′β0), where β0 is a k×1 vector of unknown parameters and Φ(·) denotes the cumulative distribution function of the standard normal distribution; that is,

Φ(c) = ∫_{−∞}^{c} φ(v)dv, where φ(v) = (2π)⁻¹/² e^{−v²/2}.
You may assume that the probability distribution of xi does not depend on β0 . Let βˆ denote the maximum likelihood estimator of β0 .
(a) Show that the score equations associated with maximum likelihood estimation of this model take the form

Σ_{i=1}^{N} xi φ(xi′βˆ)( yi − Φ(xi′βˆ) ) / { Φ(xi′βˆ)( 1 − Φ(xi′βˆ) ) } = 0,
where βˆ is the maximum likelihood estimator of β0 . [6 marks]
(b) Show that the information matrix is given by
Ib,N = N E[ φ(xi′β0)² / { Φ(xi′β0)( 1 − Φ(xi′β0) ) } xixi′ ].
Hint: use the Law of Iterated Expectations; also note that you do not need to obtain exact formulae for the following derivatives in order to obtain the answer: ∂φ(xi′β)/∂β′ or ∂{Φ(xi′β)(1 − Φ(xi′β))}/∂β′. [9 marks]
(c) Now suppose that xi = 1 for all i, and so xi′β0 = β0 for all i. Derive the formulae for the Wald, Likelihood Ratio and Lagrange Multiplier statistics for testing H0: β0 = 0 against H1: β0 ≠ 0 in this model. Express your formulae as functions of yi, Φ(β) and φ(β) for the relevant value of β. [15 marks]
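The intercept-only case in part (c) can be sketched numerically (an illustration with hypothetical Bernoulli data, not the requested derivation): with xi = 1 the MLE satisfies Φ(βˆ) = ȳ, and the three statistics follow from the score and information formulae above evaluated at βˆ and at β = 0. The sketch assumes scipy is available.

```python
# Wald, LR and LM statistics for H0: beta = 0 in an intercept-only probit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
N = 500
y = rng.binomial(1, 0.55, N)          # hypothetical Bernoulli sample
ybar = y.mean()

Phi, phi = stats.norm.cdf, stats.norm.pdf
beta_hat = stats.norm.ppf(ybar)       # MLE solves Phi(beta_hat) = ybar

def loglik(b):
    return np.sum(y * np.log(Phi(b)) + (1 - y) * np.log(1 - Phi(b)))

def info(b):                          # information: N*phi(b)^2 / (Phi(b)(1-Phi(b)))
    return N * phi(b) ** 2 / (Phi(b) * (1 - Phi(b)))

score0 = phi(0) * np.sum(y - Phi(0)) / (Phi(0) * (1 - Phi(0)))
W = beta_hat ** 2 * info(beta_hat)
LR = 2 * (loglik(beta_hat) - loglik(0.0))
LM = score0 ** 2 / info(0.0)
print(W, LR, LM)                      # all asymptotically chi^2(1) under H0
```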