MA6629 – Advanced Stochastic Analysis in Finance
Midterm exam – solutions
In the whole exam, we consider a standard Brownian Motion $(W_t)_{t \ge 0}$, and we denote by $(\mathcal{F}_t)_{t \ge 0}$ the associated filtration.
You may use (without proving it) the following formula for the Gaussian distribution. For $X \sim \mathcal{N}(\mu, \sigma^2)$ ($\mu \in \mathbb{R}$, $\sigma > 0$), $E[e^X] = e^{\mu + \sigma^2/2}$.
1. (24 points) In this exercise, we consider the random variable $X = (W_1)^2 + (W_3)^2 - 2(W_4)^2$.
(i) (4 points) Compute the expectation of $X$.
From the linearity property of the expectation, we obtain
$$E[X] = E[(W_1)^2 + (W_3)^2 - 2(W_4)^2] = E[(W_1)^2] + E[(W_3)^2] - 2E[(W_4)^2].$$
Since $W_1 = W_1 - W_0 \sim \mathcal{N}(0, 1)$ (property (ii) in the definition of Brownian Motion), we have $E[W_1] = 0$ and $\mathrm{Var}(W_1) = 1$. Hence, $E[(W_1)^2] = \mathrm{Var}(W_1) + (E[W_1])^2 = 1$, and similarly, $E[(W_3)^2] = 3$ and $E[(W_4)^2] = 4$. We deduce
$$E[X] = 1 + 3 - 2 \cdot 4 = -4.$$
(ii) (4 points) Let $Z \sim \mathcal{N}(0, 1)$. Compute $E[Z^4]$ by using successive integrations by parts.
From an integration by parts, we obtain
$$E[Z^4] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} z^4 e^{-z^2/2}\,dz = \frac{1}{\sqrt{2\pi}} \left[-z^3 e^{-z^2/2}\right]_{-\infty}^{+\infty} + \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} 3z^2 e^{-z^2/2}\,dz = 0 + 3E[Z^2] = 3,$$
using that $E[Z^2] = 1$ (since $Z \sim \mathcal{N}(0, 1)$).
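Remark (not part of the original solutions): this moment can be checked numerically. Below is a minimal Monte Carlo sketch in Python, assuming NumPy is available; the sample size and seed are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(10**7)  # 10^7 samples of Z ~ N(0, 1)
print(np.mean(z**4))            # should be close to 3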
(iii) (8 points) Compute the variance of $X$.
We can write, using the fact that the covariance is bilinear, i.e. linear in each of its two arguments:
$$\begin{aligned}
\mathrm{Var}(X) &= \mathrm{Var}((W_1)^2 + (W_3)^2 - 2(W_4)^2) \\
&= \mathrm{Cov}((W_1)^2 + (W_3)^2 - 2(W_4)^2,\ (W_1)^2 + (W_3)^2 - 2(W_4)^2) \\
&= \mathrm{Cov}((W_1)^2, (W_1)^2) + \mathrm{Cov}((W_3)^2, (W_3)^2) + 4\,\mathrm{Cov}((W_4)^2, (W_4)^2) \\
&\quad + 2\,\mathrm{Cov}((W_1)^2, (W_3)^2) - 4\,\mathrm{Cov}((W_1)^2, (W_4)^2) - 4\,\mathrm{Cov}((W_3)^2, (W_4)^2).
\end{aligned}$$
We now derive a general formula for $\mathrm{Cov}((W_s)^2, (W_t)^2)$, for any $0 \le s \le t$. For this purpose, we introduce independent increments:
$$\mathrm{Cov}((W_s)^2, (W_t)^2) = \mathrm{Cov}((W_s)^2, (W_s + (W_t - W_s))^2) = \mathrm{Cov}((W_s)^2, (W_s)^2) + 2\,\mathrm{Cov}((W_s)^2, W_s(W_t - W_s)) + \mathrm{Cov}((W_s)^2, (W_t - W_s)^2).$$
We have immediately $\mathrm{Cov}((W_s)^2, (W_t - W_s)^2) = 0$ from the independence of the increments $W_s$ and $W_t - W_s$ (property (iii) in the definition of Brownian Motion), and
$$\mathrm{Cov}((W_s)^2, W_s(W_t - W_s)) = E[(W_s)^2 \cdot W_s(W_t - W_s)] - E[(W_s)^2] \cdot E[W_s(W_t - W_s)] = E[(W_s)^3] \cdot E[W_t - W_s] - E[(W_s)^2] \cdot E[W_s] \cdot E[W_t - W_s] = 0,$$
where we used again the independence of $W_s$ and $W_t - W_s$ (second equality), and then that $E[W_t - W_s] = 0$ since $W_t - W_s \sim \mathcal{N}(0, t - s)$ (property (ii) in the definition of Brownian Motion). Hence,
$$\mathrm{Cov}((W_s)^2, (W_t)^2) = \mathrm{Cov}((W_s)^2, (W_s)^2) = E[(W_s)^4] - (E[(W_s)^2])^2.$$
We know that $W_s \sim \mathcal{N}(0, s)$, so $E[(W_s)^2] = s$, and for $Z \sim \mathcal{N}(0, 1)$,
$$E[(W_s)^4] = E[(\sqrt{s}\,Z)^4] = s^2 E[Z^4] = 3s^2$$
(using the previous question in the last equality). We deduce that
$$\mathrm{Cov}((W_s)^2, (W_t)^2) = 3s^2 - s^2 = 2s^2.$$
Finally, using this property 6 times, we obtain
$$\mathrm{Var}(X) = 2 \cdot 1^2 + 2 \cdot 3^2 + 4 \cdot 2 \cdot 4^2 + 2 \cdot 2 \cdot 1^2 - 4 \cdot 2 \cdot 1^2 - 4 \cdot 2 \cdot 3^2 = 72.$$
As a conclusion, we want to emphasize that using the bilinearity of the covariance, together with the general formula for $\mathrm{Cov}((W_s)^2, (W_t)^2)$, allowed us to avoid repeating the same reasoning with independent increments of Brownian Motion several times. However, it would also be perfectly valid to proceed as follows. First,
$$\mathrm{Var}(X) = E[X^2] - (E[X])^2$$
by definition of the variance. We found in (i) that $E[X] = -4$, and it remains to compute $E[X^2]$. We have
$$\begin{aligned}
E[X^2] &= E[((W_1)^2 + (W_3)^2 - 2(W_4)^2)^2] \\
&= E[(W_1)^4] + E[(W_3)^4] + 4E[(W_4)^4] \\
&\quad + 2E[(W_1)^2(W_3)^2] - 4E[(W_1)^2(W_4)^2] - 4E[(W_3)^2(W_4)^2]
\end{aligned}$$
(by expanding the square, and using linearity of the expectation).
On the one hand, we already know that $E[(W_s)^4] = 3s^2$, for $s = 1, 3, 4$. We then need to compute $E[(W_1)^2(W_3)^2]$, $E[(W_1)^2(W_4)^2]$ and $E[(W_3)^2(W_4)^2]$. This can be done in a similar way as before, by introducing independent increments. From $W_3 = W_1 + (W_3 - W_1)$, we find
$$E[(W_1)^2(W_3)^2] = E[(W_1)^4] + 2E[(W_1)^3]E[W_3 - W_1] + E[(W_1)^2]E[(W_3 - W_1)^2] = 3 \cdot 1^2 + 0 + 1 \cdot 2 = 5,$$
and for the same reasons, $E[(W_1)^2(W_4)^2] = E[(W_1)^4] + E[(W_1)^2]E[(W_4 - W_1)^2] = 6$ and $E[(W_3)^2(W_4)^2] = E[(W_3)^4] + E[(W_3)^2]E[(W_4 - W_3)^2] = 30$. This allows us to conclude.
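Remark (not part of the original solutions): a quick Monte Carlo sketch, assuming NumPy, that checks both $E[X] = -4$ from (i) and $\mathrm{Var}(X) = 72$, by building $W_1, W_3, W_4$ from independent increments.

import numpy as np

rng = np.random.default_rng(0)
n = 10**7
w1 = rng.normal(0.0, 1.0, n)                # W_1 ~ N(0, 1)
w3 = w1 + rng.normal(0.0, np.sqrt(2.0), n)  # W_3 = W_1 + (W_3 - W_1), increment of variance 2
w4 = w3 + rng.normal(0.0, 1.0, n)           # W_4 = W_3 + (W_4 - W_3), increment of variance 1
x = w1**2 + w3**2 - 2 * w4**2
print(x.mean(), x.var())                    # expect approximately -4 and 72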
(iv) (8 points) Determine the conditional expectation $E[X \mid \mathcal{F}_2]$.
We have
$$E[X \mid \mathcal{F}_2] = E[(W_1)^2 + (W_3)^2 - 2(W_4)^2 \mid \mathcal{F}_2] = (W_1)^2 + E[(W_3)^2 \mid \mathcal{F}_2] - 2E[(W_4)^2 \mid \mathcal{F}_2],$$
using that $(W_1)^2$ is $\mathcal{F}_2$-measurable. On the one hand,
$$\begin{aligned}
E[(W_3)^2 \mid \mathcal{F}_2] &= E[(W_2 + (W_3 - W_2))^2 \mid \mathcal{F}_2] \\
&= (W_2)^2 + 2W_2 E[W_3 - W_2 \mid \mathcal{F}_2] + E[(W_3 - W_2)^2 \mid \mathcal{F}_2] \\
&= (W_2)^2 + 2W_2 E[W_3 - W_2] + E[(W_3 - W_2)^2],
\end{aligned}$$
where we used that $W_2$ is $\mathcal{F}_2$-measurable on the second line, and then that $W_3 - W_2$ is independent of $\mathcal{F}_2$ on the third line. Since $W_3 - W_2 \sim \mathcal{N}(0, 1)$, we have $E[W_3 - W_2] = 0$ and $E[(W_3 - W_2)^2] = 1$. Hence,
$$E[(W_3)^2 \mid \mathcal{F}_2] = (W_2)^2 + 2W_2 \cdot 0 + 1 = (W_2)^2 + 1.$$
For similar reasons,
$$E[(W_4)^2 \mid \mathcal{F}_2] = (W_2)^2 + 2W_2 E[W_4 - W_2] + E[(W_4 - W_2)^2] = (W_2)^2 + 2W_2 \cdot 0 + 2 = (W_2)^2 + 2.$$
We finally obtain
$$E[X \mid \mathcal{F}_2] = (W_1)^2 + ((W_2)^2 + 1) - 2((W_2)^2 + 2) = (W_1)^2 - (W_2)^2 - 3.$$
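Remark (not part of the original solutions): the conditional expectation can be checked numerically at one fixed value of the path up to time 2. The values $W_1 = 0.5$ and $W_2 = 1.2$ below are arbitrary illustrative choices, not part of the exam.

import numpy as np

rng = np.random.default_rng(0)
n = 10**7
w1, w2 = 0.5, 1.2                   # hypothetical fixed values of W_1 and W_2
w3 = w2 + rng.standard_normal(n)    # W_3 - W_2 ~ N(0, 1), independent of F_2
w4 = w3 + rng.standard_normal(n)    # W_4 - W_3 ~ N(0, 1)
x = w1**2 + w3**2 - 2 * w4**2
print(x.mean(), w1**2 - w2**2 - 3)  # the two values should agree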
2. (16 points) Consider two standard Brownian Motions $X = (X_t)_{t \ge 0}$ and $X' = (X'_t)_{t \ge 0}$. We assume that $X$ and $X'$ are independent. We also let $Y_t = X_t - X'_t$, $t \ge 0$.
(i) (4 points) For any $s, t \ge 0$, compute the distribution of $X_t - X'_s$.
We know that $X_t \sim \mathcal{N}(0, t)$ and $X'_s \sim \mathcal{N}(0, s)$, and they are independent (since the whole processes $X$ and $X'$ are independent by assumption). The property about a sum of independent Gaussian distributed random variables implies that
$$X_t - X'_s \sim \mathcal{N}(0, t + s).$$
(ii) (4 points) Let $\alpha > 0$. For $t \ge 0$, what is the distribution of $Y_{\alpha t}$?
From the previous question (with $s = t$),
$$Y_t = X_t - X'_t \sim \mathcal{N}(0, t + t) = \mathcal{N}(0, 2t).$$
Hence,
$$Y_{\alpha t} \sim \mathcal{N}(0, 2\alpha t).$$
(iii) (8 points) Show that there exists a unique constant $\alpha > 0$ such that $(Z_t)_{t \ge 0} = (Y_{\alpha t})_{t \ge 0}$ is a standard Brownian Motion (i.e. find $\alpha$ explicitly, and prove that for this value, $(Y_{\alpha t})_{t \ge 0}$ is indeed a standard Brownian Motion).
If $(Z_t)_{t \ge 0}$ is a standard Brownian Motion, we must have in particular $Z_1 \sim \mathcal{N}(0, 1)$. Using (ii),
$$Z_1 = Y_{\alpha \cdot 1} \sim \mathcal{N}(0, 2\alpha).$$
Hence, $Z_1 \sim \mathcal{N}(0, 1)$ iff $2\alpha = 1$, i.e. $\alpha = \frac{1}{2}$. Hence, there exists a unique possible value of $\alpha$ for which $(Z_t)_{t \ge 0}$ is a standard Brownian Motion: necessarily, $\alpha = \frac{1}{2}$.
Conversely, we need to check the four properties in the definition of standard Brownian Motion for this particular value $\alpha_0 = \frac{1}{2}$.
(i) We have
$$Z_0 = Y_{\alpha_0 \cdot 0} = Y_0 = X_0 - X'_0 = 0,$$
since $X_0 = X'_0 = 0$ ($X$ and $X'$ both start from 0, as they are standard Brownian Motions).
(ii) For all $0 \le s \le t$,
$$Z_t - Z_s = Y_{\alpha_0 t} - Y_{\alpha_0 s} = (X_{\alpha_0 t} - X_{\alpha_0 s}) - (X'_{\alpha_0 t} - X'_{\alpha_0 s}).$$
We know that these two increments of $X$ and $X'$, respectively, satisfy $X_{\alpha_0 t} - X_{\alpha_0 s} \sim \mathcal{N}(0, \alpha_0 t - \alpha_0 s) = \mathcal{N}(0, \alpha_0 (t - s))$ and, similarly, $X'_{\alpha_0 t} - X'_{\alpha_0 s} \sim \mathcal{N}(0, \alpha_0 (t - s))$ (hence so does its opposite). Since these Gaussian distributed random variables are independent, their sum has distribution
$$Z_t - Z_s \sim \mathcal{N}(0, \alpha_0 (t - s) + \alpha_0 (t - s)) = \mathcal{N}(0, 2\alpha_0 (t - s)) = \mathcal{N}(0, t - s),$$
using the value $\alpha_0 = \frac{1}{2}$.
(iii) For all $0 \le s \le t$, $Z_t - Z_s = (X_{\alpha_0 t} - X_{\alpha_0 s}) - (X'_{\alpha_0 t} - X'_{\alpha_0 s})$ is independent of $\mathcal{F}_s^Z = \sigma(Z_r : 0 \le r \le s)$. Indeed, denote by $\mathcal{F}^X$ and $\mathcal{F}^{X'}$ the filtrations associated with $X$ and $X'$, respectively: $X_{\alpha_0 t} - X_{\alpha_0 s}$ is independent of both $\mathcal{F}_{\alpha_0 s}^X$ (using that $X$ is a Brownian Motion, so it has independent increments) and $\mathcal{F}_{\alpha_0 s}^{X'}$ (since $X$ is independent of $X'$), and the same holds true for $X'_{\alpha_0 t} - X'_{\alpha_0 s}$. On the other hand, each $Z_r$, $0 \le r \le s$, is measurable with respect to the $\sigma$-algebra generated by $\mathcal{F}_{\alpha_0 s}^X \cup \mathcal{F}_{\alpha_0 s}^{X'}$, so $Z_t - Z_s$ is in particular independent of $\mathcal{F}_s^Z$.
(iv) Clearly, $(Y_t)_{t \ge 0}$ has continuous trajectories (as the difference of $X$ and $X'$, which are standard Brownian Motions, hence have continuous trajectories), and so does $(Z_t)_{t \ge 0}$.
Since $(Z_t)_{t \ge 0}$ indeed satisfies these four properties, we can conclude that it is a standard Brownian Motion.
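Remark (not part of the original solutions): a small simulation sketch, assuming NumPy, of the first two properties for $\alpha_0 = \frac{1}{2}$: $Z_1$ and $Z_2 - Z_1$ should both be $\mathcal{N}(0, 1)$ and uncorrelated.

import numpy as np

rng = np.random.default_rng(0)
n = 10**6
# Sample X and X' at times alpha_0*1 = 1/2 and alpha_0*2 = 1.
x_h = rng.normal(0.0, np.sqrt(0.5), n)          # X_{1/2} ~ N(0, 1/2)
x_1 = x_h + rng.normal(0.0, np.sqrt(0.5), n)    # X_1 = X_{1/2} + independent increment
xp_h = rng.normal(0.0, np.sqrt(0.5), n)         # X'_{1/2}, independent of X
xp_1 = xp_h + rng.normal(0.0, np.sqrt(0.5), n)  # X'_1
z1, z2 = x_h - xp_h, x_1 - xp_1                 # Z_1 = Y_{1/2}, Z_2 = Y_1
print(z1.var(), (z2 - z1).var())                # both should be close to 1
print(np.corrcoef(z1, z2 - z1)[0, 1])           # should be close to 0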
3. (20 points)
(i) (10 points) Compute $d(s^2 W_s)$ by applying the product rule.
From the product rule,
$$d(s^2 W_s) = d(XY)_s = dX_s \cdot Y_s + X_s \cdot dY_s + dX_s \cdot dY_s,$$
with $X_s = s^2$ and $Y_s = W_s$. We have $dX_s = 2s\,ds$ and $dY_s = dW_s$, so
$$dX_s \cdot dY_s = 2s\,ds \cdot dW_s = 0$$
(using $ds \cdot dW_s = 0$). Hence,
$$d(s^2 W_s) = 2sW_s\,ds + s^2\,dW_s.$$
(ii) (10 points) Determine the distribution of $\int_0^t sW_s\,ds$.
By integrating the relation found in (i), we obtain
$$t^2 W_t = t^2 W_t - 0^2 \cdot W_0 = 2\int_0^t sW_s\,ds + \int_0^t s^2\,dW_s.$$
Hence,
$$\int_0^t sW_s\,ds = \frac{1}{2}\left(t^2 W_t - \int_0^t s^2\,dW_s\right) = \frac{1}{2}\left(t^2 \int_0^t dW_s - \int_0^t s^2\,dW_s\right) = \frac{1}{2}\int_0^t (t^2 - s^2)\,dW_s.$$
From the Itô isometry,
$$\frac{1}{2}\int_0^t (t^2 - s^2)\,dW_s \sim \mathcal{N}\left(0,\ \frac{1}{4}\int_0^t (t^2 - s^2)^2\,ds\right).$$
We then compute:
$$\int_0^t (t^2 - s^2)^2\,ds = \int_0^t t^4\,ds - 2t^2 \int_0^t s^2\,ds + \int_0^t s^4\,ds = t^4 \cdot t - 2t^2 \cdot \frac{t^3}{3} + \frac{t^5}{5} = \left(1 - \frac{2}{3} + \frac{1}{5}\right)t^5 = \frac{8}{15}\,t^5.$$
Hence,
$$\int_0^t sW_s\,ds = \frac{1}{2}\int_0^t (t^2 - s^2)\,dW_s \sim \mathcal{N}\left(0,\ \frac{1}{4} \cdot \frac{8}{15}\,t^5\right) = \mathcal{N}\left(0,\ \frac{2t^5}{15}\right).$$
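Remark (not part of the original solutions): the law $\mathcal{N}(0, 2t^5/15)$ can be checked by approximating the time integral with a Riemann sum on a fine grid (here $t = 1$); the grid size, sample size and seed below are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, t = 10**4, 1000, 1.0
dt = t / n_steps
dw = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))  # Brownian increments
w = np.cumsum(dw, axis=1)                              # W on the grid dt, 2*dt, ..., t
s = np.arange(1, n_steps + 1) * dt
integral = (s * w).sum(axis=1) * dt                    # Riemann sum for int_0^t s W_s ds
print(integral.mean(), integral.var(), 2 * t**5 / 15)  # expect ~0, and matching variances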
4. (20 points) Recall that for any real number $y$, $\lfloor y \rfloor$ denotes its integer part, i.e. the largest integer which is $\le y$. Hence, for example: $\lfloor 3.27 \rfloor = \lfloor 3.99 \rfloor = \lfloor 3 \rfloor = 3$.
(i) (10 points) Determine the distribution of
$$\int_0^5 \lfloor s \rfloor\,dW_s.$$
Let $H_s = \lfloor s \rfloor$: it is a step function, equal to $0$ on $[0, 1)$, $1$ on $[1, 2)$, $2$ on $[2, 3)$, $3$ on $[3, 4)$, and $4$ on $[4, 5)$. Hence,
$$\int_0^5 \lfloor s \rfloor\,dW_s = 0 \cdot (W_1 - W_0) + 1 \cdot (W_2 - W_1) + 2 \cdot (W_3 - W_2) + 3 \cdot (W_4 - W_3) + 4 \cdot (W_5 - W_4).$$
The increments $(W_2 - W_1)$, $(W_3 - W_2)$, $(W_4 - W_3)$ and $(W_5 - W_4)$ are independent (property (iii) in the definition of Brownian Motion), each $\mathcal{N}(0, 1)$ distributed (from property (ii)). Combining this with the result about a sum of independent Gaussian distributed random variables, we obtain
$$\int_0^5 \lfloor s \rfloor\,dW_s \sim \mathcal{N}(0,\ 1^2 \cdot 1 + 2^2 \cdot 1 + 3^2 \cdot 1 + 4^2 \cdot 1) = \mathcal{N}(0, 30).$$
Note that we could also use directly the Itô isometry, with the deterministic function $H_s$:
$$\int_0^5 \lfloor s \rfloor\,dW_s \sim \mathcal{N}\left(0,\ \int_0^5 (\lfloor s \rfloor)^2\,ds\right),$$
and we have, using that the integrand $(\lfloor s \rfloor)^2$ is a piecewise constant function,
$$\int_0^5 (\lfloor s \rfloor)^2\,ds = 1 \cdot 1^2 + 1 \cdot 2^2 + 1 \cdot 3^2 + 1 \cdot 4^2 = 30.$$
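Remark (not part of the original solutions): a direct Monte Carlo sketch of the variance $30$, using the representation of the integral as a weighted sum of the five unit increments of $W$.

import numpy as np

rng = np.random.default_rng(0)
n = 10**6
incr = rng.standard_normal((n, 5))                     # W_{k+1} - W_k for k = 0, ..., 4
integral = incr @ np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # int_0^5 floor(s) dW_s, path by path
print(integral.mean(), integral.var())                 # expect approximately 0 and 30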
(ii) (10 points) Compute
$$\int_0^{14} \cos\!\left(W_{\lfloor \sqrt{s} \rfloor}\right) dW_s.$$
We observe that $H'_s = \cos\!\left(W_{\lfloor \sqrt{s} \rfloor}\right)$ is a step function: it is constant over the intervals $[0, 1)$ (over which it is equal to $H'_s = \cos(W_0)$), $[1, 4)$ ($H'_s = \cos(W_1)$), $[4, 9)$ ($H'_s = \cos(W_2)$), and $[9, 14]$ ($H'_s = \cos(W_3)$). Using the expression for the stochastic integral in the case of a step function, we obtain
$$\begin{aligned}
\int_0^{14} H'_s\,dW_s &= \cos(W_0) \cdot (W_1 - W_0) + \cos(W_1) \cdot (W_4 - W_1) + \cos(W_2) \cdot (W_9 - W_4) + \cos(W_3) \cdot (W_{14} - W_9) \\
&= W_1 + \cos(W_1) \cdot (W_4 - W_1) + \cos(W_2) \cdot (W_9 - W_4) + \cos(W_3) \cdot (W_{14} - W_9)
\end{aligned}$$
(using that $W_0 = 0$, hence $\cos(W_0) = 1$, since $(W_t)_{t \ge 0}$ is a standard Brownian Motion).
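Remark (not part of the original solutions): since the integrand is a step function, a left-endpoint Itô sum on the integer grid reproduces the stochastic integral exactly, so the identity above can be verified path by path; a minimal sketch, assuming NumPy:

import numpy as np

rng = np.random.default_rng(0)
w = np.concatenate([[0.0], np.cumsum(rng.standard_normal(14))])  # W_0, W_1, ..., W_14
# Left-endpoint Ito sum on the unit grid; exact here because cos(W_{floor(sqrt(s))})
# is constant on each interval [k, k+1).
lhs = sum(np.cos(w[int(np.sqrt(k))]) * (w[k + 1] - w[k]) for k in range(14))
rhs = (w[1] + np.cos(w[1]) * (w[4] - w[1]) + np.cos(w[2]) * (w[9] - w[4])
       + np.cos(w[3]) * (w[14] - w[9]))
print(lhs, rhs)  # the two values coincide up to floating-point rounding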
5. (40 points) Let $a \in \mathbb{R}$ and $\sigma > 0$ be given. For all $t \ge 0$, let $Y_t = at + \sigma W_t$.
(i) (4 points) Compute $dY_t$ (in terms of $dt$ and $dW_t$). We have immediately
$$dY_t = a\,dt + \sigma\,dW_t.$$
(ii) (4 points) For all $t \ge 0$, compute $E[e^{Y_t}]$.
Since $W_t \sim \mathcal{N}(0, t)$, we have $Y_t = at + \sigma W_t \sim \mathcal{N}(at, \sigma^2 t)$. Hence, using the formula recalled above,
$$E[e^{Y_t}] = e^{at + \sigma^2 t/2}.$$
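Remark (not part of the original solutions): a Monte Carlo check of this formula, assuming NumPy; the parameter values $a = 0.1$, $\sigma = 0.3$, $t = 2$ are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
a, sigma, t, n = 0.1, 0.3, 2.0, 10**7
y = a * t + sigma * rng.normal(0.0, np.sqrt(t), n)         # samples of Y_t = a*t + sigma*W_t
print(np.exp(y).mean(), np.exp(a * t + sigma**2 * t / 2))  # the two values should agree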
(iii) (4 points) For all $t \ge 0$, let $Z_t = Y_t e^{Y_t}$, and $u(t) = E[Z_t]$. Compute $u(0)$. Since $(W_t)_{t \ge 0}$ is a standard Brownian Motion, we have $W_0 = 0$. Hence,
$$Y_0 = a \cdot 0 + \sigma \cdot W_0 = 0,$$
so
$$Z_0 = Y_0 e^{Y_0} = 0.$$
This implies
$$u(0) = E[Z_0] = 0.$$
(iv) (6 points) Compute $dZ_t$ (in terms of $dt$ and $dW_t$).
We use Itô's formula. We have $Z_t = f(Y_t)$, with $f(y) = ye^y$, $f'(y) = (y + 1)e^y$ and $f''(y) = (y + 2)e^y$. Hence,
$$dZ_t = f'(Y_t)\,dY_t + \frac{1}{2}f''(Y_t)(dY_t)^2 = (Y_t + 1)e^{Y_t}\,dY_t + \frac{1}{2}(Y_t + 2)e^{Y_t}(dY_t)^2.$$
We have $dY_t = a\,dt + \sigma\,dW_t$, and thus $(dY_t)^2 = (a\,dt + \sigma\,dW_t)^2 = \sigma^2\,dt$, from which we can obtain
$$dZ_t = (Y_t + 1)e^{Y_t}(a\,dt + \sigma\,dW_t) + \frac{1}{2}(Y_t + 2)e^{Y_t}\sigma^2\,dt = \left(\left(a + \frac{\sigma^2}{2}\right)Y_t + a + \sigma^2\right)e^{Y_t}\,dt + \sigma(Y_t + 1)e^{Y_t}\,dW_t.$$
(v) (8 points) Find an ordinary differential equation (ODE) satisfied by $u(t)$.
By integrating,
$$Z_t = Z_0 + \int_0^t \left(\left(a + \frac{\sigma^2}{2}\right)Y_s + a + \sigma^2\right)e^{Y_s}\,ds + \int_0^t \sigma(Y_s + 1)e^{Y_s}\,dW_s.$$
We then take the expectation of both sides to make $u(\cdot)$ appear, and use the general property that the expectation of a stochastic integral is equal to zero:
$$\begin{aligned}
u(t) &= u(0) + E\!\left[\int_0^t \left(\left(a + \frac{\sigma^2}{2}\right)Y_s + a + \sigma^2\right)e^{Y_s}\,ds\right] + E\!\left[\int_0^t \sigma(Y_s + 1)e^{Y_s}\,dW_s\right] \\
&= \left(a + \frac{\sigma^2}{2}\right)\int_0^t E[Y_s e^{Y_s}]\,ds + (a + \sigma^2)\int_0^t E[e^{Y_s}]\,ds \\
&= \left(a + \frac{\sigma^2}{2}\right)\int_0^t u(s)\,ds + (a + \sigma^2)\int_0^t e^{(a + \sigma^2/2)s}\,ds
\end{aligned}$$
(for the last equality, we used the result of (ii)). By differentiating this relation with respect to t, we find the ODE:
$$u'(t) = \left(a + \frac{\sigma^2}{2}\right)u(t) + (a + \sigma^2)e^{(a + \sigma^2/2)t}, \quad \text{with } u(0) = 0.$$
(vi) (6 points) Determine $u(t)$ for all $t \ge 0$ (hint: look for a solution to the ODE of the form $u(t) = e^{\lambda t}v(t)$, for some well-chosen $\lambda$).
We look for a solution of the form $u(t) = e^{\lambda t}v(t)$ suggested by the hint, for some $\lambda$ that we will need to determine. We have
$$u'(t) = \lambda e^{\lambda t}v(t) + e^{\lambda t}v'(t),$$
so the ODE becomes
$$\lambda e^{\lambda t}v(t) + e^{\lambda t}v'(t) = \left(a + \frac{\sigma^2}{2}\right)e^{\lambda t}v(t) + (a + \sigma^2)e^{(a + \sigma^2/2)t}.$$
This suggests taking $\lambda = a + \frac{\sigma^2}{2}$, which we do, so that the terms in $e^{\lambda t}v(t)$ on both sides cancel each other out. Hence, the ODE simplifies to
$$e^{\lambda t}v'(t) = (a + \sigma^2)e^{(a + \sigma^2/2)t},$$
and thus
$$v'(t) = a + \sigma^2.$$
By integrating it, we find
$$v(t) = (a + \sigma^2)t + C,$$
for some constant $C \in \mathbb{R}$. Hence,
$$u(t) = e^{\lambda t}v(t) = e^{(a + \sigma^2/2)t}\left((a + \sigma^2)t + C\right).$$
For $t = 0$, this yields $u(0) = C$. Since we have found earlier that $u(0) = 0$, we can deduce that $C = 0$, which finally gives
$$u(t) = e^{\lambda t}v(t) = (a + \sigma^2)\,t\,e^{(a + \sigma^2/2)t}.$$
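Remark (not part of the original solutions): the closed form for $u(t)$ can be checked against a direct Monte Carlo estimate of $E[Y_t e^{Y_t}]$, with the same arbitrary illustrative parameters as above.

import numpy as np

rng = np.random.default_rng(0)
a, sigma, t, n = 0.1, 0.3, 2.0, 10**7
y = a * t + sigma * rng.normal(0.0, np.sqrt(t), n)  # samples of Y_t
mc = (y * np.exp(y)).mean()                         # Monte Carlo estimate of u(t)
exact = (a + sigma**2) * t * np.exp((a + sigma**2 / 2) * t)
print(mc, exact)                                    # the two values should agree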
(vii) (8 points) We consider the Black-Scholes model with drift $\mu$, volatility $\sigma > 0$, and initial price $S_0$. The interest rate is $r$. For $T \ge 0$, compute $E[f(S_T)]$, where $f(y) = y \ln y$.
Recall that in the Black-Scholes model, $S_t = S_0 e^{(\mu - \sigma^2/2)t + \sigma W_t}$ ($t \ge 0$). Hence, we can write it as
$$S_t = S_0 e^{Y_t},$$
where $Y_t = at + \sigma W_t$, with $a = \mu - \frac{\sigma^2}{2}$. We have
$$E[f(S_T)] = E[S_T \ln S_T] = E\!\left[S_0 e^{Y_T}(\ln S_0 + Y_T)\right] = S_0\left((\ln S_0)\,E\!\left[e^{Y_T}\right] + E\!\left[Y_T e^{Y_T}\right]\right).$$
From (ii),
$$E\!\left[e^{Y_T}\right] = e^{(a + \sigma^2/2)T} = e^{\mu T},$$
and from (vi),
$$E\!\left[Y_T e^{Y_T}\right] = u(T) = (a + \sigma^2)\,T\,e^{(a + \sigma^2/2)T} = e^{\mu T}\left(\mu + \frac{\sigma^2}{2}\right)T.$$
Hence,
$$E[f(S_T)] = S_0 e^{\mu T}\left(\ln S_0 + \left(\mu + \frac{\sigma^2}{2}\right)T\right).$$
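Remark (not part of the original solutions): a final Monte Carlo sanity check of this formula, assuming NumPy; $\mu = 0.05$, $\sigma = 0.2$, $S_0 = 100$, $T = 1$ are arbitrary illustrative parameters.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, s0, T, n = 0.05, 0.2, 100.0, 1.0, 10**7
w = rng.normal(0.0, np.sqrt(T), n)                     # W_T ~ N(0, T)
sT = s0 * np.exp((mu - sigma**2 / 2) * T + sigma * w)  # Black-Scholes price S_T
mc = (sT * np.log(sT)).mean()
exact = s0 * np.exp(mu * T) * (np.log(s0) + (mu + sigma**2 / 2) * T)
print(mc, exact)                                       # the two values should agree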