
Math 193a, HANDOUT FOR HOMEWORK

Some Answers and Solutions

Here you will find some solutions and answers to problems whose answers are not given in the book.

1. Chapter 1:

12. The only graphs which are not given in Section 3.1.3 are the graph of ln x and the parabola 2ax − x².

18. (a) The utility function u(x) is concave, so the customer is a risk averter. (b) The function is convex, and the customer is a risk lover.

(c) Let Y = w − G. We have E{u(w − ξ)} = 0.9 · u(100) + 0.05 · u(50) + 0.05 · u(0).








(e) In this case, √300 = E{√(300 + H − ξ)} = (1/100) ∫_0^100 √(300 + H − x) dx = (1/150)[(300 + H)^{3/2} − (200 + H)^{3/2}].



the function from (a), so the answer is the same.





22. The preference order.

26. All functions under consideration are concave except in Case 1: the function u(x) = x^α is concave only for α ≤ 1.

27. The function is convex, so the person is a risk lover. When comparing two r.v.'s, X and Y, one should verify the inequality E{e^{a(w+X)}} ≥ E{e^{a(w+Y)}}. The term e^{aw} cancels, and the inequality is equivalent to E{e^{aX}} ≥ E{e^{aY}}.

30. (a-i) They do not differ since u1 (x) is a linear transformation of u2 (x).

(a-ii) Since E{u1(X)} = −2E{u2(X)} + 3, the preference orders are opposite: if X ≽ Y for John, then Y ≽ X for Mary.


(b) Mary is more risk averse since u2(x) is “more concave” (for example, you may graph the functions u1(x) and u2(x)). Certainly, such reasoning is heuristic, and since we did not consider in class what “more concave” means rigorously, students may take it at a heuristic level.


A rigorous approach is connected with the notion of relative risk aversion (see Section 3.4.3). In our case, for u(x) = x^α, the relative risk aversion coefficient R_r = 1 − α. Hence, the smaller α is, the larger is the relative risk aversion.


38. Comparing the two distributions under consideration, we see that it looks as if we “took” a probability mass of 0.2 from the point with mass 0.5 and moved 0.1 to the right and 0.1 to the left. In this case, the dispersion of the distribution gets larger, and our intuition tells us that in the risk aversion case, the distribution should get “worse”.

Let us justify this in the EUM case. Let u be a utility function, and let X1 and X2 be r.v.'s with the respective distributions. We have E{u(X1)} = 0.1u(1) + 0.2u(2) + 0.5u(3) + 0.2u(4) and E{u(X2)} = 0.1u(1) + 0.3u(2) + 0.3u(3) + 0.3u(4). Then E{u(X1)} − E{u(X2)} = −0.1u(2) + 0.2u(3) − 0.1u(4) = 0.1[2u(3) − u(2) − u(4)] ≥ 0 if u is concave.

39. This exercise is a generalization of Exercise 38. Let u be a utility function, let a r.v. X1 have the original distribution, and let X2 have the transformed distribution. Then E{u(X1)} = Σ_k u(x_k)p_k. The corresponding representation for E{u(X2)} is the same except for the terms with k = i − 1, i, and i + 1. Consequently, if a mass Δ is moved from x_i to each of x_{i−1} and x_{i+1}, the difference E{u(X1)} − E{u(X2)} = Δ[2u(x_i) − u(x_{i−1}) − u(x_{i+1})] ≥ 0 by the concavity of u(x).

Note that, for a particular i, the x's do not all need to be equally spaced; rather, we should have x_{i+1} − x_i = x_i − x_{i−1}.



In the first case, the condition e^{−d/m} = 1/2 gives d = m ln 2 ≈ 0.69m.

In the second case, P(ξ > d) = ∫_d^m (1/m) dx = (m − d)/m; since this should equal 1/2, we have d = (1/2)m.

In the first case, the deductible is somewhat larger. It is not surprising, since the exponential r.v. has a greater dispersion and may assume large values.


2. Chapter 2:

1. (a) The student should look over Table (0.2.6.1).

(b) Since (1.1.3) is a particular case of (1.1.10), it suffices to consider the Γ-density. Let a r.v. X_{1ν} have the density f_{1ν}(x) = (1/Γ(ν)) x^{ν−1} e^{−x} for x ≥ 0, and = 0 otherwise. Consider the r.v. X_{aν} = X_{1ν}/a. In accordance with (0.2.6.1), X_{aν} has the density (a^ν/Γ(ν)) x^{ν−1} e^{−ax} for x ≥ 0, and = 0 otherwise. This is the density f_{aν}(x) from (1.1.10). We have arrived at it just by changing the scale.

5. (a) If X has the d.f. F1(x) = x^γ/(1 + x^γ), then by virtue of (0.2.6.1), the r.v. Y = θX has the d.f. F(x) = (x/θ)^γ/(1 + (x/θ)^γ).

7. F_Y(x) = P(X ≤ x/k) = 1 − q + qF_ξ(x/k); E{Y} = kqμ, and Var{Y} = k²(qv² + q(1 − q)μ²).

19. (b) Solution: Note that f_{S2}(x − y) = x − y if 0 ≤ x − y ≤ 1, which is equivalent to x − 1 ≤ y ≤ x, and f_{S2}(x − y) = 2 − (x − y) if 1 ≤ x − y ≤ 2, which is equivalent to x − 2 ≤ y ≤ x − 1.

Keeping this in mind, we have for 0 ≤ x ≤ 1,

f_{S3}(x) = ∫_0^x (x − y) dy = x²/2.

For 1 ≤ x ≤ 2,

f_{S3}(x) = ∫_0^{x−1} (2 − x + y) dy + ∫_{x−1}^1 (x − y) dy = [−2x² + 6x − 3]/2.

For 2 ≤ x ≤ 3,

f_{S3}(x) = ∫_{x−2}^1 (2 − x + y) dy = [x² − 6x + 9]/2.

Otherwise, f_{S3}(x) = 0.
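The piecewise formulas can be sanity-checked numerically; a sketch in Python verifying that the pieces glue continuously and that the density integrates to 1:

```python
def f_S3(x):
    """Density of the sum of three independent uniforms on [0, 1]."""
    if 0 <= x <= 1:
        return x * x / 2
    if 1 <= x <= 2:
        return (-2 * x * x + 6 * x - 3) / 2
    if 2 <= x <= 3:
        return (x * x - 6 * x + 9) / 2
    return 0.0

# The pieces agree at the break points x = 1 and x = 2 (both give 1/2).
# Total mass by the midpoint rule over [0, 3]:
n = 300_000
h = 3 / n
total = sum(f_S3((k + 0.5) * h) for k in range(n)) * h
print(total)  # ≈ 1.0
```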

21. Since a is a common scale parameter, without loss of generality we may set a = 1. Since f_{1ν1}(x − y) = 0 for y > x, we have

f_S(x) = ∫_0^∞ f_{1ν1}(x − y) f_{1ν2}(y) dy = (1/(Γ(ν1)Γ(ν2))) ∫_0^x (x − y)^{ν1−1} e^{−(x−y)} y^{ν2−1} e^{−y} dy
       = (x^{ν1+ν2−1} e^{−x}/(Γ(ν1)Γ(ν2))) ∫_0^x (1 − y/x)^{ν1−1} (y/x)^{ν2−1} dy/x.

By the variable change z = y/x,

f_S(x) = (x^{ν1+ν2−1} e^{−x}/(Γ(ν1)Γ(ν2))) ∫_0^1 (1 − z)^{ν1−1} z^{ν2−1} dz = B(ν1, ν2) · (Γ(ν1+ν2)/(Γ(ν1)Γ(ν2))) · f_{1(ν1+ν2)}(x),

where the constant B(ν1, ν2) = ∫_0^1 (1 − z)^{ν1−1} z^{ν2−1} dz. We know that f_S(x) is a density, and hence ∫_0^∞ f_S(x) dx = 1; since f_{1(ν1+ν2)} is a density as well, it follows that B(ν1, ν2) = Γ(ν1)Γ(ν2)/Γ(ν1+ν2). In mathematics, the function B(·, ·) is called the Beta-function or the Euler integral.
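The identity B(ν1, ν2) = Γ(ν1)Γ(ν2)/Γ(ν1+ν2) can be checked numerically; a Python sketch with hypothetical shape parameters ν1 = 2.5, ν2 = 3.5:

```python
import math

def beta_numeric(a, b, n=200_000):
    """B(a, b) = ∫_0^1 (1 - z)^(a-1) z^(b-1) dz by the midpoint rule."""
    h = 1.0 / n
    return sum((1 - (k + 0.5) * h) ** (a - 1) * ((k + 0.5) * h) ** (b - 1)
               for k in range(n)) * h

nu1, nu2 = 2.5, 3.5  # hypothetical shape parameters
lhs = beta_numeric(nu1, nu2)
rhs = math.gamma(nu1) * math.gamma(nu2) / math.gamma(nu1 + nu2)
print(lhs, rhs)  # the two values agree to integration accuracy
```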

30. The parameter a = 1/m. Hence, in view of (0.4.3.5), Mξ(z) exists for z < 1/m. It is noteworthy that the inequality is strict.

33. The function in Fig.10a cannot be a m.g.f. The function g(z) in Fig.10c is not a m.g.f. either. The function in Fig.10b looks like a m.g.f.

34. For Fig.11a, the corresponding r.v.'s have a negative mean, while for Fig.11b, it is positive.

35. M′(0) = E{X} = m, M″(0) = E{X²} = m² + σ².

36. The means are the same, Var{X1} < Var{X2}.

47. (a) θ ≈ 0.105 (obtained with the 0.99-quantile 2.326 of the standard normal distribution).


48. (d) θ ≈ 0.229.

(f) θ_new ≈ 0.162.

Additional Problems:

(a) 0.3e^{7z} + 0.7e^{11z};  0.7e^{7z} + 0.3e^{11z};  (0.3e^{7z} + 0.7e^{11z})(0.7e^{7z} + 0.3e^{11z}).

(b) The Γ-distribution with a = 1/3 and ν = 30.

3. Chapter 3:

2. If N ≡ n, then M_N(z) = e^{zn}. Therefore, (1.5) implies M_S(z) = M_N(ln M_X(z)) = exp{n ln M_X(z)} = (M_X(z))^n.

4. The numbers of claims with priority for the separate insurances are the r.v.'s

X_i = { 1 with probability 1 − e^{−y},
        0 with probability e^{−y},       i = 1, 2.

The sum

S = X1 + X2 = { 0 with probability e^{−2y},
                1 with probability 2e^{−y}(1 − e^{−y}),
                2 with probability (1 − e^{−y})².

On the other hand, for the joint insurance, the number of claims is a Poisson r.v. with parameter 2y. So, the number of priority claims is the r.v.

S′ = { 0 with probability e^{−2y},
       1 with probability e^{−2y} · 2y,
       2 with probability 1 − e^{−2y} − e^{−2y} · 2y.


We see that P(S = 0) = P(S′ = 0), while P(S = 1) > P(S′ = 1): the inequality 2e^{−y}(1 − e^{−y}) > e^{−2y} · 2y is equivalent to e^{y} − 1 > y, which holds for all y > 0.

As a matter of fact, the last conclusion is true for any particular distribution of the X's. Indeed, for the corresponding events we have {S = 0} = {S′ = 0}, {S = 1} ⊃ {S′ = 1}, and hence {S = 2} ⊂ {S′ = 2}. Thus, P(S ≤ S′) = 1 and P(S < S′) > 0. So, for the same price, the joint insurance covers more claims. (It makes sense also to mention the first stochastic dominance.)
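The comparison of S and S′ can also be checked numerically for a particular parameter; a Python sketch with a hypothetical value y = 0.7:

```python
import math

y = 0.7  # hypothetical Poisson parameter for each separate portfolio

# Separate insurances: S = X1 + X2 with X_i = 1 w.p. 1 - e^{-y}
pS = {0: math.exp(-2 * y),
      1: 2 * math.exp(-y) * (1 - math.exp(-y)),
      2: (1 - math.exp(-y)) ** 2}

# Joint insurance: N ~ Poisson(2y); the priority-claim count is min(N, 2)
pSp = {0: math.exp(-2 * y),
       1: 2 * y * math.exp(-2 * y)}
pSp[2] = 1 - pSp[0] - pSp[1]

print(pS[1], pSp[1], pS[2], pSp[2])
# P(S=1) > P(S'=1) and P(S=2) < P(S'=2), while P(S=0) = P(S'=0)
```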

6. (a) y = 231 · 0.01 + 124 · 0.05 + 347 · 0.03 = 18.92, n ≈ 0.02695.

12. (a) If ξ is the size of a damage and d is the deductible, then the number of claims is a Poisson r.v. N with parameter λ = yP(ξ > d) = ye^{−ad} = 300e^{−2/10} ≈ 245.62. So, E{N} = Var{N} ≈ 245.62.

(b) P(N ≤ 230) = Σ_{k ≤ 230} e^{−λ}λ^k/k!. Since √Var{N} ≈ 15.67, the number 230 differs from E{N} approximately by the standard deviation, so the probability is not small.

(c) Since E{N} is large, the distribution of N is close to the corresponding normal distribution. Since the difference E{N} − 230 is close to the standard deviation, P(N ≤ 230) ≈ Φ(−1) ≈ 0.159. (More precisely, one may write P(N ≤ 230) = P((N − 245.62)/15.67 ≤ (230 − 245.62)/15.67) ≈ P((N − 245.62)/15.67 ≤ −0.997) ≈ Φ(−0.997) ≈ 0.159.)
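The normal approximation can be compared with the exact Poisson probability; a minimal Python sketch, with the parameter 300e^{−0.2} from part (a):

```python
import math
from statistics import NormalDist

lam = 300 * math.exp(-0.2)  # ≈ 245.62, the thinned Poisson parameter
sd = math.sqrt(lam)         # ≈ 15.67

# Exact: P(N <= 230); Poisson terms computed in log space to avoid overflow
exact = sum(math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))
            for k in range(231))

# Normal approximation: Φ((230 - λ)/sd) ≈ Φ(-0.997)
approx = NormalDist().cdf((230 - lam) / sd)

print(exact, approx)  # the approximation is within about 0.01 of the exact value
```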


21. By (1.1) and (1.3), for the total payment S, we have E{S} = 4 · 50 = 200, Var{S} = 2 · 50 + 16 · 10² = 1700. If the distribution of S is closely approximated by the Γ-distribution with parameters a, ν, then ν/a = 200 and ν/a² = 1700. Then a = 200/1700 ≈ 0.118, and ν ≈ 23.53. Using Excel, we get that P(S > 250) ≈ 1 − Γ(250; 0.118, 23.53) ≈ 0.117.
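Without Excel, the Γ-tail can be checked by direct numerical integration of the Γ-density; a Python sketch with a = 200/1700 and ν = 200a:

```python
import math

a = 200 / 1700          # rate parameter, so that ν/a = 200 and ν/a² = 1700
nu = 200 * a            # shape parameter ≈ 23.53

def gamma_pdf(x):
    """Γ(a, ν) density, computed in log space for numerical safety."""
    return math.exp(nu * math.log(a) + (nu - 1) * math.log(x)
                    - a * x - math.lgamma(nu))

# P(S > 250) = 1 - ∫_0^250 gamma_pdf(x) dx  (midpoint rule)
n = 200_000
h = 250 / n
cdf = sum(gamma_pdf((k + 0.5) * h) for k in range(n)) * h
print(1 - cdf)  # ≈ 0.117
```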

28. In the book and below, the symbol ≅ means “equal by distribution”; that is, the distributions of the corresponding r.v.'s are the same.

Let λ = E{N1} = E{N2}, and let N be a Poisson r.v. with parameter 2λ. Consider r.v.'s Y_i mutually independent, independent of N, and assuming the values ±1 with probabilities 1/2. Let W = Y1 + ... + Y_N, let Λ1 be the number of Y's assuming the value 1, and Λ2 the number of Y's assuming the value −1.

(We may view this as the situation of two portfolios with the respective numbers of claims Λ1 and Λ2. The size of all claims in the first portfolio equals 1, while in the second, it is −1.)

By Proposition 10, Λ1 and Λ2 are independent Poisson r.v.'s with parameter λ; that is, the vector (Λ1, Λ2) ≅ (N1, N2). On the other hand, W = Λ1 − Λ2, and hence W ≅ N1 − N2.

The m.g.f.

M_{N1−N2}(z) = M_{N1}(z)M_{−N2}(z) = exp{λ(e^z − 1)} exp{λ(e^{−z} − 1)} = exp{λ(e^z + e^{−z} − 2)}.

Note that we could obtain the same result using the representation W = Y1 + ... + Y_N above and (1.6), writing

M_W(z) = exp{2λ(M_Y(z) − 1)} = exp{2λ((1/2)e^z + (1/2)e^{−z} − 1)} = exp{λ(e^z + e^{−z} − 2)} = M_{N1−N2}(z).
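The equality of the two m.g.f.'s can be confirmed numerically at a few points; a Python sketch with a hypothetical λ = 3:

```python
import math

lam = 3.0  # hypothetical common mean of N1 and N2

def mgf_diff(z):
    """M_{N1-N2}(z) = M_{N1}(z) M_{-N2}(z) for independent Poisson(lam) r.v.'s."""
    return math.exp(lam * (math.exp(z) - 1)) * math.exp(lam * (math.exp(-z) - 1))

def mgf_w(z):
    """Compound Poisson: N ~ Poisson(2 lam), Y = ±1 with probability 1/2 each."""
    m_y = 0.5 * math.exp(z) + 0.5 * math.exp(-z)
    return math.exp(2 * lam * (m_y - 1))

zs = [-1.0, -0.3, 0.0, 0.5, 1.2]
for z in zs:
    print(z, mgf_diff(z), mgf_w(z))  # the two columns coincide
```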


29. Let N be a Poisson r.v. such that E{N} = 300. Consider r.v.'s Y_i mutually independent, independent of N, and assuming the values 3 and −5 with respective probabilities 1/3 and 2/3. Let W = Y1 + ... + Y_N, let Λ1 be the number of Y's assuming the value 3, and Λ2 the number of Y's assuming the value −5.

(We may view this as the situation of two portfolios with the respective numbers of claims Λ1 and Λ2. The size of all claims in the first portfolio equals 3, and in the second it equals −5.)

By Proposition 10, Λ1 and Λ2 are independent Poisson r.v.'s with parameters λ1 = (1/3) · 300 = 100 and λ2 = (2/3) · 300 = 200, respectively. Then, the vector (Λ1, Λ2) ≅ (N1, N2). On the other hand, it is clear that W = 3Λ1 − 5Λ2, and hence W ≅ 3N1 − 5N2.

33. (a) N has the Poisson distribution with λ = λ1 + λ2 = 10 · 1² + 10 · 2² = 50. Thus, E{N} = Var{N} = 50. Using software we get P(N ≤ 50) ≈ 0.5375.

(b) Since p1 = λ1/λ = 10/50 = 0.2, in accordance with (3.2.7), P(N1 < 11 | N = 50) = B(10; 0.2, 50), where B(x; p, n) is the binomial d.f. with parameters p, n. Excel gives 0.58356.
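Both probabilities can be reproduced without Excel; a Python sketch using only the standard library:

```python
import math

lam = 50  # λ = λ1 + λ2

# (a) P(N <= 50) for N ~ Poisson(50); terms computed in log space
p_a = sum(math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))
          for k in range(51))

# (b) P(N1 < 11 | N = 50) = B(10; p1, 50) with p1 = λ1/λ = 10/50
p1 = 10 / 50
p_b = sum(math.comb(50, k) * p1 ** k * (1 - p1) ** (50 - k)
          for k in range(11))

print(p_a, p_b)  # ≈ 0.5375 and ≈ 0.58356
```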


(c) This is the compound Poisson distribution of a r.v. W = Y1 + ... +YN where N is Poisson with λ = 50, and the independent Y’s take values 100 and 300 with probabilities p1 = 0.2 and p2 = 0.8, respectively.

(d) E{S} = 50(100 · 0.2 + 300 · 0.8) = 13000; Var{S} = 50(100² · 0.2 + 300² · 0.8) = 3,700,000. In accordance with (1.6),

M_S(z) = exp{50(0.2e^{100z} + 0.8e^{300z} − 1)} = exp{10e^{100z} + 40e^{300z} − 50}.


34. (a) Since p1 = 0.2 and p2 = 0.8, we have S ≅ W, where W is defined as in Exercise 33c above with

Y = { 100 with probability (1/5) · (1/3) = 1/15,
      200 with probability (1/5) · (2/3) + (4/5) · (1/2) = 8/15,
      300 with probability (4/5) · (1/2) = 2/5.

(b) E{S} = 50E{Y} = 50 · (3500/15) ≈ 11666.67; Var{S} = 50E{Y²} = 50 · 58000 = 29 · 10⁵.

(c) This follows from (3.1.13), where N1, N2, N3 are the numbers of Y's which assumed the values 100, 200, 300, respectively. Consequently, E{N_i} = E{N}p_i. Namely, E{N1} = 50 · (1/15) = 10/3, E{N2} = 50 · (8/15) = 80/3, E{N3} = 50 · (2/5) = 20.
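Under the reading that Y takes the values 100, 200, 300 with probabilities (1/5)(1/3), (1/5)(2/3) + (4/5)(1/2), and (4/5)(1/2), the stated moments can be checked exactly; a Python sketch using exact fractions:

```python
from fractions import Fraction as F

# Mixture probabilities with portfolio weights p1 = 1/5, p2 = 4/5
pY = {100: F(1, 5) * F(1, 3),
      200: F(1, 5) * F(2, 3) + F(4, 5) * F(1, 2),
      300: F(4, 5) * F(1, 2)}

EY = sum(x * p for x, p in pY.items())       # E{Y}
EY2 = sum(x * x * p for x, p in pY.items())  # E{Y^2}

ES = 50 * EY     # E{S} = E{N} E{Y}
VarS = 50 * EY2  # Var{S} = E{N} E{Y^2} for a compound Poisson sum

print(sum(pY.values()), EY, EY2, ES, VarS)
# 1, 700/3, 58000, 35000/3, 2900000
```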

35. In this case, S ≅ W, where W is defined as in Exercise 33c above with Y's having the density f(x) = p1 f1(x) + p2 f2(x), where f1(x), f2(x) are the uniform densities on [100, 200] and [200, 300], respectively. Hence,

f(x) = { 0.2 · (1/100) = 0.002 if x ∈ [100, 200],
         0.8 · (1/100) = 0.008 if x ∈ [200, 300],
         0 otherwise.

Then E{S} = E{N}(p1 m1 + p2 m2) = 50(0.2 · 150 + 0.8 · 250) = 11500, Var{S} = E{N}E{Y²} = 50(0.2 · (150² + 100²/12) + 0.8 · (250² + 100²/12)) ≈ 2766666.7, and

M_S(z) = exp{50(M(z) − 1)},

where

M(z) = p1 M1(z) + p2 M2(z) = 0.2 · (e^{200z} − e^{100z})/(100z) + 0.8 · (e^{300z} − e^{200z})/(100z).
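The moments of the mixture density can be checked by numerical integration; a Python sketch:

```python
def f(x):
    """Mixture density: 0.2 × Uniform[100, 200] + 0.8 × Uniform[200, 300]."""
    if 100 <= x < 200:
        return 0.002
    if 200 <= x <= 300:
        return 0.008
    return 0.0

n = 200_000
h = 200 / n  # integrate over the support [100, 300]
xs = [100 + (k + 0.5) * h for k in range(n)]

mass = sum(f(x) for x in xs) * h
EY = sum(x * f(x) for x in xs) * h
EY2 = sum(x * x * f(x) for x in xs) * h

print(mass, 50 * EY, 50 * EY2)  # ≈ 1, 11500, 2766666.7
```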


36. The problem is similar to Exercise 35. In this case, S ≅ W, where W is defined as in Exercise 33c above with Y's having the density f(x) = w1 f1(x) + w2 f2(x), where f1(x), f2(x) are the uniform densities on [100, 300] and [200, 400], respectively. Hence,

f(x) = { 0.2 · (1/200) = 0.001 if x ∈ [100, 200],
         0.2 · (1/200) + 0.8 · (1/200) = 0.005 if x ∈ [200, 300],
         0.8 · (1/200) = 0.004 if x ∈ [300, 400],
         0 otherwise.

Then E{S} = E{N}(w1 m1 + w2 m2) = 50(0.2 · 200 + 0.8 · 300) = 14000, Var{S} = E{N}E{Y²} = 50(0.2 · (200² + 200²/12) + 0.8 · (300² + 200²/12)) ≈ 4166666.7, and M_S(z) = exp{50(M(z) − 1)}, where

M(z) = w1 M1(z) + w2 M2(z) = 0.2 · (e^{300z} − e^{100z})/(200z) + 0.8 · (e^{400z} − e^{200z})/(200z).


An additional problem: λ1λ2, λ1(λ2 + λ2²).

4. Chapter 4:

1. (a) Let us measure time in hours.  Consider two intervals: ∆ 1 = [0, 0.5] and ∆2 = (0.5, 1].  Since an interarrival time cannot be larger than one hour, if there were no arrivals in ∆ 1 , then there should be an arrival in ∆2 ; that is, P(N∆2  = 0|N∆1  = 0) = 0. On the other hand, P(N∆2  = 0|N∆1  = 1) > 0 because if there was an arrival in ∆1 , the next arrival may occur after half an hour elapses.

(b) Similar to the above reasoning, P(N2 = 2 | N1.5 = 2, N1 = 2) = 0 since the condition means that there were no arrivals in [1, 1.5]. On the other hand, P(N2 = 2 | N1.5 = 2) > 0 because there may be no arrivals during half an hour. Thus, the process is not Markov. Hence it is not a process with independent increments, because processes of the latter type have the Markov property.

2. Assume that the interarrival times are independent, and the expected value of the kth interarrival time equals k. Consider two intervals: ∆1 = [0, 1] and ∆2 = (1, 2]. The probability P(N∆2 = 0 | N∆1 = 1) should be much smaller than P(N∆2 = 0 | N∆1 = 1000) because, given that there were 1000 arrivals, the expected waiting time for the next arrival is about 1001 times larger than the length of a unit interval, and the probability that there will be no arrival during a unit time interval is close to one.

Actually, we can state it rigorously if we observe that, due to the memoryless property, P(N∆2 = 0 | N∆1 = k − 1) = P(τ_k > 1), where τ_k is the kth interarrival time. The last probability equals exp{−1/k} → 1 as k → ∞.

12. (a) Since N_t is a process with independent increments, Corr{N2, N4 − N2} = 0.

(b)

E{N_t N_{t+s}} = E{N_t(N_t + (N_{t+s} − N_t))} = E{N_t²} + E{N_t(N_{t+s} − N_t)}
             = E{N_t²} + E{N_t}E{N_{t+s} − N_t}
             = (E{N_t})² + Var{N_t} + E{N_t}E{N_{t+s} − N_t}
             = λ²t² + λt + λt · λs = λt + λ²t(t + s).

Next,

Cov{N_t, N_{t+s}} = E{N_t N_{t+s}} − E{N_t}E{N_{t+s}} = λt + λ²t(t + s) − λt · λ(t + s) = λt,

and

Corr{N_t, N_{t+s}} = Cov{N_t, N_{t+s}}/√(Var{N_t} Var{N_{t+s}}) = λt/√(λt · λ(t + s)) = √(t/(t + s)).
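The correlation formula can be checked by simulation, generating N_t and the independent increment over (t, t+s] separately; a Python sketch with hypothetical λ = 3 and t = s = 2:

```python
import math
import random

def poisson(lmbda, rng):
    """Sample from Poisson(lmbda) by Knuth's product-of-uniforms method."""
    limit = math.exp(-lmbda)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

lam, t, s = 3.0, 2.0, 2.0  # hypothetical intensity and time points
n = 200_000
rng = random.Random(2024)

xs, ys = [], []
for _ in range(n):
    nt = poisson(lam * t, rng)              # N_t ~ Poisson(lam * t)
    xs.append(nt)
    ys.append(nt + poisson(lam * s, rng))   # N_{t+s} = N_t + independent increment

mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
vx = sum((x - mx) ** 2 for x in xs) / n
vy = sum((y - my) ** 2 for y in ys) / n
corr = cov / math.sqrt(vx * vy)

print(corr, math.sqrt(t / (t + s)))  # both ≈ 0.707
```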





(c) By the memoryless property, at the moment t, “everything starts over as from the beginning”, and we can think about the mth arrival after time t. For a separate interarrival time τ, we have E{τ} = 1/λ and Var{τ} = 1/λ². Consequently, the expected time of the mth arrival after the moment t is t + m/λ, and its variance is m/λ².


13. By virtue of (2.2.6), we should give an example of a function λ(t) for which lim_{t→∞} χ(t) = 1. Let, for instance, λ(t) = e^{−t}. Then χ(∞) = ∫_0^∞ e^{−s} ds = 1.

Another example may be λ(t) = 1/(1 + t²), because ∫_0^∞ ds/(1 + s²) = π/2.

16. (a) It is clearly zero. The beginning of the day is the end of the previous day. (b) At the middle of the day.

(c) E{N365} = Var{N365} = ∫_0^365 |sin(πt)| dt = 365 ∫_0^1 sin(πt) dt = 730/π ≈ 232.37.

(d) E{N365} ≈ 232, which is large enough to apply normal approximation.  (See also the reasoning on p. 164.) So, the probability we consider is approximately equal to 2(1 − Φ(1)) ≈ 0.317.

17. Let time be measured in hours. If τ 1 = 1/2, then after the first arrival, during the next half an hour, the intensity of arrivals is not high, and the probability that there will be no arrivals during half an hour is not small. However, if τ1 = 1, then after the first arrival, the intensity will become very high, and the probability that there will be no arrivals during half an hour will be very small.

More precisely, P(τ2 > 0.5 | τ1 = 0.5) = P(N_{(0.5, 1]} = 0) = exp{−1 · (1/2)} = e^{−1/2}, while P(τ2 > 0.5 | τ1 = 1) = P(N_{(1, 1.5]} = 0) = exp{−100 · (1/2)} = e^{−50}, which is a very small number.

19. With respect to the original process of accidents, the process counting serious injuries is a marked Poisson process with the intensity λp = 30 · 0.1 = 3. Hence, E{N30} = 90, and P(N30 > 100) = 1 − P(N30 ≤ 100) ≈ 1 − Φ((100 − 90)/√90) ≈ 1 − Φ(1.05) ≈ 0.146.
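The normal approximation here can be compared with the exact Poisson tail; a Python sketch with λ = 90:

```python
import math
from statistics import NormalDist

lam = 3 * 30  # E{N_30} = 90 for the marked process with intensity 3

# Exact tail P(N_30 > 100); Poisson terms computed in log space
cdf_100 = sum(math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))
              for k in range(101))
tail = 1 - cdf_100

# Normal approximation: 1 - Φ((100 - 90)/√90)
approx = 1 - NormalDist().cdf((100 - lam) / math.sqrt(lam))

print(tail, approx)  # exact ≈ 0.135, approximation ≈ 0.146
```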