MATH 126A: Stochastic Processes - Spring 2023
Markov Chains - Exercises
Exercise 1. Back to your pet mouse
Consider the pet mouse problem in homework 2 (Exercise 2).
(a) Find the communication classes. Is this Markov chain irreducible?
Solution: The Markov chain is irreducible, as I can exhibit a cycle visiting all states: the mouse can go from room A to room B to room C and back to A.
(b) Find the period of room A. Is the Markov chain aperiodic?
Solution: The Markov chain is aperiodic. Indeed, I can go from A back to A in 2 steps (A to B to A) or in 3 steps (A to B to C to A), so the period of A, which is the gcd of all possible numbers of steps, has to be gcd(2, 3) = 1.
(c) After a long time, what is the probability that you find the mouse in room A?
Solution: Since the chain is irreducible and aperiodic, the distribution of the positions converges to the invariant measure π. Based on the result of homework 2, P[Xn = A] → π(A).
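Since the actual transition matrix from homework 2 is not reproduced here, the convergence can be illustrated with hypothetical transition probabilities for the three rooms (a sketch; the numbers below are illustrative choices, not the ones from homework 2):

```python
import numpy as np

# Hypothetical transition probabilities for the three rooms, in the
# order A, B, C (the actual matrix is the one from homework 2, Ex. 2;
# these numbers are only illustrative).
P = np.array([
    [0.0, 1.0, 0.0],   # from A the mouse always moves to B
    [0.5, 0.0, 0.5],   # from B it moves to A or C with equal probability
    [1.0, 0.0, 0.0],   # from C it returns to A
])

# Distribution of X_n starting from room A: repeatedly multiply by P.
mu = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    mu = mu @ P

# Invariant measure: normalized left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

print(mu)   # after many steps, approximately equal to pi
print(pi)
```

With these illustrative numbers, both vectors agree to many decimal places, which is exactly the convergence-to-invariant-measure statement of the solution.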
Exercise 2. Newspaper piling up:
Consider Exercise 3 of homework 2. Is the chain irreducible and aperiodic?
Solution: Yes it is. It is irreducible because state 0 quite obviously communicates with all states: in particular, we can find a cycle going through all states, 0 → 1 → 2 → 3 → 4 → 0. The chain is aperiodic because 0 can return to 0 in 1 step, so the period of 0, which is the gcd of all numbers of steps for 0 to return to itself, has to be 1.
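The gcd-of-return-times criterion can also be checked mechanically. The sketch below computes the period of a state as the gcd of all possible return times up to a horizon, demonstrated on a hypothetical version of the newspaper chain (the 1/2 probabilities are illustrative assumptions, not the values from homework 2; only which transitions are possible matters for the period):

```python
from math import gcd
import numpy as np

def period(P, state, nmax=100):
    """Period of `state`: gcd of all n <= nmax such that the chain can
    return from `state` to itself in exactly n steps."""
    A = (np.asarray(P) > 0).astype(int)    # adjacency of possible moves
    g, Q = 0, np.eye(len(A), dtype=int)    # Q = exactly-n-step reachability
    for n in range(1, nmax + 1):
        Q = ((Q @ A) > 0).astype(int)
        if Q[state, state]:
            g = gcd(g, n)
    return g

# Hypothetical version of the newspaper chain on {0, 1, 2, 3, 4}: each
# day the pile grows by one with probability 1/2 or is cleared back to
# 0 (from 4 it is always cleared).
P = np.zeros((5, 5))
for i in range(4):
    P[i, i + 1] = 0.5
    P[i, 0] = 0.5
P[4, 0] = 1.0

print(period(P, 0))   # the 1-step return 0 -> 0 forces period 1
```

Working with the 0/1 reachability matrix rather than powers of P avoids numerical underflow of small probabilities, since only the support of P matters for the period.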
Exercise 3. A factory constructs mugs that are sold to a souvenir shop.
● The factory produces a fixed number of mugs per day (A mugs).
● The store orders a random number of units each day, noted Dn for day n. The Dn are independent random variables with distribution:
P[Dn = k] = ρ(k)
with Σ_{k≥0} ρ(k) = 1 and ρ(k) > 0 for all k.
● If the store's order exceeds the available mug stock Xn at the factory, the factory sends all the mugs it has available that day, and the order is considered completed (the factory will not complete the order with mugs manufactured the following day; you can think, e.g., that the store orders other mugs elsewhere to complete its daily needs).
● Moreover, the factory has a limited capacity and can only store up to 1,000 mugs; if the stock exceeds the storage capacity, extra units are discarded.
(a) Show that the stock volume Xn at night n constitutes a finite-state time-homogeneous Markov chain.
Solution: The probability distribution of the number of mugs in storage at night n + 1 only depends on the stock volume at night n, since it equals min(1000, max(0, Xn + A - Dn)), where the order volumes Dn are independent identically distributed random variables, hence independent of the stock level and of n.
(b) If there are Xn = i mugs in the storage room at night on day n, what are the possible numbers of mugs in the storage room at night on day n + 1? Compute the transition probability matrix.
[For convenience, you may use the maps ρ̄(n) = Σ_{k≥n} ρ(k) = P[Dn ≥ n] and ρ̃(n) = Σ_{k=0}^{n} ρ(k) = P[Dn ≤ n].]
Solution: I have provided a general formula above, but let's unpack it now. Conditionally on Xn = i, we have:
Xn+1 = 0 if i + A - Dn ≤ 0,
Xn+1 = i + A - Dn if 0 < i + A - Dn < 1000,
Xn+1 = 1000 if i + A - Dn ≥ 1000.
Let us now compute the probability of each of these possible transitions:
P[Xn+1 = 0 | Xn = i] = P[i + A - Dn ≤ 0] = P[Dn ≥ i + A] = ρ̄(i + A).
P[Xn+1 = 1000 | Xn = i] = P[i + A - Dn ≥ 1000] = P[Dn ≤ i + A - 1000] = ρ̃(i + A - 1000).
P[Xn+1 = j | Xn = i] = P[i + A - Dn = j] = P[Dn = i + A - j] = ρ(i + A - j) for 0 < j < 1000.
Note of caution: reaching a stock of 1000 is impossible if i + A < 1000 (i.e., if there have never been more than 1000 mugs in the factory before the order), so one may think that we need to specify separately P[i, 1000] = 0 for all i < 1000 - A. While it is correct to specify this, it is not necessary: in that case i + A - 1000 < 0 and we directly get a zero probability, since by definition ρ̃(Z) = P[Dn ≤ Z] = 0 for any Z < 0, as Dn ≥ 0 with probability 1.
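The formulas of part (b) can be sanity-checked numerically by building the full (1001 × 1001) transition matrix and verifying that each row sums to 1. The geometric demand law and the value A = 5 below are illustrative assumptions (any pmf ρ on the non-negative integers would do):

```python
import numpy as np

# Illustrative parameters: capacity C = 1000 (from the statement), daily
# production A = 5 (an arbitrary choice), and a hypothetical geometric
# demand law rho(k) = (1 - q) * q**k for k >= 0.
C, A, q = 1000, 5, 0.75

def rho(k):        # P[D = k]
    return (1 - q) * q**k if k >= 0 else 0.0

def rho_bar(n):    # P[D >= n]
    return 1.0 if n <= 0 else q**n

def rho_tilde(n):  # P[D <= n]
    return 0.0 if n < 0 else 1.0 - q**(n + 1)

# Transition matrix from part (b):
#   P[i, 0] = rho_bar(i + A),  P[i, C] = rho_tilde(i + A - C),
#   P[i, j] = rho(i + A - j) for 0 < j < C.
P = np.zeros((C + 1, C + 1))
for i in range(C + 1):
    P[i, 0] = rho_bar(i + A)
    P[i, C] = rho_tilde(i + A - C)
    for j in range(1, C):
        P[i, j] = rho(i + A - j)

# Every row must sum to 1: it is the distribution of the next stock level.
print(np.allclose(P.sum(axis=1), 1.0))   # prints True
```

The three cases partition the event space for Dn (order at least i + A, order equal to i + A - j, order at most i + A - 1000), which is why the rows sum to 1 exactly.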
(c) Is the Markov chain irreducible? Is it aperiodic?
Solution: The Markov chain is irreducible because I can create a cycle going through all the states. Indeed, with probability p = ρ(A - 1), the stock increases by exactly 1 unit in a night. With probability p, we thus go from X0 = 0 to X1 = 0 + A - (A - 1) = 1, then with the same probability p from X1 = 1 to X2 = 2, then to X3 = 3, and we can keep going until we reach X1000 = 1000, from where the Markov chain can return to 0 in a single step with probability ρ̄(1000 + A) ≥ ρ(1000 + A) > 0 (it suffices that the order be at least 1000 + A mugs).
It is aperiodic because any state can return to itself in one step with probability at least ρ(A) > 0 (the order exactly matches the day's production).
Exercise 4. Consider the random walk on the graph below. Find the communication classes and their period.
Solution: The chain is irreducible because we can find a closed walk from the red circle back to itself going through all elements: going from that circle to the blue circle above it leads the Markov chain through all elements of the upper loop with probability 1 before returning to the red circle, and likewise the lower loop is traversed with probability 1 once entered, so every state communicates with the red circle.
For the period, let's consider the red element again. Going to the left, it returns to itself in 6 steps with probability 1. Going up, it returns to itself in 10 steps with probability 1. The gcd of 6 and 10 is 2. Moreover, any number of steps needed to return to the red circle has to be even: any loop from red to red consists of a certain number (say, p) of upward loops (10 steps each) and a certain number (q) of leftward loops (6 steps each), so the number of steps to return can always be written 10p + 6q, which is always even. The period is therefore 2.
Alternatively, if the red circle is labeled 0 and the blue circles are numbered according to their order on the longer loop, we can form the periodicity classes C0 = {0, 2, 4, 6, 8} and C1 = {1, 3, 5, 7, 9}; these are indeed periodicity classes since P[Xn+1 ∈ C1 | Xn ∈ C0] = 1 and P[Xn+1 ∈ C0 | Xn ∈ C1] = 1.
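As a numerical check, one can build a chain matching this description, assuming a layout where the two loops share only the red circle (state 0) and where the walker picks a loop with probability 1/2 at the red circle (an assumption; the split does not affect the period), and compute the period as the gcd of all possible return times:

```python
from math import gcd
import numpy as np

# Assumed layout: a short loop of length 6 (0 -> 1 -> ... -> 5 -> 0)
# and a long loop of length 10 (0 -> 6 -> ... -> 14 -> 0), sharing only
# the red circle (state 0), traversed deterministically once entered.
P = np.zeros((15, 15))
P[0, 1] = P[0, 6] = 0.5        # at the red circle, pick a loop
for s in range(1, 5):          # short loop, 6 steps back to 0
    P[s, s + 1] = 1.0
P[5, 0] = 1.0
for s in range(6, 14):         # long loop, 10 steps back to 0
    P[s, s + 1] = 1.0
P[14, 0] = 1.0

def period(P, state, nmax=100):
    """gcd of all n <= nmax with a possible n-step return to `state`."""
    A = (P > 0).astype(int)
    g, Q = 0, np.eye(len(A), dtype=int)
    for n in range(1, nmax + 1):
        Q = ((Q @ A) > 0).astype(int)
        if Q[state, state]:
            g = gcd(g, n)
    return g

print(period(P, 0))   # gcd of {6, 10, 12, 16, ...} = 2
```

The possible return times are exactly the sums 10p + 6q found in the argument above, and the computed gcd confirms the period of 2.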
2023-05-12