I'll take a shot at answering this one. Before moving on, note that your initial state (at time 0) is Cape Town.

**Let me try to guide you through it step by step:**

1. Firstly, draw your Markov chain. You have two states, Cape Town (call this 1) and London (call this 2), and note that you are given transition probabilities (and you can form a transition probability matrix, which I call \( \mathbf{P}\)):

Pr(London @ time t | London @ time t-1) = \( 1 - \alpha \)

Pr(CT @ time t | London @ time t-1) = \( \alpha \)

Pr(CT @ time t | CT @ time t-1) = \( 1 - \beta \)

Pr(London @ time t | CT @ time t-1) = \( \beta \)

You can set \( \mathbf{P}\) as \( \begin{bmatrix}1 - \beta & \beta \\ \alpha & 1 - \alpha \end{bmatrix} \)
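As a quick numerical sanity check, here is the matrix in code. The values \( \alpha = 0.3, \beta = 0.1 \) are my own illustrative choices (the question keeps them symbolic):

```python
import numpy as np

# Illustrative parameter values; the question leaves alpha and beta symbolic.
alpha, beta = 0.3, 0.1

# State 1 = Cape Town, state 2 = London, as in the write-up above.
P = np.array([[1 - beta, beta],
              [alpha, 1 - alpha]])

# Sanity check: each row of a transition probability matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
print(P)
```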

2. Secondly, consider what has been given in part i). You are given a component of your **marginal distribution** at time n, namely Pr(CT @ time n | CT @ time 0). You can also find Pr(London @ time n | CT @ time 0) by noting that Pr(London @ time n | CT @ time 0) = 1 - Pr(CT @ time n | CT @ time 0). The question asks you to show that this is true. You can either build this up by computing \( (1,0) \times \mathbf{P}^n \) directly, which would be rather tedious work, or prove it by induction, which is much easier (and is, in fact, often called for in Prof. MacDonald's BUS3024S questions!). The crux of the induction proof is to assume that the result is true for time n, and then prove that it holds for n+1, i.e.

Assume:

(Pr(CT @ time n | CT @ time 0), Pr(London @ time n | CT @ time 0)) =

\( \frac{1}{\alpha + \beta}\left[ \alpha + \beta ( 1- \alpha - \beta)^n, \beta - \beta ( 1- \alpha - \beta)^n\right]\),

and then find

\( \frac{1}{\alpha + \beta}\left[ \alpha + \beta ( 1- \alpha - \beta)^n, \beta - \beta ( 1- \alpha - \beta)^n\right] \times \mathbf{P}\),

and go on to show that the first entry of the resulting row vector is precisely \( \frac{1}{\alpha + \beta} \left(\alpha + \beta ( 1- \alpha - \beta)^{n+1}\right)\). This should solve the first part of the question.
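The closed form above can be checked numerically against \( (1,0) \times \mathbf{P}^n \). Again, \( \alpha = 0.3, \beta = 0.1 \) are illustrative values of my own choosing:

```python
import numpy as np

alpha, beta = 0.3, 0.1  # illustrative values only
P = np.array([[1 - beta, beta],
              [alpha, 1 - alpha]])

def closed_form(n):
    """Claimed n-step distribution (Pr(CT), Pr(London)) starting from Cape Town."""
    t = (1 - alpha - beta) ** n
    return np.array([alpha + beta * t, beta - beta * t]) / (alpha + beta)

for n in range(10):
    # (1, 0) @ P^n is just the Cape-Town row of P^n.
    direct = np.linalg.matrix_power(P, n)[0]
    assert np.allclose(direct, closed_form(n))
print("closed form matches P^n for n = 0..9")
```

This is no substitute for the induction proof, but it is a useful way to catch a sign or ordering slip before you start writing.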

3. You are actually now in a position to answer part iii). All it is asking for is (part of) the long-run stationary distribution \( \mathbf{\pi} := [\pi_1, \pi_2]\), which you can find by letting \(n \to \infty \) (provided \( |1 - \alpha - \beta| < 1 \), so that \( (1 - \alpha - \beta)^n \to 0 \)) in

\( \frac{1}{\alpha + \beta}\left[ \alpha + \beta ( 1- \alpha - \beta)^n, \beta - \beta ( 1- \alpha - \beta)^n\right]\).
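Taking that limit kills the \( (1-\alpha-\beta)^n \) term and leaves \( \left[\frac{\alpha}{\alpha+\beta}, \frac{\beta}{\alpha+\beta}\right] \), which you can verify satisfies \( \mathbf{\pi} \mathbf{P} = \mathbf{\pi} \). A quick check with my illustrative values \( \alpha = 0.3, \beta = 0.1 \):

```python
import numpy as np

alpha, beta = 0.3, 0.1  # illustrative values only
P = np.array([[1 - beta, beta],
              [alpha, 1 - alpha]])

# The n -> infinity limit of the marginal distribution.
pi = np.array([alpha, beta]) / (alpha + beta)

# A stationary distribution must satisfy pi P = pi.
assert np.allclose(pi @ P, pi)
print(pi)  # [0.75 0.25]
```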

4. Finally, how would you answer part ii)? In my opinion it is a lot of work, but there may very well be a simpler way! I have never known STA3041F to do the autocorrelation function for (discrete-time) Markov chains, but I guess it is in your best interest to see how it's done. Let \(X_n\) denote the state the process is in at time n, and set the state space to be \( \{1, 2\} \), with Cape Town represented by 1 and London represented by 2. The autocorrelation (or serial correlation, remember from chapter 12?) function is specified by

$$ \rho(X_n, X_{n+2}) = \frac{\mathrm{cov}(X_n, X_{n+2})}{\sqrt{\mathrm{var}(X_n)\,\mathrm{var}(X_{n+2})}}. $$

The numerator can be worked out using \( \mathbb{E}[X_n X_{n+2}] - \mathbb{E}[X_n] \mathbb{E}[X_{n+2}] \). This will require you to know that (think about why the following statement is true, assuming the chain is in its stationary distribution so that \( \Pr(X_n = i) = \pi_i \) - hint: conditional probabilities!):

$$ \mathbb{E}[X_n X_{n+2}] = \sum_{i=1}^2 i\,\mathbb{E}[X_{n+2} \mid X_n = i]\,\pi_i. $$

The denominator can be worked out in exactly the same way, using the fact that under stationarity the variance of such a process is the same at every time point. I leave this challenge for you to figure out!
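To see the whole calculation in one place, here is a numerical sketch under stationarity, again with my illustrative \( \alpha = 0.3, \beta = 0.1 \). The conditional expectations \( \mathbb{E}[X_{n+2} \mid X_n = i] \) are read off from the two-step matrix \( \mathbf{P}^2 \):

```python
import numpy as np

alpha, beta = 0.3, 0.1  # illustrative values only
P = np.array([[1 - beta, beta],
              [alpha, 1 - alpha]])
states = np.array([1.0, 2.0])                  # X takes values 1 (CT), 2 (London)
pi = np.array([alpha, beta]) / (alpha + beta)  # stationary distribution

# Moments of X_n under stationarity (the same for every n).
EX = pi @ states
var = pi @ states**2 - EX**2

# E[X_n X_{n+2}] = sum_i i * E[X_{n+2} | X_n = i] * pi_i,
# with E[X_{n+2} | X_n = i] = row i of P^2 dotted with the state values.
P2 = np.linalg.matrix_power(P, 2)
EXX = sum(states[i] * (P2[i] @ states) * pi[i] for i in range(2))

rho = (EXX - EX * EX) / var
print(rho)
assert np.isclose(rho, (1 - alpha - beta) ** 2)
```

For these values \( \rho \) comes out to \( (1-\alpha-\beta)^2 \), consistent with the known result that a stationary two-state chain has lag-k autocorrelation \( (1-\alpha-\beta)^k \); that can serve as a check on your algebra.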

I hope this helps, and that I have made no typos!

Hi Mario, thank you for a really detailed response. Why is it not necessary to show that this holds true for the base case of the induction proof (n = 1)?