
Efficient way of finding a probability: NCD policy

0 votes
37 views
asked Apr 18 in BUS 3018F - Models by shuri (490 points)

[Image: the transition matrix P from the ActEd notes]


Hey guys. My question is from ActEd pages 14–16 of Chapter 3.

Given this matrix, they suggest an efficient way of working out the probability above. 

It reads: "Since we know that the distribution at time n is (1,0,0), we can calculate the probability distribution at time n+1 by post-multiplying the vector (1,0,0) by the transition matrix P." They then multiply the resulting vector by P, and by P again, to get the final answer.


Please explain the intuition behind this method and why we multiply by the vector (1,0,0).
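
For reference, here is how I read the calculation they describe, as a quick Python/numpy sketch. Only the first row of P below is implied by the working ((1,0,0)P = (0.25, 0.75, 0)); the other rows are my guess at the usual three-state NCD matrix, so substitute the actual matrix from the image.

```python
import numpy as np

# Assumed NCD transition matrix: only the first row is confirmed by the notes.
P = np.array([[0.25, 0.75, 0.00],
              [0.25, 0.00, 0.75],
              [0.00, 0.25, 0.75]])

v = np.array([1.0, 0.0, 0.0])   # at time n we are certainly in state 0

for step in range(1, 4):        # post-multiply by P once per time step
    v = v @ P
    print(f"distribution at time n+{step}: {v}")
```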

1 Answer

+1 vote
answered Apr 20 by Murray (660 points)
selected Apr 22 by shuri
 
Best answer

The idea behind this method stems from the properties of matrix multiplication. To help, I have set you a small task and then explained part of it, so you can see why this works so handily.

The Task

Look at the different states and write out all the possible ways of moving into state 0 in two steps, having started in state 0 at time 0. One possible way would be 0 -> 0 -> 0 (starting in state 0 and remaining there through two time steps). Once you have all the possible ways of going from state 0 to state 0 in two steps, work out the probability of each one occurring. For our example, we start in state 0 with probability 1; in the first transition there is a 0.25 probability of staying in state 0, and in the second transition there is again a 0.25 probability of staying in state 0 (for the same reason as in the first transition).

The next step is to multiply the matrices out and compare. You will see that multiplying out the brackets conveniently handles every possible probability path for us, without us having to worry about forgetting a way of getting to state 0. Remember that this is a small-scale example, but the result holds for matrices and models of any size, which is why matrices are so useful and convenient here.
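
If you want to check the task numerically, here is a small sketch (Python/numpy, same assumed matrix as in the question; only the first row is confirmed). It lists every two-step path 0 -> k -> 0, adds up the path probabilities, and compares the total with the (1,1) entry of P squared.

```python
import numpy as np

# Assumed NCD transition matrix (substitute the one from the image).
P = np.array([[0.25, 0.75, 0.00],
              [0.25, 0.00, 0.75],
              [0.00, 0.25, 0.75]])

# Enumerate every two-step path 0 -> k -> 0 and add up the path probabilities.
total = 0.0
for k in range(3):
    path_prob = P[0, k] * P[k, 0]
    print(f"path 0 -> {k} -> 0: {P[0, k]} * {P[k, 0]} = {path_prob}")
    total += path_prob

# Multiplying out the matrices does exactly this bookkeeping for us.
print("sum over paths:", total)
print("(P^2)[0, 0]   :", (P @ P)[0, 0])
```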

Some intuition along the way

Our handy vector (1,0,0) is our starting point. At time n we know for certain that we are in state 0 (which makes sense, as everyone enters the policy with a 0% discount at whatever time point n they join). Defining it like this removes, in the first transition, the possibility of moving from state 1 into state 0, because we place zero probability on starting in state 1. Thinking about it, it wouldn't make sense for a person to enter the contract already holding a 25% discount for not claiming, so there is no way that situation could occur, and the zeros in the vector handle it neatly.

We now take our first transition with the matrix P. This really neat matrix is very powerful, as you can draw many different conclusions from it (that's another story). P shows us the possible ways a person can move between the states in single-step transitions. For example, the entry in row 1, column 2 tells us the probability of moving from state 0 to state 1 in one step, while the entry in row 2, column 1 tells us the probability of moving from state 1 to state 0 in one step.
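
As a tiny illustration of how to read those entries (same assumed matrix as above, 0-indexed in code):

```python
import numpy as np

# Assumed NCD transition matrix (substitute the one from the image).
P = np.array([[0.25, 0.75, 0.00],
              [0.25, 0.00, 0.75],
              [0.00, 0.25, 0.75]])

# Row 1, column 2 of the notes is P[0, 1] here; row 2, column 1 is P[1, 0].
print("P(0 -> 1 in one step):", P[0, 1])
print("P(1 -> 0 in one step):", P[1, 0])
```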

These definitions are important because the matrix P does not specify which state we are currently in, only where we could go in the next time step. This is why, when you multiply your vector (1,0,0) (which tells us which state we are in) by the transition matrix P (which tells us, for each possible starting state, the probability of ending in each state), the probabilities of ending up in state 0, 1 or 2 pop out the other side. If you can't see why, do the exercise I set in the first two paragraphs and it should become clearer.

Just to take the first step: notice that the first column of P contains all the possible ways of ending up in state 0. So, logically, we must multiply the probability of being in each state by the probability of moving from that state into state 0, and add these up. This is exactly how the matrix multiplication is worked out, and why it produces the probability of being in state 0 at time n+1.
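
To see that in numbers (same assumed matrix; only the first row is confirmed), the first entry of (1,0,0)P is just the dot product of the starting vector with the first column of P:

```python
import numpy as np

# Assumed NCD transition matrix (substitute the one from the image).
P = np.array([[0.25, 0.75, 0.00],
              [0.25, 0.00, 0.75],
              [0.00, 0.25, 0.75]])

v = np.array([1.0, 0.0, 0.0])   # certain to be in state 0 at time n

# First column of P: every way of ending in state 0, weighted by where we start.
first_column = P[:, 0]
print("dot product with column 0:", v @ first_column)   # 1*0.25 + 0*0.25 + 0*0.00
print("first entry of v @ P     :", (v @ P)[0])          # same number
```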

The rest is a simple extension of this principle. We now have a new vector (which I worked out to be (0.25, 0.75, 0)), telling us the probabilities of being in each state at time n+1, having started in state 0 at time n. It's important to see that multiplying this by P does essentially the same thing as before, only now we start (at time n+1) in state 0 with probability 0.25 and in state 1 with probability 0.75.
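
A quick sketch of that second step (same assumed matrix): each entry of the new product mixes the contributions from the two states we might now be in.

```python
import numpy as np

# Assumed NCD transition matrix (substitute the one from the image).
P = np.array([[0.25, 0.75, 0.00],
              [0.25, 0.00, 0.75],
              [0.00, 0.25, 0.75]])

v1 = np.array([0.25, 0.75, 0.0])   # distribution at time n+1, from the step above

# For example, P(state 0 at n+2) = 0.25 * P[0, 0] + 0.75 * P[1, 0].
print("by hand :", 0.25 * P[0, 0] + 0.75 * P[1, 0])
print("v1 @ P  :", v1 @ P)
```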

...