3 - Markov Chains
We say that \(\{ Y_n \}_{n=0,1,2,\dots}\) is a discrete-time Markov process (a Markov chain) if, for every choice of states \(j, i, i_{n-1}, \dots, i_0\),
\[ P(Y_{n+1} = j | Y_n = i, Y_{n-1} = i_{n-1}, \dots, Y_0 = i_0) = P(Y_{n+1} = j | Y_n = i) \]
whenever the conditional probabilities on both sides are well defined. In other words, given the present state, the earlier history is irrelevant for predicting the next state.
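As an illustration of the definition, here is a minimal Python sketch (not from the text) that simulates a chain step by step. The two-state transition matrix `P` and the state labels `0`, `1` are made up for the example; the only point is that each new state is drawn using the current state alone.

```python
import numpy as np

# Hypothetical two-state chain with states 0 and 1 (not from the text).
# P[i, j] = P(Y_{n+1} = j | Y_n = i); each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

rng = np.random.default_rng(0)

def simulate(P, y0, n_steps):
    """Simulate a path Y_0, Y_1, ..., Y_{n_steps} of the chain."""
    path = [y0]
    for _ in range(n_steps):
        # The next state is drawn using only the current state:
        # this is exactly the Markov property.
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

print(simulate(P, y0=0, n_steps=10))
```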
3.1 - Natural Consequence of the Definition of a Markov Chain
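One standard consequence of the definition, likely the one this section develops, is that the joint distribution of the chain factors through its one-step conditional probabilities. Conditioning step by step and applying the Markov property at each step gives
\[ P(Y_0 = i_0, Y_1 = i_1, \dots, Y_n = i_n) = P(Y_0 = i_0) \prod_{k=1}^{n} P(Y_k = i_k | Y_{k-1} = i_{k-1}). \]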
3.2 - Temporal Homogeneity of a Markov Chain
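Presumably this section introduces the standard homogeneity assumption: the chain is called temporally homogeneous when the transition probabilities do not depend on the time index, i.e.
\[ P(Y_{n+1} = j | Y_n = i) = p(i,j) \quad \text{for all } n = 0, 1, 2, \dots, \]
so a single function \(p(i,j)\) governs every step. (The notation \(p(i,j)\) is an assumption used here and in the sketches below.)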
3.3 - One-step Transition Probability Matrix
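For a homogeneous chain on the states \(1, \dots, N\) (a finite state space is assumed here for concreteness), the one-step probabilities are typically collected into the matrix
\[ \mathbf{P} = \begin{pmatrix} p(1,1) & \cdots & p(1,N) \\ \vdots & \ddots & \vdots \\ p(N,1) & \cdots & p(N,N) \end{pmatrix}, \]
whose entries are nonnegative and whose rows each sum to \(1\): \(\sum_{j} p(i,j) = 1\) for every \(i\).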
3.4 - Basic Computations Based on One-step Transition Probabilities
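A representative computation of this kind, sketched here as a guess at the section's content, is the two-step transition probability obtained by conditioning on the intermediate state:
\[ P(Y_2 = j | Y_0 = i) = \sum_{k} P(Y_1 = k | Y_0 = i)\, P(Y_2 = j | Y_1 = k) = \sum_{k} p(i,k)\, p(k,j), \]
which is precisely the \((i,j)\) entry of \(\mathbf{P}^2\).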
3.5 - Computing the Distribution of States After \(n\) Steps
This is covered in section 1.4 of the textbook.
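As a sketch of the computation this section points to: if \(\mu_0\) denotes the row vector of initial state probabilities, the distribution after \(n\) steps is \(\mu_n = \mu_0 \mathbf{P}^n\). The minimal example below reuses the hypothetical matrix from the earlier sketch.

```python
import numpy as np

P = np.array([[0.9, 0.1],          # hypothetical one-step transition matrix,
              [0.5, 0.5]])         # reused from the simulation sketch above
mu0 = np.array([1.0, 0.0])         # initial distribution: start in state 0

n = 5
# Distribution after n steps as a row vector: mu_n = mu_0 P^n.
mu_n = mu0 @ np.linalg.matrix_power(P, n)
print(mu_n, mu_n.sum())            # the entries still sum to 1
```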
3.6 - Multistep Transition Probabilities
This is covered in section 1.2 of the textbook.
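The identity underlying multistep probabilities is the Chapman-Kolmogorov equation. Writing \(p^{(n)}(i,j) = P(Y_n = j | Y_0 = i)\) (this notation is an assumption; the textbook may use another),
\[ p^{(m+n)}(i,j) = \sum_{k} p^{(m)}(i,k)\, p^{(n)}(k,j), \]
or equivalently \(\mathbf{P}^{m+n} = \mathbf{P}^{m}\mathbf{P}^{n}\), so the \(n\)-step transition probabilities are the entries of the matrix power \(\mathbf{P}^{n}\).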