Boolean Functions
These functions have a boolean output. They are fairly simple and classify Markov chains according to structural properties such as regularity, ergodicity, absorption, and reversibility.
Contents
Index
DiscreteMarkovChains.is_absorbing
DiscreteMarkovChains.is_ergodic
DiscreteMarkovChains.is_regular
DiscreteMarkovChains.is_reversible
Documentation
DiscreteMarkovChains.is_regular
— Function
is_regular(x)
Definitions
A Markov chain is called a regular chain if some power of the transition matrix has only positive elements. This is equivalent to being ergodic and aperiodic.
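The definition above can be checked directly: raise the transition matrix to successive powers and test whether some power is strictly positive. The following is a naive illustrative sketch in Python (the helper names `mat_mul` and `is_regular_naive` are hypothetical and not part of DiscreteMarkovChains):

```python
# Naive regularity check from the definition: some power T^k of the
# transition matrix must have only positive entries.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_regular_naive(T, max_power=64):
    """True if some power T^k (k <= max_power) is strictly positive."""
    P = T
    for _ in range(max_power):
        if all(x > 0 for row in P for x in row):
            return True
        P = mat_mul(P, T)
    return False

# Irreducible and aperiodic, so some power is strictly positive:
T = [[0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0]]
print(is_regular_naive(T))  # True

# A periodic cycle (1 -> 2 -> 3 -> 1): its powers are permutation
# matrices, so no power is strictly positive.
T_periodic = [[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]]
print(is_regular_naive(T_periodic))  # False
```

For an n-state chain a finite power bound suffices (by Wielandt's theorem, checking up to $(n-1)^2 + 1$ is enough), so the cutoff above is safe for small chains.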
Arguments
x: some kind of discrete Markov chain.
Returns
true if the Markov chain, x, is regular.
Examples
We will set up a matrix with 2 communication classes and show that it is not regular.
using DiscreteMarkovChains
T = [
0 1 0;
1 0 0;
0 0 1;
]
X = DiscreteMarkovChain(T)
is_regular(X)
# output
false
Repeat the above but now all states communicate.
T = [
0.0 0.5 0.5;
0.0 0.0 1.0;
1.0 0.0 0.0;
]
X = DiscreteMarkovChain(T)
is_regular(X)
# output
true
Notice how a periodic chain is not regular even though there is only one communication class.
T = [
0 1 0;
0 0 1;
1 0 0;
]
X = DiscreteMarkovChain(T)
is_regular(X)
# output
false
DiscreteMarkovChains.is_ergodic
— Function
is_ergodic(x)
Definitions
A Markov chain is called an ergodic chain if it is possible to go from every state to every state (not necessarily in one move). That is, every state communicates with every other, so the whole transition matrix forms one communication class.
In many books, ergodic Markov chains are called irreducible.
If a Markov chain is not irreducible then it is reducible. This means that the Markov chain can be broken down into smaller irreducible chains that do not communicate. One chain might still be accessible from another though.
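Irreducibility reduces to a reachability question on the positive entries of the transition matrix. A naive illustrative sketch (the helper names `reachable_from` and `is_ergodic_naive` are hypothetical, not the package implementation):

```python
# A chain is ergodic (irreducible) iff every state can reach every
# other state along positive-probability transitions.

def reachable_from(T, i):
    """States reachable from state i, possibly in several steps."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v, p in enumerate(T[u]):
            if p > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_ergodic_naive(T):
    n = len(T)
    return all(reachable_from(T, i) == set(range(n)) for i in range(n))

# Two communication classes ({1, 2} and {3}): not ergodic.
T = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 1]]
print(is_ergodic_naive(T))  # False
```

Note that this check ignores periodicity: a periodic cycle passes it, which is why a chain can be ergodic without being regular.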
Arguments
x: some kind of Markov chain.
Returns
true if the Markov chain, x, is ergodic. That is, if every state can be accessed from every other state. Another term for this is irreducible.
Examples
We will set up a matrix with 2 communication classes and show that it is not ergodic.
using DiscreteMarkovChains
T = [
0 1 0;
1 0 0;
0 0 1;
]
X = DiscreteMarkovChain(T)
is_ergodic(X)
# output
false
Repeat the above but now all states communicate.
T = [
0.0 0.5 0.5;
0.0 0.0 1.0;
1.0 0.0 0.0;
]
X = DiscreteMarkovChain(T)
is_ergodic(X)
# output
true
Notice how a periodic chain is still ergodic, no matter the periodicity.
T = [
0 1 0;
0 0 1;
1 0 0;
]
X = DiscreteMarkovChain(T)
is_ergodic(X)
# output
true
DiscreteMarkovChains.is_absorbing
— Function
is_absorbing(x)
Definitions
A Markov chain is absorbing if it has at least one absorbing state, and if from every state it is possible to go to an absorbing state (not necessarily in one step).
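This definition has two parts, and both can be checked naively: a state i is absorbing when its self-transition probability is 1, and the chain is absorbing when every state can reach at least one absorbing state. A hypothetical sketch (the helpers `reachable_from` and `is_absorbing_naive` are illustrative, not the package implementation):

```python
# A state i is absorbing when T[i][i] == 1. The chain is absorbing when
# at least one absorbing state exists and every state can reach one.

def reachable_from(T, i):
    """States reachable from i along positive-probability transitions."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v, p in enumerate(T[u]):
            if p > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_absorbing_naive(T):
    absorbing = {i for i in range(len(T)) if T[i][i] == 1}
    if not absorbing:
        return False
    return all(reachable_from(T, i) & absorbing for i in range(len(T)))

T = [[1.0, 0.0, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.3, 0.7]]
print(is_absorbing_naive(T))  # True
```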
Arguments
x: some kind of Markov chain.
Returns
true if the Markov chain, x, is an absorbing chain. That is, the process is guaranteed to be absorbed eventually.
Examples
The following is a typical example of an absorbing chain.
using DiscreteMarkovChains
T = [
1.0 0.0 0.0;
0.1 0.8 0.1;
0.0 0.3 0.7;
]
X = DiscreteMarkovChain(T)
is_absorbing(X)
# output
true
If a Markov chain does not have an absorbing state then the chain is not absorbing. The converse is not true, as the last example below shows.
T = [
0.5 0.5 0.0;
0.5 0.0 0.5;
0.0 0.5 0.5;
]
X = DiscreteMarkovChain(T)
is_absorbing(X)
# output
false
In the following, the chain has multiple absorbing states but the process is not guaranteed to be absorbed into those states.
T = [
1 0 0 0;
0 1 0 0;
0 0 0 1;
0 0 1 0;
]
X = DiscreteMarkovChain(T)
is_absorbing(X)
# output
false
DiscreteMarkovChains.is_reversible
— Function
is_reversible(x)
Definitions
A Markov chain is said to be reversible if it satisfies the equation $π_i p_{ij} = π_j p_{ji}$ where $π$ is the stationary distribution and $p$ are the elements of the transition matrix of the Markov chain.
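When the stationary distribution is known, the detailed balance condition can be verified entry by entry. A hypothetical sketch (the helper `is_reversible_naive` is illustrative, not the package implementation, and assumes `pi` is already a stationary distribution of `T`):

```python
# Check detailed balance: pi_i * p_ij == pi_j * p_ji for all i, j,
# up to floating-point tolerance.

def is_reversible_naive(T, pi, tol=1e-12):
    n = len(T)
    return all(abs(pi[i] * T[i][j] - pi[j] * T[j][i]) <= tol
               for i in range(n) for j in range(n))

# A symmetric, doubly stochastic matrix is reversible with respect to
# the uniform stationary distribution.
T = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
pi = [1/3, 1/3, 1/3]
print(is_reversible_naive(T, pi))  # True
```

By contrast, a deterministic cycle has the uniform distribution as its stationary distribution but violates detailed balance, since probability flows around the cycle in only one direction.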
Arguments
x: some kind of Markov chain.
Returns
true if the Markov chain is reversible.