Understanding Markov Chain Monte Carlo: A Key to Bayesian Inference

Charles Copley
Jun 16, 2023

When it comes to advanced statistical analysis and probabilistic modeling, one of the most powerful techniques available is the Markov Chain Monte Carlo (MCMC) method. Combining the concepts of Monte Carlo simulation and Markov chains, MCMC provides a practical way to perform Bayesian inference on complex probabilistic models.

So, what is MCMC exactly?

At its core, MCMC is a method for generating samples from complex probability distributions. This is critical in Bayesian inference, where we often need to calculate a posterior distribution — a distribution representing our updated beliefs about a parameter after considering new evidence.
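In symbols, for a parameter theta and observed data D, Bayes’ theorem gives the posterior as the likelihood times the prior divided by a normalising constant; that constant is an integral over all parameter values and is often intractable, which is exactly why we resort to sampling:

p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)},
\qquad
p(D) = \int p(D \mid \theta)\, p(\theta)\, d\theta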

The ‘Monte Carlo’ part of MCMC refers to the method of estimating the properties of a distribution by generating random samples. Monte Carlo methods are used broadly in physics, computational biology, and financial modeling, among other fields.
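For instance, here is a minimal Monte Carlo estimate (a toy example, separate from the model used later in this post) of the probability that a standard normal variable exceeds 2, obtained simply by counting how often random draws land in that region:

import numpy as np

rng = np.random.default_rng(42)
draws = rng.standard_normal(1_000_000)

# Monte Carlo estimate of P(X > 2) for X ~ N(0, 1); the true value is about 0.0228
estimate = np.mean(draws > 2)
print(estimate)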

Meanwhile, the ‘Markov Chain’ component is a mathematical system that experiences transitions from one state to another according to certain probabilistic rules. These transitions are “memoryless,” meaning that the next state depends only on the current state and not on the sequence of events that preceded it.
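As a toy illustration of this memoryless behaviour (a hypothetical two-state ‘weather’ chain, not part of MCMC itself), each new state below is drawn using only the current state’s row of a transition matrix:

import numpy as np

# Hypothetical two-state Markov chain: 0 = 'sunny', 1 = 'rainy'.
# Row i holds the probabilities of moving from state i to each state.
transition_matrix = np.array([[0.9, 0.1],
                              [0.5, 0.5]])

rng = np.random.default_rng(0)
state = 0
path = [state]
for _ in range(10):
    # The next state depends only on the current state
    state = rng.choice(2, p=transition_matrix[state])
    path.append(state)
print(path)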

The MCMC method uses a clever trick: it constructs a Markov Chain that has the desired target distribution as its equilibrium distribution. This means that, after the chain has run for a certain number of initial steps (a period often called ‘burn-in’, whose samples are discarded), the states of the chain are samples from the target distribution.
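To make ‘equilibrium distribution’ concrete, the sketch below reuses the hypothetical weather chain from above: it computes the stationary distribution directly from the transition matrix and checks that, after discarding some initial steps, the fraction of time the simulated chain spends in each state matches it:

import numpy as np

transition_matrix = np.array([[0.9, 0.1],
                              [0.5, 0.5]])

# The stationary distribution is the eigenvector of the transposed
# transition matrix with eigenvalue 1, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(transition_matrix.T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stationary /= stationary.sum()

# Simulate a long chain and discard the first 1,000 steps as burn-in.
rng = np.random.default_rng(1)
state, visits = 0, []
for _ in range(50_000):
    state = rng.choice(2, p=transition_matrix[state])
    visits.append(state)
empirical = np.bincount(visits[1_000:], minlength=2) / len(visits[1_000:])

print("stationary:", stationary)  # approximately [0.833, 0.167]
print("empirical: ", empirical)   # close to the stationary distribution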

A popular algorithm for MCMC is the Metropolis-Hastings algorithm. It starts at an arbitrary position, then ‘proposes’ a move to a new position drawn from a proposal distribution. If the new position is a better fit for the data (i.e., it has a higher likelihood), the algorithm always accepts the move. If the new position is a worse fit, the algorithm may still move there, but only with a probability less than 1 (with a symmetric proposal, this probability is the ratio of the proposed likelihood to the current one). This allows the algorithm to explore the entire parameter space rather than getting stuck around a single mode.

Here is an example of implementing the Metropolis-Hastings algorithm using NumPy. The likelihood is computed in log space to avoid the numerical underflow that comes from multiplying many small density values:

import numpy as np

# Log-likelihood of the data under a Normal(mu, sigma) model.
# Working in log space avoids the numerical underflow caused by
# multiplying hundreds of small density values together.
def log_likelihood(mu, sigma, data):
    return np.sum(-0.5 * ((data - mu) / sigma) ** 2
                  - np.log(sigma * np.sqrt(2 * np.pi)))

# Generate observed data
np.random.seed(123)
observed_data = np.random.normal(loc=5, scale=2, size=100)

# Initialize parameters
mu_current = 0.0   # initial guess for mu
sigma = 2.0        # known standard deviation (matches the data-generating process)
n_samples = 20000

# Store the samples
samples = []

for i in range(n_samples):
    # Propose a new candidate from a symmetric distribution centred on the current value
    mu_proposal = np.random.normal(loc=mu_current, scale=0.5)

    # Calculate log-likelihoods
    log_like_current = log_likelihood(mu_current, sigma, observed_data)
    log_like_proposal = log_likelihood(mu_proposal, sigma, observed_data)

    # Acceptance probability: the likelihood ratio, computed in log space
    p_accept = min(1, np.exp(log_like_proposal - log_like_current))

    # Accept proposal?
    accept = np.random.rand() < p_accept

    if accept:
        # Update position
        mu_current = mu_proposal

    samples.append(mu_current)

import pandas as pd
pd.DataFrame(samples).tail(100).hist()

Notice that we only used the last 100 samples to plot the histogram. This is because of the ‘burn-in’ period I referred to earlier: the chain needs some number of steps to settle into its stationary distribution, so the early samples are discarded.
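If you want to see the burn-in transient for yourself, a quick trace plot of the chain (this assumes the samples list from the example above and matplotlib are available) shows the early samples climbing from the starting value of 0 towards the region around 5 before settling down:

import matplotlib.pyplot as plt

# Trace plot of the chain; the initial transient is the burn-in period
plt.plot(samples)
plt.xlabel("Iteration")
plt.ylabel("mu")
plt.show()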

It’s worth mentioning that MCMC methods are not without their challenges. Convergence to the target distribution can sometimes be slow, and assessing whether the chain has run long enough (i.e., has converged) can be tricky. However, diagnostic tools such as trace plots and the Gelman-Rubin statistic, along with adaptive methods, have been developed to address these issues.
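As a rough sketch of one such diagnostic, here is the classic Gelman-Rubin statistic (R-hat) implemented with NumPy; it assumes you have run the sampler several times from different starting points and stacked the resulting chains into a 2-D array, and values close to 1 suggest the chains agree:

import numpy as np

def gelman_rubin_rhat(chains):
    # `chains` has shape (number of chains, samples per chain)
    chains = np.asarray(chains, dtype=float)
    n = chains.shape[1]
    chain_means = chains.mean(axis=1)
    within = chains.var(axis=1, ddof=1).mean()   # average within-chain variance
    between = n * chain_means.var(ddof=1)        # between-chain variance
    pooled = (n - 1) / n * within + between / n  # pooled variance estimate
    return np.sqrt(pooled / within)

# Example usage with hypothetical chains from three runs of the sampler:
# rhat = gelman_rubin_rhat([samples_run_1, samples_run_2, samples_run_3])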

In conclusion, Markov Chain Monte Carlo is a potent tool for modern statistical analysis, particularly in Bayesian inference. Its ability to generate samples from complex distributions when direct sampling is difficult or impossible makes it indispensable for a wide range of applications. Understanding and using this method can significantly strengthen your data analysis toolkit, so don’t shy away from delving deeper into the fascinating world of MCMC.
