Exploring The Metropolis Algorithm For Monte Carlo Simulations In Python

Introduction to the Metropolis Algorithm

The Metropolis Algorithm is a Markov chain Monte Carlo (MCMC) method for generating samples from a probability distribution. It is a special case of the more general Metropolis-Hastings algorithm and is particularly useful when the distribution cannot be sampled from directly, for example when the density is known only up to a normalizing constant or describes a complex real-world system.

In this blog post, we will explore the basics of the Metropolis Algorithm and how to implement it in Python for a simple example.

Sampling Using the Metropolis Algorithm

To better understand the Metropolis Algorithm, let's consider a simple example: drawing samples from a standard normal distribution. The goal is to generate samples using only the ability to evaluate the target density at a point, without relying on a dedicated sampling routine for that distribution.

The algorithm consists of the following steps (a minimal sketch of a single step in Python follows the list):

  1. Initialize a starting point x0 in the state space.
  2. Propose a candidate point x1 by drawing from a symmetric proposal distribution centered at x0, for example a Gaussian perturbation.
  3. Calculate the acceptance ratio A = min(1, P(x1) / P(x0)), where P is the target density; any normalizing constant cancels in the ratio.
  4. Generate a random number r from a uniform distribution on the interval [0, 1].
  5. If r <= A, accept the candidate x1 as the next sample; otherwise keep the current point x0 and record it again.
  6. Repeat steps 2-5 until the desired number of samples has been generated.
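
To make the accept/reject decision concrete, here is a minimal sketch of a single Metropolis step. The function name metropolis_step, the Gaussian proposal, and the proposal_scale parameter are illustrative choices rather than part of the algorithm's definition.

import numpy as np

# Minimal sketch of one Metropolis step (steps 2-5 above); names are illustrative
def metropolis_step(x0, target_pdf, proposal_scale=1.0):
    x1 = x0 + np.random.normal(0, proposal_scale)                # step 2: propose a candidate near x0
    acceptance_ratio = min(1, target_pdf(x1) / target_pdf(x0))   # step 3: acceptance ratio A
    r = np.random.random()                                       # step 4: uniform random number
    return x1 if r <= acceptance_ratio else x0                   # step 5: accept or keep x0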

Implementing the Metropolis Algorithm in Python

Now, let's implement the Metropolis Algorithm in Python using NumPy and Matplotlib to visualize the results.

import numpy as np
import matplotlib.pyplot as plt

# Probability density function of a normal distribution
def pdf(x, mean=0, std_dev=1):
    return np.exp(-(x - mean)**2 / (2 * std_dev**2)) / (std_dev * np.sqrt(2 * np.pi))

# Metropolis Algorithm
def metropolis_algorithm(pdf, initial_state, steps=1000):
    samples = [initial_state]
    for _ in range(steps):
        x0 = samples[-1]
        x1 = x0 + np.random.normal(0, 1)  # Random candidate point from a symmetric proposal
        acceptance_ratio = min(1, pdf(x1) / pdf(x0))
        if np.random.random() <= acceptance_ratio:
            samples.append(x1)  # Accept the candidate
        else:
            samples.append(x0)  # Reject the candidate and keep the current point
    return samples

# Running the algorithm
initial_state = 0
steps = 5000
samples = metropolis_algorithm(pdf, initial_state, steps)

# Visualizing the results
x = np.linspace(-5, 5, 1000)
true_dist = [pdf(xi) for xi in x]
plt.hist(samples, density=True, bins=50, alpha=0.6, label='Metropolis')
plt.plot(x, true_dist, label='True Distribution', color='red')
plt.xlabel('x')
plt.ylabel('Probability Density')
plt.legend()
plt.show()

The code above demonstrates a simple implementation of the Metropolis Algorithm for generating samples from a standard normal distribution. The generated samples are plotted as a histogram alongside the true probability density function, and the histogram closely follows the normal density, with the agreement improving as the number of steps grows.
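
Beyond the visual comparison, a quick numerical sanity check is to discard an initial "burn-in" portion of the chain and compare the sample mean and standard deviation with the target values of 0 and 1. The burn-in length of 500 below is an arbitrary illustrative choice, and samples refers to the list produced by the code above.

burn_in = 500                          # illustrative choice: drop early samples before the chain settles
chain = np.array(samples[burn_in:])
print(f"sample mean: {chain.mean():.3f} (target 0)")
print(f"sample std:  {chain.std():.3f} (target 1)")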

This Python implementation can be adapted to sample from other distributions or to study more complex scenarios. Keep in mind that, for more challenging problems, the proposal distribution and the starting point may need tuning: the proposal step size in particular controls the acceptance rate and how quickly the chain explores the target.
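
As one example of such an adaptation, the sketch below reuses the same sampler on an unnormalized, bimodal target density; because the acceptance ratio only involves P(x1) / P(x0), the unknown normalizing constant cancels. The function name unnormalized_target and the mode locations are illustrative, and for harder targets you would typically also make the proposal step size (fixed at 1 in metropolis_algorithm above) a tunable parameter.

# Sketch: an unnormalized, bimodal target density (normalizing constant unknown but not needed)
def unnormalized_target(x):
    return np.exp(-(x - 2)**2 / 2) + 0.5 * np.exp(-(x + 2)**2 / 2)

bimodal_samples = metropolis_algorithm(unnormalized_target, initial_state=0, steps=20000)

plt.hist(bimodal_samples, density=True, bins=60, alpha=0.6, label='Metropolis (bimodal target)')
plt.xlabel('x')
plt.ylabel('Probability Density')
plt.legend()
plt.show()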

Conclusion

The Metropolis Algorithm is a powerful technique for sampling from probability distributions when direct sampling is difficult or impossible, for example when the density is known only up to a normalizing constant. This basic Python implementation demonstrates the core ideas of the method and can serve as a starting point for applying the Metropolis Algorithm to a wide range of practical problems.