Metropolis algorithm

sampling from a distribution when you only know it up to a normalizing constant, and you're working in a high-dimensional space

The Metropolis–Hastings algorithm can draw samples from any probability distribution with probability density P(x), provided that we know a function f(x) proportional to the density P and the values of f(x) can be calculated. The requirement that f(x) must only be proportional to the density, rather than exactly equal to it, makes the Metropolis–Hastings algorithm particularly useful, because calculating the necessary normalization factor is often extremely difficult in practice.
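As a concrete illustration (my own example, not from the original note): suppose the target is a two-component Gaussian mixture. We can evaluate an unnormalized f(x) directly and never compute the normalizing constant.

```python
import numpy as np

def f(x):
    """Unnormalized target density: a two-component Gaussian mixture.
    The normalizing constant is deliberately omitted; Metropolis-Hastings
    only needs f to be proportional to the true density P."""
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)
```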

The Metropolis–Hastings algorithm works by generating a sequence of sample values in such a way that, as more and more sample values are produced, the distribution of values more closely approximates the desired distribution. These sample values are produced iteratively, with the distribution of the next sample depending only on the current sample value (thus making the sequence of samples a Markov chain). Specifically, at each iteration, the algorithm picks a candidate for the next sample value based on the current sample value. Then, with some probability, the candidate is either accepted (in which case the candidate value is used in the next iteration) or rejected (in which case the candidate value is discarded, and the current value is reused in the next iteration). The probability of acceptance is determined by comparing the values of f(x) at the current and candidate sample values with respect to the desired distribution.
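A minimal random-walk sampler along these lines might look as follows. This is a sketch under my own assumptions, not a canonical implementation: it uses a symmetric Gaussian proposal, so the acceptance ratio reduces to f(candidate) / f(current), and the step size and sample count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Same unnormalized target as in the example above (Gaussian mixture).
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

def metropolis(n_samples, x0=0.0, step=1.0):
    samples = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        # Propose a candidate from a symmetric random-walk proposal.
        candidate = x + rng.normal(scale=step)
        # Accept with probability min(1, f(candidate) / f(x)).
        if rng.uniform() < f(candidate) / f(x):
            x = candidate  # accept: the candidate becomes the next sample
        # otherwise reject: the current value is reused
        samples[i] = x
    return samples

chain = metropolis(50_000)
```

Because the proposal here is symmetric, this is the plain Metropolis algorithm of the note's title; an asymmetric proposal would require the full Hastings correction q(x | x') / q(x' | x) in the acceptance ratio.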


uid: 202006050950 tags: #algorithms


Date
February 22, 2023