In contrast to frequentist inference, Bayesian inference uses Bayes' Theorem to update the probability of a hypothesis as more evidence or information becomes available (Mimsy, 2017).

In the updating process, Bayesians translate the prior distribution into the posterior distribution through the likelihood function. As more evidence accumulates, that evidence carries more weight in the posterior than the prior does.
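
In symbols, this is the standard proportionality form of the update (the notation, with θ for the hypothesis or parameter and D for the observed data, is added here for clarity):

    P(\theta \mid D) = \frac{P(D \mid \theta)\,P(\theta)}{P(D)} \propto P(D \mid \theta)\,P(\theta)

so the posterior is simply the prior reweighted by the likelihood of the data.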

Bayes’ Theorem

  1. Bayes’ Theorem
  2. Bayes' Theorem (general)
  3. Bayes’ Theorem (odds)

Usage

Example

Coin flip: Prior & Posterior Distributions
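
The plotting code below relies on the standard Beta-Binomial conjugacy result (stated here to explain the heads + alpha, tails + beta update in the code): if the prior on the heads probability p is Beta(α, β), then after observing h heads and t tails the posterior is again a Beta distribution,

    p \sim \mathrm{Beta}(\alpha, \beta) \quad\Longrightarrow\quad p \mid \text{data} \sim \mathrm{Beta}(\alpha + h,\; \beta + t)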

from scipy import stats
import numpy as np
import matplotlib.pyplot as plt

# one color per panel (supports up to six (heads, tails) pairs)
colors = ['red', 'darkorange', 'gold', 'forestgreen', 'royalblue', 'blueviolet']

# Plot the prior and posterior Beta distributions of the heads probability p.
# alpha, beta: parameters of the Beta prior.
# flips: list of (heads, tails) tuples; the first entry should be (0, 0)
#        so that the first panel shows the prior itself.
def posterior_plot(alpha, beta, flips):
    x = np.linspace(0, 1, 1000)
    num_rows = len(flips)
    posteriors = []
    fig, axes = plt.subplots(num_rows + 1, figsize=(num_rows + 1, 4.5 * num_rows))
    # one panel per (heads, tails) pair, using the conjugate Beta update
    for i, flip in enumerate(flips):
        heads, tails = flip
        ax = axes[i]
        posteriors.append(stats.beta.pdf(x, heads + alpha, tails + beta))
        ax.plot(x, posteriors[i], linewidth=3, color=colors[i])
        ax.set_xlabel('p', style='italic')
        ax.set_ylabel('Density')
        if i == 0:
            ax.set_title("Prior", fontweight='bold')
        else:
            ax.set_title("Posterior after {} heads, {} tails".format(heads, tails),
                         fontweight='bold')
    # final panel: overlay all curves for comparison
    ax = axes[num_rows]
    for i in range(num_rows):
        ax.plot(x, posteriors[i], linewidth=3, color=colors[i])
    ax.set_title("All", fontweight='bold')
    ax.set_xlabel('p', style='italic')
    ax.set_ylabel('Density')
    fig.subplots_adjust(hspace=.6)
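
A minimal usage sketch (the prior parameters and flip counts below are illustrative choices, not taken from the original): start from a uniform Beta(1, 1) prior and pass cumulative (heads, tails) counts, with (0, 0) first so the first panel shows the prior itself.

# Illustrative call: uniform Beta(1, 1) prior, then cumulative coin-flip counts.
# The (0, 0) entry makes the first panel display the prior.
posterior_plot(alpha=1, beta=1,
               flips=[(0, 0), (1, 0), (3, 2), (12, 8), (60, 40), (300, 200)])
plt.show()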