In contrast to frequentist inference, Bayesian inference uses Bayes’ Theorem to update the probability of a hypothesis as more evidence or information becomes available.
As more evidence accumulates, the evidence carries more weight in the posterior than the prior does.
In the updating process, Bayesians transform the prior distribution into the posterior distribution through the likelihood function:
- If we leave the prior unspecified (i.e., uniform), the posterior distribution is proportional to the likelihood function
- If we set an informative prior, the posterior distribution is pulled toward the prior
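A minimal sketch of these two cases, assuming a Beta prior and a binomial coin-flip likelihood (the same conjugate setup used in the example below); the flip counts are made up for illustration:

```python
from scipy import stats
import numpy as np

x = np.linspace(0.01, 0.99, 99)  # grid of candidate values for p
heads, tails = 7, 3              # example evidence: 7 heads, 3 tails

# Uniform prior Beta(1, 1): the posterior Beta(1+7, 1+3) is proportional
# to the likelihood p^7 * (1-p)^3, so it peaks at the sample proportion.
uniform_posterior = stats.beta.pdf(x, 1 + heads, 1 + tails)

# Informative prior Beta(10, 10) (a prior belief that the coin is fair):
# the posterior Beta(10+7, 10+3) is pulled back toward p = 0.5.
informed_posterior = stats.beta.pdf(x, 10 + heads, 10 + tails)

print(x[np.argmax(uniform_posterior)])   # near 0.7, the sample proportion
print(x[np.argmax(informed_posterior)])  # near 0.57, shrunk toward 0.5
```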
Bayes’ Theorem
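Stated for completeness, the theorem relates the posterior to the prior through the likelihood:

```latex
\underbrace{P(H \mid E)}_{\text{posterior}}
  \;=\;
  \frac{\overbrace{P(E \mid H)}^{\text{likelihood}}\;\overbrace{P(H)}^{\text{prior}}}
       {\underbrace{P(E)}_{\text{evidence}}}
```

where H is the hypothesis and E is the observed evidence.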
Usage
- correct the inverse fallacy (confusing P(A|B) with P(B|A)) and the base-rate fallacy
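A worked illustration of the base-rate fallacy; the disease-testing numbers here are hypothetical, chosen only to show how a low base rate dominates a seemingly accurate test:

```python
# Hypothetical numbers for illustration only.
p_disease = 0.01            # prior: base rate of the disease, P(disease)
p_pos_given_disease = 0.99  # sensitivity, P(+ | disease)
p_pos_given_healthy = 0.05  # false-positive rate, P(+ | healthy)

# Law of total probability: P(+)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' Theorem: P(disease | +)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # 0.167
```

Despite the test being 99% sensitive, a positive result implies only about a 1-in-6 chance of disease, because healthy people vastly outnumber sick ones.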
Example
Coin flip: Prior & Posterior Distributions
from scipy import stats
import numpy as np
import matplotlib.pyplot as plt

colors = ['red', 'darkorange', 'gold', 'forestgreen', 'royalblue', 'blueviolet']

# Plot the prior and the sequence of posterior Beta distributions.
# alpha, beta: parameters of the Beta prior.
# flips: list of (heads, tails) cumulative counts; put (0, 0) first so the
#        first panel shows the prior itself.
def posterior_plot(alpha, beta, flips):
    x = np.linspace(0, 1, 1000)
    num_rows = len(flips)
    posteriors = []
    fig, axes = plt.subplots(num_rows + 1, figsize=(num_rows + 1, 4.5 * num_rows))
    for i, (heads, tails) in enumerate(flips):
        ax = axes[i]
        # Conjugate update: Beta prior + binomial likelihood
        # gives a Beta(heads + alpha, tails + beta) posterior.
        posteriors.append(stats.beta.pdf(x, heads + alpha, tails + beta))
        ax.plot(x, posteriors[i], linewidth=3, color=colors[i])
        ax.set_xlabel('p', style='italic')
        ax.set_ylabel('Density')
        if i == 0:
            ax.set_title("Prior", fontweight='bold')
        else:
            ax.set_title("Posterior after {} heads, {} tails".format(heads, tails), fontweight='bold')
    # Final panel: overlay all curves for comparison.
    ax = axes[num_rows]
    for i in range(num_rows):
        ax.plot(x, posteriors[i], linewidth=3, color=colors[i])
    ax.set_title("All", fontweight='bold')
    ax.set_xlabel('p', style='italic')
    ax.set_ylabel('Density')
    fig.subplots_adjust(hspace=.6)
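To read numbers off these curves rather than eyeballing them, the same conjugate update gives closed-form summaries. A sketch, independent of the plotting helper above; the flip counts are arbitrary example totals:

```python
from scipy import stats

alpha, beta = 1, 1     # uniform Beta(1, 1) prior
heads, tails = 14, 6   # example cumulative flip counts

# Frozen posterior distribution Beta(alpha + heads, beta + tails)
posterior = stats.beta(alpha + heads, beta + tails)

mean = posterior.mean()               # (1 + 14) / (2 + 20) = 15/22
low, high = posterior.interval(0.95)  # central 95% credible interval

print(round(mean, 3))  # 0.682
print(round(low, 3), round(high, 3))
```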