1. Maximum likelihood estimation
First of all, let's derive the maximum likelihood estimator.
$$\begin{eqnarray}p(X|\lambda) &=& \prod^N_np(x_n|\lambda)\\ &=&\prod^N_n \frac{\lambda^{x_n}}{x_n!}e^{-\lambda}\end{eqnarray}$$In the frequentist setting, we can estimate a value for $\lambda$ by maximizing the likelihood function. This is equivalent to maximizing the logarithm of the likelihood.
$$\log p(X|\lambda) = \left(\sum^N_n x_n\right)\log\lambda -N\lambda -\sum^N_n \log x_n!$$To obtain the maximum likelihood estimator, we set the derivative with respect to $\lambda$ equal to zero.
$$\begin{eqnarray}\frac{d \log p(X|\lambda)}{d\lambda} &=& \frac{\sum^N_n x_n}{\lambda} - N = 0 \\ \lambda_{ML} &=& \frac{\sum^N_n x_n}{N}\end{eqnarray}$$So the maximum likelihood estimator is simply the mean of the observed values.
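As a quick sanity check, we can also maximize the log-likelihood numerically with a generic optimizer and confirm that it recovers the sample mean. The sketch below uses synthetic Poisson counts and an arbitrary search interval, since the real data has not been loaded yet.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# synthetic Poisson counts; lam=5.6 is an arbitrary choice for this check
rng = np.random.default_rng(0)
counts = rng.poisson(lam=5.6, size=1000)

def neg_log_lik(lam):
    # negative log-likelihood of the Poisson model
    return -poisson.logpmf(counts, lam).sum()

res = minimize_scalar(neg_log_lik, bounds=(0.1, 50), method='bounded')
print(res.x, counts.mean())  # the optimizer should land on the sample mean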
Next, let's look at the implementation of maximum likelihood estimation.
As observed data, we will use the "Wine quality data set". You can refer to it here. Since I personally prefer red wine over white wine, the red wine data is gonna be used.
import pandas as pd
import numpy as np
from scipy.stats import gamma
from scipy.stats import poisson
from scipy.stats import nbinom
import matplotlib.pyplot as plt
%matplotlib inline
red_wine = pd.read_csv('./winequality/winequality-red.csv',
delimiter=';')
df_quality = pd.DataFrame(red_wine.quality.value_counts().sort_index())
df_quality.columns = ['number']
df_quality.index.name = 'quality'
df_quality.head()
Let's take a look at the observed data with a bar graph.
# plot observed value
plt.bar(df_quality.index,df_quality.number.values)
plt.title('Observed value',fontsize=14)
plt.xlabel('quality',fontsize=12)
plt.ylabel('occurrence',fontsize=12)
plt.xlim((0,20))
plt.ylim((0,800))
plt.show()
Next we will compute the parameter $\lambda$ of the Poisson distribution using the maximum likelihood estimator we derived.
# create observed data from red_wine data
obs_data = []
for qual, num in zip(df_quality.index, df_quality.number):
    temp_list = [qual for i in range(num)]
    obs_data.extend(temp_list)
# compute maximum likelihood estimator
ml_val = np.array(obs_data).mean()
print('Maximum likelihood result : ',ml_val)
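As an aside, the same array can be built in one line with np.repeat, which repeats each quality value by its count; this is an equivalent, more compact alternative to the loop above.
# equivalent vectorized construction of obs_data
obs_data_alt = np.repeat(df_quality.index.values, df_quality.number.values)
print(np.isclose(obs_data_alt.mean(), ml_val))  # True: same mean as the loop version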
Now we can see the result of maximum likelihood estimation in a graph.
x = np.linspace(0,20,21)
y = poisson.pmf(x,ml_val)
# plot result of maximum likelihood estimation
plt.bar(x,y)
plt.title('maximum likelihood estimation',fontsize=14)
plt.ylim((0,0.20))
plt.xlim((0,20))
plt.show()
Comparing a sample drawn from the maximum likelihood estimate with the observed values gives a better understanding.
plt.figure(figsize=(13,5))
# plot observed value
plt.subplot(1,2,1)
plt.bar(df_quality.index,df_quality.number.values)
plt.title('Observed value',fontsize=14)
plt.xlabel('quality',fontsize=12)
plt.ylabel('occurrence',fontsize=12)
plt.xlim((0,20))
plt.ylim((0,700))
# The number of observations is 1599
plt.subplot(1,2,2)
sample_ml = poisson.rvs(ml_val,size=1599)
binn = np.linspace(sample_ml.min(),sample_ml.max(),
sample_ml.max()-sample_ml.min() +1)
occur_ml = [(sample_ml == abinn).sum() for abinn in binn]
plt.bar(binn,occur_ml)
plt.title('Sample from maximum likelihood estimator',
fontsize=14)
plt.xlim((0,20))
plt.ylim((0,700))
plt.show()
2. Predictive model
Next we're gonna look at the predictive model with Bayesian inference. The predictive model can be given as follows,
$$\begin{eqnarray}p(x_*|X) &=& \int p(x_*,\lambda|X)d\lambda \\ &=& \int \frac{p(x_*|\lambda)p(X|\lambda)p(\lambda)}{p(X)}d\lambda\\ &=& \int p(x_*|\lambda)p(\lambda|X)d\lambda\end{eqnarray}$$The calculation process will be omitted here; as a result, the predictive model turns out to be a negative binomial distribution, which takes the form,
$$p(x_*) = NB(x_*|r,p)\ \ where\ \ r=\hat{a},\ \ p=\frac{\hat{b}}{\hat{b}+1}\tag{1}$$where $\hat{a}$ and $\hat{b}$ are the parameters of the posterior Gamma distribution. First of all, the posterior distribution should be formed from the observed data. This process was described in Bayesian inference (Poisson distribution) ①.
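Before building the posterior, if you'd like to convince yourself of equation (1) without redoing the algebra, a Monte Carlo check works: average the Poisson pmf over many $\lambda$ values drawn from a Gamma distribution and compare the result with the negative binomial pmf. Below is a minimal sketch; the parameter values and evaluation point are arbitrary choices for the check.
# Monte Carlo check of equation (1) at the single point x=10
a_chk, b_chk = 4.0, 0.2   # arbitrary Gamma parameters for the check
lam_samples = gamma.rvs(a=a_chk, scale=1/b_chk, size=100000, random_state=0)
mc_pmf = poisson.pmf(10, lam_samples).mean()
nb_pmf = nbinom.pmf(10, a_chk, b_chk/(b_chk + 1))
print(mc_pmf, nb_pmf)     # the two values should nearly coincide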
x_pr = np.linspace(1,40,101)
# plot the Gamma prior with the same hyperparameters used in train_gamma below
y_pr = gamma.pdf(x_pr,a=3,scale=1/0.2)
plt.plot(x_pr,y_pr)
plt.title('prior distribution',fontsize = 15)
plt.show()
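For reference, the conjugate update implemented in train_gamma below follows from Gamma-Poisson conjugacy: with a $Gam(a,b)$ prior, the posterior over $\lambda$ is again a Gamma distribution,
$$p(\lambda|X) = Gam(\lambda|\hat{a},\hat{b})\ \ where\ \ \hat{a}=\sum^N_n x_n + a,\ \ \hat{b}=N+b$$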
def train_gamma(data, hyper_a=3, hyper_b=0.2):
    """
    Return the parameters of the posterior Gamma distribution,
    given observed counts and the Gamma prior hyperparameters.
    """
    hat_a = data.sum() + hyper_a      # posterior shape: sum of counts + prior shape
    hat_b = data.shape[0] + hyper_b   # posterior rate: number of observations + prior rate
    return hat_a, hat_b
hat_a,hat_b = train_gamma(np.array(obs_data))
y_posterior = gamma.pdf(x_pr,a=hat_a,scale=1/hat_b)
# plot posterior distribution
plt.plot(x_pr,y_posterior)
plt.title('posterior distribution',fontsize=15)
plt.show()
Now we can compute the predictive model from the posterior distribution and equation (1).
r = hat_a
p = hat_b/(1+ hat_b)
y_predict = nbinom.pmf(x,r,p)
plt.bar(x,y_predict)
plt.title('Bayesian inference',fontsize=14)
plt.ylim((0,0.20))
plt.show()
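A quick way to relate the two approaches: in scipy's parametrization, the mean of $NB(r,p)$ is $r(1-p)/p = \hat{a}/\hat{b}$, which for a weak prior is very close to the sample mean, i.e. the maximum likelihood estimate. A one-line check:
# the predictive mean (hat_a/hat_b) should sit close to the ML estimate
print(nbinom.mean(r, p), ml_val)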
Finally, we will compare the observed values with a sample from the maximum likelihood estimator and a sample from the predictive model of Bayesian inference.
plt.figure(figsize=(13,10))
# plot observed value
plt.subplot(2,2,1)
plt.bar(df_quality.index,df_quality.number.values)
plt.title('Observed value',fontsize=14)
plt.xlabel('quality',fontsize=12)
plt.ylabel('occurrence',fontsize=12)
plt.xlim((0,20))
plt.ylim((0,700))
# The number of observations is 1599
plt.subplot(2,2,2)
sample_ml = poisson.rvs(ml_val,size=1599)
binn = np.linspace(sample_ml.min(),sample_ml.max(),
sample_ml.max()-sample_ml.min() +1)
occur_ml = [(sample_ml == abinn).sum() for abinn in binn]
plt.bar(binn,occur_ml)
plt.title('Sample from maximum likelihood estimator',
fontsize=14)
plt.xlim((0,20))
plt.ylim((0,700))
# The number of observations is 1599
plt.subplot(2,2,3)
sample_predict = nbinom.rvs(r,p,size=1599)
binn = np.linspace(sample_predict.min(),sample_predict.max(),
                   sample_predict.max()-sample_predict.min() +1)
occur_predict = [(sample_predict == abinn).sum() for abinn in binn]
plt.bar(binn,occur_predict)
plt.title('Sample from predictive model of Bayesian inference',
          fontsize=14)
plt.xlim((0,20))
plt.ylim((0,700))
plt.show()