Regression & Finite Mixture Models

I wrote a post a while back about Mixture Distributions and Model Comparisons. This post continues that theme and models multiple data generating processes within a single model. The code for this post is available at the github repository. Many useful resources helped me understand this type of model; some are mentioned below and others in the previous post on this topic.

I use the R library flexmix for the sample data and for fitting the model, alongside a Stan/MCMC approach. The data distribution is shown in the figure below, which suggests that there may be two data generation processes at work. To start with, we try to find a distribution that captures certain aspects of this data, and use posterior predictive checks to see whether our choice of distribution is appropriate. A previous post on posterior predictive checks covers this topic in more detail and tests different distributions; to keep it short here, I just try a normal mixture distribution with 2 components and then perform some tests.
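As a rough sketch of how one such predictive data set might be simulated from a fitted two-component normal mixture (the object names mus, sigmas and mix are illustrative and not taken from the actual script; NPreg is the example data set from flexmix):

## simulate one replicated data set from a 2-component normal mixture
n = length(NPreg$yn)
ind = rbinom(n, size = 1, prob = mix[1])   # choose a component for each observation
yrep = ifelse(ind == 1, rnorm(n, mus[1], sigmas[1]), rnorm(n, mus[2], sigmas[2]))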

The left panel of the figure shows the density plot for the raw data, and the right panel shows the same data with a smattering of 200 predictive data sets from the fitted distribution – visually the fit appears fine, with the left component slightly higher in some cases. The aspects of the data that I want the distribution to capture are the mean, variance, maximum and minimum values, and the PValues for these test quantities suggest that any discrepancy can be explained by sampling variability.

> ivTestQuantities
variance  minimum  maximum     mean 
   0.380    0.155    0.300    0.420 

The first steps are to fit a mixture regression model with 2 components and one predictor using the flexmix package and then to repeat this using Stan.

> fit.flex = flexmix(yn ~ x, data=NPreg, k=2)
> summary(fit.flex)

Call:
flexmix(formula = yn ~ x, data = NPreg, k = 2)

       prior size post>0 ratio
Comp.1 0.561  104    198 0.525
Comp.2 0.439   96    146 0.658

'log Lik.' -704.0635 (df=7)
AIC: 1422.127   BIC: 1445.215 

> ## fitted coefficients
> parameters(fit.flex)
                     Comp.1     Comp.2
coef.(Intercept) 31.7644433 -0.7685309
coef.x            0.1867837  5.1841287
sigma             7.7718438  3.3828795

The same model is now broken down into parts and fitted using Stan. You can also write a log posterior function in R to do this, but I find that the optimiser tends not to explore the parameter space well if we also track the mixing weights; however, if we keep the mixing weights fixed, it tends to converge to the same point as Stan.
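As a minimal sketch of what such an R log posterior might look like with the mixing weights held fixed (the function and variable names are illustrative, not the code actually used for the Stan fit):

mixLogPost = function(theta, data){
  ## scale parameters are tracked on the log scale
  sigma = exp(theta[c('sigma1', 'sigma2')])
  betas.1 = theta[c('intercept1', 'slope1')]
  betas.2 = theta[c('intercept2', 'slope2')]
  mix = c(0.44, 0.56)                 # mixing weights held fixed
  X = cbind(1, data$x); y = data$y
  mu.1 = X %*% betas.1                # fitted values for component 1
  mu.2 = X %*% betas.2                # fitted values for component 2
  ## each observation contributes a weighted sum of the two component densities
  llik = sum(log(mix[1]*dnorm(y, mu.1, sigma[1]) + mix[2]*dnorm(y, mu.2, sigma[2])))
  lprior = 1                          # flat prior, as in the other log posteriors in this post
  return(llik + lprior)
}

Given a suitable named starting vector, this could be explored with optim(start, mixLogPost, control=list(fnscale=-1), data=list(x=NPreg$x, y=NPreg$yn)).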

> print(fit.stan, digi=3)
Inference for Stan model: fitNormalMixtureRegression.
1 chains, each with iter=4000; warmup=2000; thin=1; 
post-warmup draws per chain=2000, total post-warmup draws=2000.

                   mean se_mean    sd     2.5%      25%      50%      75%    97.5% n_eff  Rhat
sigma[1]          3.405   0.010 0.395    2.676    3.126    3.395    3.667    4.223  1448 1.002
sigma[2]          7.848   0.019 0.632    6.725    7.407    7.805    8.270    9.164  1142 1.000
iMixWeights[1]    0.439   0.001 0.046    0.350    0.406    0.438    0.472    0.530  1000 1.000
iMixWeights[2]    0.561   0.001 0.046    0.470    0.528    0.562    0.594    0.650  1000 1.000
betasMix1[1]      5.163   0.003 0.146    4.870    5.067    5.160    5.262    5.452  2000 1.001
betasMix2[1]      0.176   0.013 0.347   -0.501   -0.056    0.171    0.391    0.861   751 1.001
mu[1]            -0.686   0.016 0.713   -2.072   -1.164   -0.705   -0.228    0.750  2000 1.000
mu[2]            31.846   0.067 1.971   27.898   30.590   31.890   33.136   35.791   853 1.000
lp__           -708.099   0.063 1.885 -712.367 -709.142 -707.715 -706.712 -705.449   899 1.001

Samples were drawn using NUTS(diag_e) at Tue Jul  4 16:10:02 2017.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

Comparing the results from the two procedures, i.e. flexmix and the Stan model, suggests good agreement between the two methods.

Fitted or Predicted Values

Once the coefficients of the regression equation have been determined, we can calculate the predicted (on new data) or fitted (on current data) values. There are two ways to get these values: either get a prediction for each component, or get an aggregate prediction that combines the components.
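The intercepts, betas and mixing weights below are point estimates taken from the Stan fit; a sketch of how they might be extracted, assuming rstan's extract function and the parameter names from the summary above (the actual script may differ):

l = extract(fit.stan)
intercepts = colMeans(l$mu)                       # mu[1], mu[2]
betas = c(mean(l$betasMix1), mean(l$betasMix2))   # slopes for the two components
iMixWeights = colMeans(l$iMixWeights)             # mixing weights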

> intercepts
[1] -0.6598333 31.5694812
> betas
[1] 5.1614796 0.2241089
> iMixWeights
[1] 0.44 0.56
> iPred1 = lStanData$X %*% c(intercepts[1], betas[1])
> iPred2 = lStanData$X %*% c(intercepts[2], betas[2])
> ## compare with fit.flex
> head(f)
       Comp.1    Comp.2
[1,] 32.54457 20.883670
[2,] 31.98889  5.460877
[3,] 32.19311 11.129077
[4,] 32.87877 30.159296
[5,] 32.20489 11.456077
[6,] 33.19092 38.822976
> head(cbind(iPred1, iPred2))
       [,1]     [,2]
1 20.897771 32.50550
2  5.542358 31.83878
3 11.185795 32.08381
4 30.132872 32.90649
5 11.511366 32.09795
6 38.758702 33.28101

However, getting one aggregate predicted value for the response variable may be more appropriate in certain settings, and requires taking a weighted average of the components (the weights being the mixing probabilities).

> iAggregate = cbind(iPred1, iPred2)
> iAggregate = sweep(iAggregate, 2, iMixWeights, '*')
> iAggregate = rowSums(iAggregate)
> dfPredicted = data.frame(stan=iAggregate, flexmix=p.agg)
> head(dfPredicted)
      stan  flexmix
1 27.39810 27.42164
2 20.26835 20.33447
3 22.88868 22.93915
4 31.68610 31.68404
5 23.03985 23.08942
6 35.69120 35.66522
> ## calculate Mean Squared Error MSE
> mean((dfPredicted$flexmix - NPreg$yn)^2)
[1] 104.4622
> mean((dfPredicted$stan - NPreg$yn)^2)
[1] 104.3325
> ## both pretty close

Comparing the aggregate values from the two models shows that both have produced similar results, and the mean squared errors are also similar.

Model Checking: Scoring and Comparing Models

This is another post in the series on model checking. Previously we looked at which aspects of the data and model are compatible, using posterior predictive checks. Once we have selected a model or a set of models for the data, we would like to score and compare them. One aspect of comparison, using Mixture Distributions and Bayes Factors, was shown in a previous post. Here we use the log predictive density to calculate information criteria and leave-one-out cross-validation scores (adjusting for the number of parameters in the model). The examples and algorithms used in this post are covered in more detail in:

[1] Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2013). Bayesian Data Analysis, Third Edition (Texts in Statistical Science).

[2] Albert, J., Gentleman, R., Parmigiani, G., & Hornik, K. (2009). Bayesian Computation with R. http://doi.org/10.1007/978-0-387-92298-0

[3] James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning: with applications in R. Springer texts in statistics (Vol. XIV). http://doi.org/10.1007/978-1-4614-7138-7

The predictive accuracy of a model can be used to evaluate it and compare it against other candidate models. Accuracy can be defined using some scoring function, e.g. mean squared error, log-likelihood, log predictive density etc. This ties into the concept of deviance and information criteria, which put different models (with different numbers of parameters) on a common scale, with adjustments for the number of parameters being estimated.

The data to calculate this predictive accuracy can be recycled (within sample) or external (out of sample i.e. new data) or using cross-validation (Ref 3 has a good chapter on cross-validation).

Example Data and Model

The data used in this example is from the ‘bread and peace’ model, and the paper can be seen here, while the example is from Ref [1], and the R code can be found in the github repository.

We first fit a linear model using the lm function in R.
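The call producing the summary below is along these lines (the object fit.lm is referred to later when comparing AIC values):

fit.lm = lm(VoteShare ~ IncomeGrowth, data = dfData)
summary(fit.lm)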

Call:
lm(formula = VoteShare ~ IncomeGrowth, data = dfData)

Residuals:
    Min      1Q  Median      3Q     Max 
-8.8370 -0.3748  0.1379  1.7745  5.6291 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept)   45.8027     1.7297  26.480 1.07e-12 ***
IncomeGrowth   3.1809     0.7226   4.402 0.000715 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.816 on 13 degrees of freedom
Multiple R-squared:  0.5985,    Adjusted R-squared:  0.5676 
F-statistic: 19.38 on 1 and 13 DF,  p-value: 0.0007149

There are three parameters being estimated here, the two coefficients and the standard deviation for the likelihood function.

## write the model with the log posterior function
# use the likelihood with non-informative priors
mylogpost = function(theta, data){
  sigma = exp(theta['sigma'])
  betas = theta[-1]
  x = data$pred
  y = data$resp
  
  # get the predicted or fitted value 
  mModMatrix = model.matrix(y ~ x)
  mCoef = matrix(betas, nrow = length(betas))
  iFitted = mModMatrix %*% mCoef # using identity link
  ## likelihood function
  llik = sum(dnorm(y, iFitted, sigma, log=T))
  ## prior, non informative
  lprior = 1
  lpost = llik + lprior
  return(lpost)
}

We define this regression model using our custom function mylogpost, tracking 3 parameters (sigma and the 2 betas). We also use non-informative priors, so our results should be similar to those from the lm model.
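The objects start and lData passed to laplace below are not shown; a sketch of that setup, following the same pattern used later for the leave-one-out fits (laplace comes from the LearnBayes package):

library(LearnBayes)
start = c('sigma' = log(sd(dfData$VoteShare)), 'beta0' = 0, 'beta1' = 0)
lData = list(pred = dfData$IncomeGrowth, resp = dfData$VoteShare)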

> fit = laplace(mylogpost, start, lData)
> fit
$mode
    sigma     beta0     beta1 
 1.268109 45.806382  3.180065 

$var
              sigma         beta0         beta1
sigma  3.335900e-02  0.0002461944 -5.857976e-05
beta0  2.461944e-04  2.5949267841 -8.909553e-01
beta1 -5.857976e-05 -0.8909553156  4.528745e-01

$int
[1] -38.72532

$converge
[1] TRUE

> se = sqrt(diag(fit$var))
> se
    sigma     beta0     beta1 
0.1826445 1.6108776 0.6729595 
## coef and sd
> fit$mode[-1]
    beta0     beta1 
45.806382  3.180065 
> exp(fit$mode[1])
   sigma 
3.554124 
> 

We take a sample from the posterior distribution of these three parameters using a Sampling Importance Resampling (SIR) algorithm, with a multivariate t proposal density and mylogpost as the target density (I will write a post about these sampling methods at some point in the future). You could try the approach suggested in the post on parameter estimation, but if the parameters are sampled independently we cannot account for the covariance between them – which is avoided by sampling from a multivariate proposal.

## parameters for the multivariate t density
tpar = list(m=fit$mode, var=fit$var*2, df=4)
## get a sample directly and using sir (sampling importance resampling with a t proposal density)
s = sir(mylogpost, tpar, 1000, lData)
#s = rmt(10000, fit$mode, S = fit$var)
sigSample = s[,'sigma']
beta0Sample = s[,'beta0']
beta1Sample = s[,'beta1']

We use the SIR algorithm from [2], but you can write your own; it is straightforward once you know how. tpar is a list of parameters for the multivariate t proposal density. Just to check how our sampler performs, we also use Stan to generate samples for these parameters using MCMC. The figure below shows the samples generated by SIR as histograms, with lines showing the samples generated by Stan via MCMC (they agree pretty well). The fourth plot in the bottom right panel shows the data and the regression line.

[Figure: histograms of the SIR samples for the three parameters with the Stan MCMC samples overlaid as lines; the fourth panel shows the data and fitted regression line]

## first write the log predictive density function
lpd = function(beta0, beta1, sig){
  sigma = exp(sig)
  x = lData$pred
  y = lData$resp
  # get the predicted or fitted value 
  mModMatrix = model.matrix(y ~ x)
  mCoef = matrix(c(beta0, beta1), nrow = 2)
  iFitted = mModMatrix %*% mCoef # using identity link
  ## likelihood function with posterior theta
  return(sum(dnorm(y, iFitted, sigma, log=T)))
}

The scoring function for the model check is the log predictive density, which is basically the log-likelihood function, but using the posterior/fitted parameters.

“In the d-dimensional normal distribution, the logarithm of the density function is a constant plus a \chi_{d}^{2} distribution divided by -2” [1] – the posterior distribution of the log predictive density has a maximum of ~ -40.3 and a mean of ~ -41.9, a difference of about 1.66, which is close to the value d/2 = 3/2 (1.5) predicted from theory, d here being 3 (the number of parameters being estimated). [See page 171 of Ref 1 for details.]
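As a rough sketch of how these summaries might be obtained from the SIR samples drawn above (illustrative, not necessarily the exact code behind the numbers quoted):

## log predictive density evaluated at each posterior draw
iLpd = sapply(1:1000, function(x) lpd(beta0Sample[x], beta1Sample[x], sigSample[x]))
max(iLpd)               ## maximum, ~ -40.3
mean(iLpd)              ## mean, ~ -41.9
max(iLpd) - mean(iLpd)  ## difference, close to d/2 = 1.5 for d = 3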

Predictive Scores

We show three predictive scores, along with adjustments for the number of parameters being estimated, plus leave-one-out cross-validation. The details for these scores can be found in References [1] and [3], and elsewhere in the literature. Lower values of these scores imply higher predictive accuracy.

Akaike Information Criterion (AIC)

This is the log predictive density evaluated at the point estimate, adjusted for the number of parameters (3 in our case) and converted to the deviance scale, i.e. AIC = -2 (lpd - k).

> iAIC = (lpd(fit$mode['beta0'], fit$mode['beta1'], fit$mode['sigma']) - 3) * -2
> AIC(fit.lm)
[1] 86.59985
> iAIC
[1] 86.59986

The AIC calculated using the lpd function and the AIC obtained from R's AIC function on the linear model fit object are practically identical.

Deviance Information Criterion (DIC)

This is a somewhat Bayesian version of the AIC, and uses the posterior mean (instead of the maximum likelihood estimate) for θ, and a data-based bias correction.

> ## pDIC are the effective number of parameters
> ## 2 * [lpd(Expectation(theta)) - Expectation(lpd(Sample of thetas from posterior))]
> # calculate E(lpd(theta))
> eLPD = mean(sapply(1:1000, function(x) lpd(beta0Sample[x], beta1Sample[x], sigSample[x])))
> # calculate lpd(E(theta)) and pDIC
> pDIC = 2 *(lpd(fit$mode['beta0'], fit$mode['beta1'], fit$mode['sigma']) - eLPD)
> iDIC = (lpd(fit$mode['beta0'], fit$mode['beta1'], fit$mode['sigma']) - pDIC) * -2
> pDIC
[1] 3.424087
> iDIC
[1] 87.44804

The effective number of parameters will change slightly depending on the simulation size, but both numbers are pretty close (i.e. AIC and DIC).

Watanabe-Akaike or widely available information criterion (WAIC)

This is a more fully Bayesian approach and can be seen as an approximation to cross-validation. We show one version of this approach here. First we calculate the log pointwise predictive density, which is slightly different from the log predictive density calculated earlier.

## log pointwise predictive density
lppd = function(beta0, beta1, sig, index){
  sigma = exp(sig)
  x = lData$pred[index]
  y = lData$resp[index]
  # get the predicted or fitted value 
  mModMatrix = model.matrix(y ~ x)
  mCoef = matrix(c(beta0, beta1), nrow = 2, byrow = T)
  iFitted = mModMatrix %*% mCoef # using identity link
  ## likelihood function with posterior theta
  return(mean(dnorm(y, iFitted, sigma, log=F)))
}
# log pointwise predictive probability of the observed data under the fitted
# model is
ilppd = sum(log(sapply(1:15, function(x) lppd(beta0Sample, beta1Sample, sigSample, x))))

The above function is slightly different from the lpd function: it works on one data point at a time (see the index argument), and it averages the predictive density over the posterior samples before the log is taken.

> ## effective numbers of parameters pWAIC1
> pWAIC1 = 2 * (ilppd - eLPD)
> 
> iWAIC = -2 * (ilppd - pWAIC1)
> pWAIC1
[1] 2.245018
> iWAIC
[1] 86.26897

The effective number of parameters is about 2.2 instead of 3. This is because the effective number of parameters in a model is also a function of the data and can be considered a random variable – hence in more complex models it is not straightforward to just count the number of parameters.

Leave-one-out cross-validation

Cross-validation can be computationally expensive and awkward in structured models, but simply put, the data are partitioned into training and test sets. The parameters are estimated on the training set while the fit is evaluated on the test set.

## leave one out cross validation
# we need to fit the model 15 times each time removing one data point
lFits = lapply(1:15, function(x){
  start = c('sigma' = log(sd(dfData$VoteShare[-x])), 'beta0'=0, 'beta1'=0)
  lData = list(pred=dfData$IncomeGrowth[-x], resp=dfData$VoteShare[-x])
  laplace(mylogpost, start, lData)
})

# lets take samples for posterior theta
lSamples = lapply(1:15, function(x){
  fit = lFits[[x]]
  tpar = list(m=fit$mode, var=fit$var*2, df=4)
  ## get a sample directly and using sir (sampling importance resampling with a t proposal density)
  lData = list(pred=dfData$IncomeGrowth[-x], resp=dfData$VoteShare[-x])
  s = sir(mylogpost, tpar, 1000, lData)
  return(s)
})

## calculate lppd on the hold out set
iLppd = sapply(1:15, function(x){
  s = lSamples[[x]]
  log(lppd(s[,'beta0'], s[,'beta1'], s[,'sigma'], x))
})
iLppd = sum(iLppd)
# calculate lppd on all data
# calculated earlier ilppd
# effective number of parameters
iParamCV = ilppd - iLppd
# Given that this model includes two linear coefficients and a variance parameter, these
# all look like reasonable estimates of effective number of parameters. [Gelman 2013]
# on deviance scale iCV is
iCV = -2 * iLppd
# after correction for number of parameters
iCV = -2 * (iLppd - iParamCV)

Generally, predictive accuracy measures should be used in parallel with posterior predictive checks – starting with simpler models and expanding them. This also applies to comparing nested models: the full model implements all the meaningful parameters, while restricted models have some parameters set to 0 or forced to be equal. In that case, the added complexity of the full model should be justified by the improvement in fit, its interpretability, and the additional difficulty of fitting it.

Model Checking: Posterior Predictive Checks

Once a model is fitted and its parameters estimated, we look at how well the model explains the data and which aspects of the data generation process are not captured by the model. Most of the material covered in this post follows the examples from:

[1] Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2013). Bayesian Data Analysis, Third Edition (Texts in Statistical Science).

The example data I use here comes from Simon Newcomb's experiment to measure the speed of light. The R source code and the data are in the github repository. We import the data and define a model function, i.e. a log posterior function, to model the data.

## define a log posterior function
lp = function(theta, data){
  # we define the sigma on a log scale as optimizers work better
  # if scale parameters are well behaved
  s = exp(theta[2])
  m = theta[1]
  d = data$vector # observed data vector
  log.lik = sum(dnorm(d, m, s, log=T))
  log.prior = 1
  log.post = log.lik + log.prior
  return(log.post)
}

Here we assume the data can be modelled using a normal distribution and estimate the parameters.
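The objects start and lData are not shown above; a minimal sketch of that setup, assuming the measurements are held in a numeric vector ivTime (the same names appear later when fitting the other distributions):

lData = list(vector = ivTime)
start = c('mu' = mean(ivTime), 'sigma' = log(sd(ivTime)))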

## try the laplace function from LearnBayes
fit = laplace(lp, start, lData)
se = sqrt(diag(fit$var)) # standard errors from the curvature at the mode

## lets take a sample from this
sigSample.op = rnorm(1000, fit$mode['sigma'], se['sigma'])
muSample.op = rnorm(1000, mean(lData$vector), exp(sigSample.op)/sqrt(length(lData$vector)))

#### the posterior interval reported on P 67 Gelman [2013] is
## y.bar ± 1.997*s/sqrt(66) = [23.6, 28.8]
## compare with our intervals
fit$mode['mu']-1.96*se['mu']; fit$mode['mu']+1.96*se['mu']
##      mu 
## 23.63911 
##      mu 
## 28.78365 
quantile(muSample.op, c(0.025, 0.975))
##     2.5%    97.5% 
## 23.61960 28.83061 

The figure below suggests that the normal model may not be appropriate for this data, as there is an extreme outlier measurement around -44.

[Figure: histogram of Newcomb's speed of light measurements]

There are various approaches to model checking e.g.:

  • Using external validation: use data to fit the model (i.e. estimate parameters), make predictions about the new or future data using the model and parameters, collect new real data and compare with the predictions.
  • Recycle existing data: if new data is not available. This also involves other modifications to model fitting, e.g. cross-validation techniques. I will talk about this further in a future post.

How do we compare the predictions with the real data? We define some metrics to compare against, which are called Test Quantities.

Prediction of New/Future Data (Posterior Predictive Distribution)

New data or future data can be generated using simulation, from the model and estimated parameters.

## POSTERIOR PREDICTIVE CHECKING
## observed data should look plausible under the posterior predictive distribution
## Draw simulated values from the joint posterior of Yrep and compare to Yobs and look for systematic differences

## Gelman [2013] P 144 -
## sample 66 values, 20 times, each time drawing a fresh draw of sd and mean from the joint posterior
mDraws = matrix(NA, nrow = 66, ncol=20)

for (i in 1:20){
  p = sample(1:1000, size = 1)
  s = exp(sigSample.op[p])
  m = muSample.op[p]
  mDraws[,i] = rnorm(66, m, s)
}

In the example shown above we sample the standard deviation parameter and then the mean parameter (conditional on the standard deviation – notice we use the same index p, so the pair comes from the joint posterior). We draw 66 values and repeat the process 20 times; this is akin to repeating the experiment 20 times, each time taking 66 measurements. The figure below shows some of the histograms of the 20 simulations and the original data.

[Figures: histograms of the original data and of some of the 20 posterior predictive replications]

It appears that the replications do not include the outlier measurement at -44, which suggests that our current model does not capture that part of the data generation process. Graphical checks can be a bit tedious in high-throughput settings, so we can also use quantitative checks by defining test quantities.

Test Quantities

A test quantity is a function of the original or replicated data, with some optional additional parameters. We can evaluate the discrepancy between the test quantities calculated on the original data and on the replicated data by calculating a PValue, and watch out for extreme tail-area PValues. In the current case we define 5 test quantities, representing: Variance, Mean, Symmetry, Minimum and Maximum.

# The procedure for carrying out a posterior predictive model check requires specifying a test
# quantity, T (y) or T (y, θ), and an appropriate predictive distribution for the replications
# y rep [Gelman 2008]
## variance
T1_var = function(Y) return(var(Y))

## is the model adequate except for the extreme tails
T1_symmetry = function(Y, th){
  Yq = quantile(Y, c(0.90, 0.10))
  return(abs(Yq[1]-th) - abs(Yq[2]-th))
} 

## min quantity
T1_min = function(Y){
  return(min(Y))
} 

## max quantity
T1_max = function(Y){
  return(max(Y))
} 

## mean quantity
T1_mean = function(Y){
  return(mean(Y))
} 

## calculate bayesian p-value for this test statistic
getPValue = function(Trep, Tobs){
  left = sum(Trep <= Tobs)/length(Trep)
  right = sum(Trep >= Tobs)/length(Trep)
  return(min(left, right))
}

Extreme PValues (typically less than 0.01) for the test quantities suggest areas of failure for the model which can be addressed by expanding the model, or ignored if appropriate for the applied question at hand. “The relevant goal is not to answer the question, ‘Do the data come from the assumed model?’ (to which the answer is almost always no), but to quantify the discrepancies between data and model, and assess whether they could have arisen by chance, under the model’s own assumptions.” [1].
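A sketch of how these functions might be applied, assuming mDraws now holds a larger set of replicated data sets in its columns than the 20 used for plotting (the object names follow the code above):

## PValue for the variance test quantity
Trep.var = apply(mDraws, 2, T1_var)
getPValue(Trep.var, T1_var(lData$vector))
## PValue for the minimum test quantity
Trep.min = apply(mDraws, 2, T1_min)
getPValue(Trep.min, T1_min(lData$vector))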

Under the normal distribution model, the 5 test quantities show the following PValues.

> mChecks[,'Normal']
Variance Symmetry      Max      Min     Mean 
    0.47     0.16     0.01     0.00     0.47 

The tests for the Min and Max quantities fail, suggesting that this model does not perform well in the tails. Perhaps a heavy-tailed distribution, like a t distribution with low degrees of freedom or a contaminated normal distribution, will be more useful.

## define a second log posterior function for mixture with contaminated normal distribution
lp2 = function(theta, data){
  # we define the sigma on a log scale as optimizers work better
  # if scale parameters are well behaved
  s = exp(theta[2])
  m = theta[1]
  mix = 0.9
  cont = theta[3]
  d = data$vector # observed data vector
  log.lik = sum(log(dnorm(d, m, s) * mix + dnorm(d, m, s*cont) * (1-mix)))
  log.prior = 1
  log.post = log.lik + log.prior
  return(log.post)
}

# sanity check for function
# choose a starting value
start = c('mu'=mean(ivTime), 'sigma'=log(sd(ivTime)), 'cont'=1)
lp2(start, lData)

## try the laplace function from LearnBayes
fit2 = laplace(lp2, start, lData)
fit2
se2 = sqrt(diag(fit2$var))

sigSample.op = rnorm(1000, fit2$mode['sigma'], se2['sigma'])
muSample.op = rnorm(1000, mean(lData$vector), exp(sigSample.op)/sqrt(length(lData$vector)))

A contaminated normal distribution is a mixture distribution where we add 2 more parameters to the model: the mixing probability and the contamination parameter (which scales the standard deviation of the second component). We fix the mixing probability at 0.9 and track the contamination parameter.
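As a sketch of how a replicated data set might be drawn under the contaminated normal model, assuming the contamination parameter is sampled around the laplace fit (illustrative, not necessarily the exact code used for the checks below):

contSample = rnorm(1000, fit2$mode['cont'], se2['cont'])
p = sample(1:1000, size = 1)
s = exp(sigSample.op[p]); m = muSample.op[p]; cont = contSample[p]
ind = rbinom(66, size = 1, prob = 0.9)   # mixing probability fixed at 0.9
yrep = ifelse(ind == 1, rnorm(66, m, s), rnorm(66, m, s*cont))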

Here are a couple of web links if you are more interested in contaminated normal distributions (Link 1, Link 2).

The third distribution we try is a T distribution with small degrees of freedom. We use a slightly different parameterization of the t-distribution which is nicely explained in another blog here.

lp3 = function(theta, data){
  # function to use to use scale parameter
  ## see here https://grollchristian.wordpress.com/2013/04/30/students-t-location-scale/
  dt_ls = function(x, df, mu, a) 1/a * dt((x - mu)/a, df)
  ## likelihood function
  lf = function(dat, pred){
    return(log(dt_ls(dat, nu, pred, sigma)))
  }
  nu = exp(theta['nu']) ## normality parameter for t distribution
  sigma = exp(theta['sigma']) # scale parameter for t distribution
  m = theta[1]
  d = data$vector # observed data vector
  if (nu < 1) return(-Inf) # nu has already been exponentiated above
  log.lik = sum(lf(d, m))
  log.prior = 1
  log.post = log.lik + log.prior
  return(log.post)
}

# sanity check for function
# choose a starting value
start = c('mu'=mean(ivTime), 'sigma'=log(sd(ivTime)), 'nu'=log(2))
lp3(start, lData)

op = optim(start, lp3, control = list(fnscale = -1), data=lData)
op$par
exp(op$par[2:3])

## try the laplace function from LearnBayes
fit3 = laplace(lp3, start, lData)
fit3
se3 = sqrt(diag(fit3$var))

sigSample.op = rnorm(1000, fit3$mode['sigma'], se3['sigma'])
nuSample = exp(rnorm(1000, fit3$mode['nu'], se3['nu']))
# threshold the sample values to above or equal to 1
nuSample[nuSample < 1] = 1

## generate random samples from alternative t-distribution parameterization
## see https://grollchristian.wordpress.com/2013/04/30/students-t-location-scale/
rt_ls <- function(n, df, mu, a) rt(n,df)*a + mu
muSample.op = rnorm(1000, mean(lData$vector), exp(sigSample.op)/sqrt(length(lData$vector)))

Using the 3 distributions, we calculate the test quantities and the PValues for these quantities.

> round(mChecks, 2)
         Normal NormalCont    T
Variance   0.47       0.34 0.34
Symmetry   0.16       0.10 0.14
Max        0.01       0.06 0.18
Min        0.00       0.21 0.14
Mean       0.47       0.48 0.47

We can see that the normal model does not fit the data well in terms of outliers, while the contaminated normal and t distributions both fit the data better.

The figure below shows the histogram of the data and a smattering of density lines from 200 posterior predictive replications of the data. We can see that the normal model does not fit well in the tails, while the other two show a reasonable fit – at least any discrepancy can be explained by sampling variation according to the PValues.

[Figure: histogram of the data with density lines from 200 posterior predictive replications under the three models]

While researching the material for this post, I also came across a discussion of this data set and of extreme outliers in data sets at Andrew Gelman's blog.

Mixture Distributions and Model Comparison

The following text and code snippets show examples from two books on Bayesian Data Analysis:

[1] Kruschke, J. K. (2014). Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan, Second Edition. http://doi.org/10.1016/B978-0-12-405888-0.09999-2

[2] Albert, J., Gentleman, R., Parmigiani, G., & Hornik, K. (2009). Bayesian Computation with R. http://doi.org/10.1007/978-0-387-92298-0

The examples I refer to here can be seen on page 50 of Ref [2] (mixtures of conjugate priors) and in Chapter 10 of Ref [1] (Model Comparison ..). I try to connect the two ideas presented there, where two independent models are either modelled together as a mixture distribution, or modelled separately and then compared.

Problem:

We can imagine this problem as two machines or two coins, each with a parameter theta (θ) – in the case of coins, the proportion of heads. Coin A is biased to produce more tails, and Coin B is biased to produce more heads. We are presented with some data, i.e. a number of heads and a number of tails.

  1. Which coin generated this data?
  2. What is the distribution of the parameter theta before and after observing the data?

The R code for this analysis and the figures can be found here.

Following the example from Ref [2], let's define the two distributions:

### define two models that explain the data
## model m1
## it is a beta distribution with a weight on the left side
g1 = function(theta) dbeta(theta, 6, 14)

## model m2
## it is a beta distribution with a weight on the right side
g2 = function(theta) dbeta(theta, 14, 6)

We assign a weight to each model, i.e. a prior probability for each model, and this can be used to generate the mixture distribution.

## define an array that represents number of models in our parameter space
## each index has a prior weight/probability of being selected
## this can be thought of coming from a categorical distribution 
mix.prior = c(50, 50)
mix.prior = mix.prior/sum(mix.prior)

## define a joint prior space for these 2 models
## (mix.prior[1] AND g1) OR (mix.prior[2] AND g2)
g.mix = function(theta) mix.prior[1]*g1(theta) + mix.prior[2]*g2(theta)

This function represents the prior distribution or prior beliefs about how the parameter theta for each coin is distributed.

[Figures: prior densities for the two models and the mixture prior]

The observed data show 7 heads and 3 tails. We now define the sampling distribution/likelihood function and the posterior distribution for each model. The posterior distribution represents how the parameter theta changes for each model after observing the data. I use the word conjugate below, which I will explain at a later time – for the time being, think of it as a simple analytical way to calculate the posterior.

## we flip a coin, perform a binary experiment 
## our prior beliefs are that the experiment tends to follow two extreme models
## we will either see more 1s or more 0s, depending on which model (m1 or m2) is 
## more representative of the data
data = c(success=7, fail=3)

## the posterior is proportional to Likelihood * Prior
## P(Data | Theta, Model) X P(Theta, Model)

## define the Likelihood function, both models share the same likelihood functional form
lik = function(data, theta) dbinom(data['success'], sum(data), theta)

## since this is a conjugate analysis, as prior is beta distributed and likelihood is binomial
## the posterior can be derived analytically 
## using a similar analogy as before when describing priors for each model
## we can define a posterior for each of the 2 models
g1.post = function(theta, data) dbeta(theta, 6+data['success'], 14+data['fail'])
g2.post = function(theta, data) dbeta(theta, 14+data['success'], 6+data['fail'])

We assigned a prior mixture probability in the variable mix.prior; however, we also need to calculate the posterior mixing probability, which requires some algebra that I have tried to show in the code below. For details, read the first few pages of Chapter 10 in Ref [1].

In order to calculate the mixture probability, i.e. the probability of each model (m1 or m2) after we see the data:

P(Model[1], Data) = P(Data, Model[1])
P(Model[1] | Data) X P(Data) = P(Data | Model[1]) X P(Model[1])
P(Model[1] | Data) = P(Data | Model[1]) X P(Model[1]) / P(Data) … Equation (1)

where P(Data) can be expanded using summation across all models
P(Data) = Sum for each Model P(Data, Model[i])

We need to calculate a few things:
P(Data | Model[i]) – this is the prior predictive distribution for the data given the selected model. So for each model we solve this equation:

P(Data, Theta) = P(Data | Theta) X P(Theta) = P(Theta | Data) X P(Data)

P(Data) = P(Data | Theta) X P(Theta) / P(Theta | Data)
In words this means: Prior predictive distribution for Data = Likelihood X Prior / Posterior

## Prior predictive probability for Data = Likelihood X Prior / Posterior
## for model 1
data.prior.g1 = function(data, theta){
  ret = lik(data, theta) * g1(theta) / g1.post(theta, data)
  return(ret)
}
## for model 2
data.prior.g2 = function(data, theta){
  ret = lik(data, theta) * g2(theta) / g2.post(theta, data)
  return(ret)
}

## P(Data | Model) for each model should be the same for any value of theta
## you can use that as a sanity check
th = seq(0.01, 0.99, by=0.01)
data.g1 = mean(data.prior.g1(data, th))
data.g2 = mean(data.prior.g2(data, th))

We now have everything needed to solve Equation (1).

## P(Data) = (P(Data | Model[1]) X P(Model[1])) + (P(Data | Model[2]) X P(Model[2])
## we can calculate the posterior for Model 1
## P(Model[1] | Data) = P(Data | Model[1]) X P(Model[1]) / P(Data)
mix.post = data.g1 * mix.prior[1] / (data.g1 * mix.prior[1] + data.g2 * mix.prior[2])
mix.post = c(mix.post, 1-mix.post)

## (mix.post[1] AND g1.post) OR (mix.post[2] AND g2.post)
g.mix.post = function(theta, data) mix.post[1]*g1.post(theta, data) + mix.post[2]*g2.post(theta, data)

The figures below show the posterior distribution of theta for each model of the data, i.e. the tail-biased and head-biased coins, along with the mixture posterior.

[Figures: posterior densities of theta for the two models and the mixture posterior]

You can see that the posterior distribution of theta for model one has shifted more to the right (towards higher values of theta), as the data has influenced this shift. The mixture distribution has also shifted most of its weight towards the right with a very slight bump around 0.4.

We can now approximate this distribution of theta on a grid, and take a random sample from this distribution.

## approximate the posterior theta on the grid
p.th = g.mix.post(th, data)
p.th = p.th/sum(p.th)
th.sam = sample(th, 10000, replace = T, prob=p.th)
th.sam = th.sam + runif(10000, 0.01/2 * -1, 0.01/2)
summary(th.sam);
##   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## 0.2041  0.6224  0.6922  0.6748  0.7512  0.9342 

I will add another post on grid approximations at a later point. For the time being, this is a useful method to generate random samples from a distribution and this sample can be used to calculate various statistics or parameters of the distribution.

We also approximate this using another technique: we first define a function that takes one or more parameters (defining the parameter space) and the data as input, and returns the value of the log posterior at that position in the parameter space. The code below defines one such function for each model and uses an optimiser to explore the surface of each function and find its maximum.

library(LearnBayes)
library(car)
logit.inv = function(p) {exp(p)/(exp(p)+1) }

mylogpost_m1 = function(theta, data){
  ## theta contains parameters we wish to track
  th = logit.inv(theta['theta'])
  suc = data['success']
  fail = data['fail']
  
  # define likelihood function
  lf = function(s, f, t) return(dbinom(s, s+f, t, log=T))
  
  # calculate log posterior
  val = lf(suc, fail, th) + dbeta(th, 6, 14, log = T)
  return(val)
}

mylogpost_m2 = function(theta, data){
  ## theta contains parameters we wish to track
  th = logit.inv(theta['theta'])
  suc = data['success']
  fail = data['fail']
  
  # define likelihood function
  lf = function(s, f, t) return(dbinom(s, s+f, t, log=T))
  
  # calculate log posterior
  val = lf(suc, fail, th) + dbeta(th, 14, 6, log = T) 
  return(val)
}

# choose sensible starting values for the optimiser
start = c(theta=logit(0.5))
data = c(success=7, fail=3)

fit_m1 = laplace(mylogpost_m1, start, data)
fit_m2 = laplace(mylogpost_m2, start, data)

logit.inv(fit_m1$mode)
##   theta 
##0.428616 
logit.inv(fit_m2$mode)
##   theta 
##0.7142694 

The values for theta representing the maximum of each function are approximately similar to those calculated using grid approximation.

[Figure: histogram of the grid-approximation sample, the mixture posterior density, and the maxima of the two models]

The figure above shows the histogram of the random sample generated using the grid approximation; the line represents the mixture posterior g.mix.post, and the red and green lines mark the maximum points for the 2 models calculated using the optimiser.

Bayes Factor:

Simply put, the Bayes Factor (BF) indicates how much the prior odds on the two models change after seeing the data, i.e. how much evidence there is for model 1 vs model 2. As usual, the technical details can be found in References [1] and [2].

## Bayes factor for the ratios of posterior predictive distribution
## of the 2 models
## P(Data | Model[1]) / P(Data | Model[2])
BF = data.g1 / data.g2
## OR posterior odds for the models / prior odds for the 2 models
mix.post[1]/mix.post[2]/(mix.prior[1]/mix.prior[2])

The BF calculated analytically (for model 1 relative to model 2) is about 0.1. A common convention for a discrete decision about the models is that there is significant evidence for model 1 when BF > 3, and for model 2 when BF < 1/3 (i.e. 0.33). In our case the BF is about 0.1, which shows significant support for model 2, suggesting that the data were generated from model 2 rather than model 1.

The BF can also be approximated using the optimisation approach shown earlier: the function laplace returns an approximation to the log of P(Data | Model) in the slot $int, so the BF is the exponential of the difference.

> round(exp(fit_m1$int - fit_m2$int), 2)
## BF  
##   0.09 

This is very close to the BF of 0.1 calculated analytically. Both analyses suggest that the data was generated by Coin or Machine B.