Methods of handling and working with missing data (part 1)

Description

In biology, missing values are a common occurrence, for example in proteomics and metabolomics studies. They pose a real challenge for anyone who intends to perform an objective statistical analysis and avoid misleading conclusions. The leading causes of incompletely observed data are truncation and censoring, two terms that are often wrongly used interchangeably. You can refer to the post here that explains the difference between the two.

This blog post describes methods of handling missing data, including data cleaning and quality checking (part 1); another post will follow soon (part 2) to discuss potential follow-up analyses.

Data cleaning

If the missing data is nominal, the most common approach is to replace it with a different categorical value. Example: let’s say the feature of interest is hair colour, with values brown, blonde and red, but for some subjects this information is missing. In that case, one can replace the missing records with ‘not available’ or something similar. By doing this we create a 4th category, which will be treated as another dimension in any downstream statistical analysis.
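As a minimal sketch (the hair colour vector below is made up for illustration), this recoding can be done in R by adding an explicit factor level:

# Hypothetical hair colour variable with some missing entries
hair <- factor(c("brown", "blonde", NA, "red", NA, "brown"))

# Add 'not available' as an explicit 4th level and recode the NAs
levels(hair) <- c(levels(hair), "not available")
hair[is.na(hair)] <- "not available"
table(hair)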

On the other hand, if the missing data is on an ordinal or numeric scale, a more sophisticated approach is required. Dropping or ignoring the records containing missing values does not always work, as it assumes that the number of missing values is relatively small and that they are missing completely at random. This is often not the case, and dropping records can introduce substantial bias because information is simply lost. For instance, in gel-based proteomics the amount of missing data is considerable (~10-50%) and not random: it can be related to the staining procedure used or to abundances close to the limit of detection of the instrument (Pedreschi et al. 2008).
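For comparison, this is what dropping incomplete records (complete-case analysis) looks like in base R; the toy data frame dat below is made up for illustration:

# Toy data frame with missing values (for illustration only)
dat <- data.frame(x = c(1, NA, 3, 4), y = c(10, 20, NA, 40))

# Complete-case analysis: drop every row containing an NA
na.omit(dat)

# Fraction of rows lost by dropping incomplete records
1 - mean(complete.cases(dat))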

An ad hoc approach often used is to replace the missing data with a fixed value such as the mean (for normally distributed data) or the median (when the data is skewed) of the observed values. When a value is missing because it falls below the detection limit, a threshold or fixed value can be justifiable. This form of ‘data cleaning’ is useful but not always encouraged: because a single value replaces all the missing data, it artificially reduces the variance, which can weaken the apparent relationships with other variables.
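A minimal sketch of this fixed-value replacement, reusing the toy dat above:

# Replace the NAs in each column with that column's mean
# (use median() instead of mean() for skewed data)
dat_imp <- dat
dat_imp[] <- lapply(dat, function(x) { x[is.na(x)] <- mean(x, na.rm = TRUE); x })
dat_imp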

Approaches like NIPALS deal with the missing values directly during the multivariate analysis and are a more sensible way of handling randomly missing data. NIPALS is the acronym for Nonlinear Iterative Partial Least Squares; it was introduced for Principal Component Analysis (PCA), not for what we now know as Partial Least Squares regression. The method is widely used in chemometrics and proteomics and is tolerant to small amounts of missing data (up to 5-20%). It performs PCA using an iterative procedure of weighted regressions, where missing entries get null weights, so the missing data has no influence on the model.

Another, more sophisticated approach, often used when a large amount of data is incomplete, is to impute the missing values iteratively during the estimation of the model parameters. Several methods have been proposed for this, such as k-nearest neighbours, singular value decomposition and maximum likelihood estimation. R packages like ‘mice’, ‘Hmisc’ and ‘VIM’ implement some of these imputation algorithms. In the end, one must consider the structure of the data, and a compromise should be found between a sound statistical and a sound biological interpretation of the data.
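As an illustration of this kind of imputation, a multiple-imputation run with ‘mice’ might look like the sketch below, reusing the toy dat from above; m = 5 imputed datasets with predictive mean matching is the package default, and the seed is arbitrary:

library(mice)

# Create 5 imputed datasets using predictive mean matching
imp <- mice(dat, m = 5, method = "pmm", seed = 1)

# Extract the first completed dataset
complete(imp, 1)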

Quality check and exploratory analysis

Suppose you have a dataset with some missing data.

# Load the required R packages
library(plsdepot)
library(cluster)

# Here we create a small example dataframe
df <- data.frame(A = 1:10, B = 11:20, C = 1:10)
head(df)

##   A B C
## 1 1 11 1
## 2 2 12 2
## 3 3 13 3
## 4 4 14 4
## 5 5 15 5
## 6 6 16 6

# Then we add a few missing values (NA) at random positions in the dataframe
# (your NA positions will differ unless you set a seed, e.g. set.seed(1))
df_miss <- as.data.frame(lapply(df, function(cc)
  cc[sample(c(TRUE, NA), prob = c(0.85, 0.15), size = length(cc), replace = TRUE)]))
head(df_miss)

##    A B C
## 1 NA 11 1
## 2  2 12 2
## 3  3 13 NA
## 4 NA 14 NA
## 5  5 15 5
## 6  6 16 6

A) Principal Component Analysis (PCA)

A standard PCA to check the quality of the samples won’t work in this case, as the ‘prcomp’ function in R does not accept missing values. Instead, one can use the NIPALS algorithm to compute the PCA scores and loadings. In R, the library ‘plsdepot’ implements NIPALS.


#----Use NIPALS to calculate the PCA loadings and scores
pc_nipal <- nipals(df_miss, comps = 2, scaled = TRUE)

#----Plot PCA on rows/observations/samples
plot(pc_nipal, what = "observation", comps = c(1, 2), cex = 0.6,
     show.names = TRUE, col.labels = "red")
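If you prefer to build custom plots, the scores and loadings can also be inspected directly from the fitted object (check str(pc_nipal) for the exact component names in your version of ‘plsdepot’):

#----Inspect the NIPALS scores and loadings directly
head(pc_nipal$scores)
pc_nipal$loadings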

B) Circle of correlation plot

Variables are displayed using their correlations with the principal components.

#----Plot PCA on columns/variables
plot(pc_nipal, what = "variables", comps = c(1, 2), cex = 0.5, xlim = c(-50, 150),
     show.names = TRUE, offset = 0.1, col.arrows = "#FE992955",
     col.points = "#5592e3", col.axis = "black")

[Figure: circle of correlation plot for the variables]

In general, the plot represents the correlation between the variables/features:

  • The closer a variable appears on the perimeter of the circle, the better it is represented.
  • In addition, if two variables are highly correlated, they will appear near each other.
  • If two variables are negatively correlated, they will tend to appear in opposite extremes.
  • If two variables are uncorrelated, they will be orthogonal to each other.
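These visual impressions can be cross-checked numerically: base R’s cor() can estimate correlations from incomplete data by using only the pairwise complete observations.

#----Pairwise-complete correlations as a numerical check
cor(df_miss, use = "pairwise.complete.obs")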
C) Clustering

Similarly, to perform clustering without removing the rows where NAs are present, the Gower distance metric can be used. It is a dissimilarity/distance coefficient that handles missing data well and is implemented in the function ‘daisy’ in the R package ‘cluster’.

#----Compute all pairwise dissimilarities (distances); note the transpose,
#    so here it is the columns of the data set that are compared
diss <- daisy(t(df_miss), metric = "gower")

#----Compute agglomerative hierarchical clustering of the dissimilarities
#    (no 'metric' argument is needed: 'diss' is already a dissimilarity object)
distance_agnes <- agnes(diss, method = "complete")

# Optionally convert to a dendrogram object for further customisation
hcd <- as.dendrogram(as.hclust(distance_agnes))

plot(distance_agnes, which.plots = 2,
     main = "Dendrogram with daisy function (Gower metric)")

[Figure: dendrogram from agnes on the Gower dissimilarities]
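To turn the dendrogram into explicit cluster labels, the agnes tree can be converted to an hclust object and cut at a chosen number of clusters (k = 2 here is arbitrary):

#----Cut the tree into k clusters (k = 2 chosen arbitrarily)
cutree(as.hclust(distance_agnes), k = 2)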

Therefore, these methods are a good alternative for quality-checking data with missingness, instead of discarding data or introducing bias into your analysis.

The information about the R packages used can be found below.

# print the package versions used ---#
sessionInfo()

## R version 3.3.1 (2016-06-21)
## Platform: x86_64-apple-darwin13.4.0 (64-bit)
## Running under: OS X 10.12.5 (Sierra)
##
## locale:
## [1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] cluster_2.0.5 plsdepot_0.1.17
##
## loaded via a namespace (and not attached):
## [1] backports_1.0.4 magrittr_1.5 rprojroot_1.2 tools_3.3.1
## [5] htmltools_0.3.5 yaml_2.1.14 Rcpp_0.12.9 stringi_1.1.2
## [9] rmarkdown_1.5 knitr_1.16 stringr_1.1.0 digest_0.6.12
## [13] evaluate_0.10