In our previous work in tuberculosis diagnostics research, we developed workflows to detect candidate biomarkers using omics data from large cohort studies. Although its causative agent was identified in the 19th century, tuberculosis (TB) remains a serious public health problem, and it is estimated that one third of the world's population is infected with *Mycobacterium tuberculosis* (mTB). A... Continue Reading →

# Large Effect Sizes: Missing information produces misleading results

Recently I came across a suspiciously large difference in the averages of two groups while analysing some omics data. An article dealing with similar issues can be seen here. The data distribution is shown below in Figure 1 (FYI: the fold change was around 6, which is very large for this kind... Continue Reading →
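The fold-change arithmetic behind that observation can be sketched as follows; the two groups and their values here are invented for illustration, not taken from the post:

```python
import numpy as np

# Invented expression values for one analyte in two groups (illustration only).
group_a = np.array([2.0, 2.5, 3.0, 2.2, 2.8])
group_b = np.array([12.0, 15.0, 14.0, 13.5, 12.5])

# Fold change compares group means; on the log2 scale a fold change of 6
# corresponds to ~2.6, an unusually large shift for omics data.
fold_change = group_b.mean() / group_a.mean()   # ~5.4 here
log2_fc = np.log2(fold_change)

# If low values in group_a were missing (e.g. below detection) and silently
# dropped, group_a.mean() would shift and the fold change would be
# misleadingly inflated or deflated, which is the kind of trap discussed.
```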

# High Dimensional Data & Hierarchical Regression

In a high-throughput experiment one measures thousands of variables (e.g. genes or proteins) across two or more experimental conditions. In bioinformatics, such data are generated using technologies like microarrays, next-generation sequencing and mass spectrometry. Data from these technologies have their own pre-processing, normalisation and quality checks (see here and here... Continue Reading →
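One way hierarchical thinking helps with such data is variance shrinkage across genes; below is a minimal sketch of limma-style moderated variances on simulated data (the prior degrees of freedom `d0` are fixed here by assumption, whereas limma estimates them from the data):

```python
import numpy as np

rng = np.random.default_rng(3)
n_genes, n_per_group = 1000, 3  # tiny per-gene sample sizes, as is typical
a = rng.normal(size=(n_genes, n_per_group))
b = rng.normal(size=(n_genes, n_per_group))

# Per-gene statistics: with n = 3 per group, the variance estimates are noisy.
diff = a.mean(axis=1) - b.mean(axis=1)
s2 = (a.var(axis=1, ddof=1) + b.var(axis=1, ddof=1)) / 2
df = 2 * (n_per_group - 1)

# Hierarchical idea: shrink each gene's variance toward the grand mean,
# borrowing strength across genes (limma-style moderation; d0 is an
# assumption here, not an estimate).
d0, s0 = 4.0, s2.mean()
s2_mod = (d0 * s0 + df * s2) / (d0 + df)
```

Shrinking the per-gene variances stabilises downstream test statistics, which is the practical payoff of the hierarchical view.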

# Logistic “Aggression”: binary classification problems

Binary problems, where the outcome can be either True or False, are very common in data analysis, from both an inference and a classification point of view. A previous post on binomial modelling deals with a similar problem, but this time we frame the problem from a regression, or generalized linear model (GLM), viewpoint. Previously we... Continue Reading →
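As a sketch of the GLM view, here is logistic regression fitted from scratch by gradient ascent on simulated data (an illustration under invented parameters, not the post's own code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(x, y, lr=0.1, n_iter=5000):
    """Fit a binomial GLM with logit link by gradient ascent on the log-likelihood."""
    X = np.column_stack([np.ones(len(x)), x])  # design matrix with intercept
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ beta)
        beta += lr * X.T @ (y - p) / len(y)  # score function of the Bernoulli GLM
    return beta

# Simulated data with true intercept -1 and slope 2.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = (rng.random(200) < sigmoid(-1 + 2 * x)).astype(float)
beta = fit_logistic(x, y)  # estimates should land near (-1, 2)
```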

# Gene-set enrichment analysis with topGO (part-1)

Introduction: Data analysis performed on high-throughput experiments usually produces lists of significantly perturbed genes (RNA-Seq) or other entities that can be mapped to genes, such as genetic variants (whole-genome sequencing) or transcription factor binding sites (ChIP-Seq). The long lists of genes (often in the order of hundreds or thousands) produced as the outcome of... Continue Reading →
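The core over-representation idea behind gene-set enrichment can be sketched with a hypergeometric tail probability; topGO itself is an R package that additionally exploits the GO graph structure, so this stdlib-Python sketch with made-up counts shows only the basic test:

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k): probability of seeing k or more gene-set members among n hits,
    drawing without replacement from N genes of which K are in the set."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Made-up counts: 2000 genes in the universe, 40 annotated to the term,
# 100 significant genes, 10 of them in the set (expected overlap is only 2).
p = hypergeom_sf(10, 2000, 40, 100)
```

A small p suggests the gene set is over-represented in the significant list; real tools add multiple-testing correction and, in topGO's case, the GO hierarchy.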

# Next Generation Sequencing Data Quality Checks

Analysing a variety of Next Generation Sequencing (NGS) data sets from different projects over the past years, we have developed a general workflow to assess data quality. It serves as a guideline and can be applied at various steps of the analysis, starting with checks on the raw FASTQ files. FASTQ Quality Checks: Generally the simplest tool to... Continue Reading →
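At the raw-FASTQ stage, the most basic check is decoding per-base quality scores; a minimal sketch assuming the common Phred+33 (Sanger) encoding and an invented record:

```python
# Invented FASTQ record (4 lines: name, sequence, separator, quality string).
record = ["@read1", "GATTACA", "+", "IIIIHH#"]
name, seq, _, qual = record

# Phred+33 (Sanger) encoding: quality = ASCII code minus 33.
# A score Q means the base-call error probability is 10 ** (-Q / 10).
phred = [ord(c) - 33 for c in qual]
mean_q = sum(phred) / len(phred)  # a crude per-read summary
```

Tools like FastQC aggregate exactly this kind of per-base information across millions of reads.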

# Hierarchical Models: A Binomial Model with Shrinkage

The material in this post comes from various sources, some of which can be found in: [1] Kruschke, J. K. (2014). Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan (2nd ed.). http://doi.org/10.1016/B978-0-12-405888-0.09999-2 [2] Gelman, A., Carlin, J. B., Stern,... Continue Reading →
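The shrinkage idea in such a hierarchical binomial model can be sketched with a Beta prior centred on the pooled rate; here `kappa` (the prior concentration) is fixed by assumption, whereas a full hierarchical model like those in Kruschke estimates it from the data:

```python
import numpy as np

# Invented counts: successes and trials for three groups of different sizes.
successes = np.array([1, 7, 4])
trials = np.array([20, 10, 12])

pooled = successes.sum() / trials.sum()  # overall rate, used as the prior mean

# Beta(a, b) prior centred on the pooled rate; kappa acts as prior
# "pseudo-trials" and is fixed here by assumption.
kappa = 10.0
a, b = pooled * kappa, (1 - pooled) * kappa

raw = successes / trials                      # unpooled per-group estimates
shrunk = (successes + a) / (trials + kappa)   # posterior means: pulled toward pooled
```

Groups with few trials are pulled hardest toward the pooled rate, which is the shrinkage behaviour the post explores with JAGS/Stan.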

# Pattern Recognition using PCA: Variables and their Geometric Relationships

Principal component analysis (PCA) is a commonly used technique in the multivariate statistics and pattern recognition literature. In this post I try to merge the geometric and algebraic interpretations of data as vectors in a vector space, and their relationship with PCA. The three major sources used in this blog are: [1] Thomas D. Wickens (1995). The... Continue Reading →
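The geometric picture (data points as vectors, PCA as a rotation onto the axes of greatest variance) can be sketched via the SVD of the centred data matrix; the data here are simulated for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated correlated 2-D data: each row is a point (vector) in the plane.
x = rng.normal(size=100)
data = np.column_stack([x, 2 * x + rng.normal(scale=0.5, size=100)])

# PCA via SVD of the centred data matrix: rows of vt are the principal
# axes (an orthonormal basis), and s**2 is proportional to the variance
# along each axis.
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)

explained = s**2 / (s**2).sum()  # fraction of variance per component
scores = centered @ vt.T         # coordinates of each point in the new basis
```

Because the data are strongly correlated, the first component captures almost all of the variance, and the scores along the two axes are uncorrelated by construction.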

# Methods of handling and working with missing/censored data (part-2)

Description: As discussed in my last blog post here, missing data in big data analysis cannot always be ignored; handling it requires a good understanding of the data and deliberate user decisions. In biology, missingness generally occurs when the data are subject to limits of detection or quantification (a censoring or truncation mechanism). These... Continue Reading →
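A minimal sketch of left-censoring at a limit of detection, using simulated lognormal data and the common (but naive) LOD/2 substitution; the values and LOD are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
true_vals = rng.lognormal(mean=1.0, sigma=0.5, size=1000)  # simulated analyte levels

# Left-censoring: values below the limit of detection (LOD) are not observed.
lod = 2.0  # invented LOD for illustration
censored = true_vals < lod

# A common (and naive) fix: substitute LOD / 2 for every censored value.
# This distorts the mean and variance of the distribution, which is why
# proper censored-data methods are worth the extra effort.
observed = true_vals.copy()
observed[censored] = lod / 2
```

With these parameters roughly a quarter of the values fall below the LOD, so the choice of substitution visibly changes any downstream summary statistic.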