Biomarker Discovery: a machine learning workflow applied to Tuberculosis diagnosis.

In our previous work in tuberculosis diagnostics research, we developed workflows to detect candidate biomarkers using Omics data from large cohort studies. Discovered in the 19th century, tuberculosis (TB) is still a serious public health problem, and it is estimated that one third of the world's population is infected with Mycobacterium tuberculosis (mTB). A... Continue Reading →


Large Effect Sizes: Missing information produces misleading results.

Recently I came across the problem of a suspiciously large difference between the averages of two groups while analysing some Omics data. An article dealing with similar issues can be seen here. The data distribution is shown below in Figure 1 (FYI: the fold change was around 6, which is very large for this kind... Continue Reading →
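
One way such a distortion can arise is sketched below in a toy Python example with made-up numbers (this is not the data or the analysis from the post): if measurements that fall below the detection limit in the control group are imputed with a small constant, the control mean is deflated and the apparent fold change is inflated.

    import numpy as np

    rng = np.random.default_rng(1)
    # Simulated intensities; the true fold change is only about 1.5.
    control = rng.lognormal(mean=1.0, sigma=0.6, size=30)
    treated = rng.lognormal(mean=1.4, sigma=0.6, size=30)

    # Pretend values below a detection limit were not measured in the controls
    # and were later imputed with a small constant (a common, risky shortcut).
    limit = np.quantile(control, 0.5)
    imputed_control = np.where(control < limit, limit / 10, control)

    print("fold change, complete data   :", treated.mean() / control.mean())
    print("fold change, after imputation:", treated.mean() / imputed_control.mean())

Running this shows the imputed comparison reporting a much larger fold change than the complete data support, which is the kind of suspiciously large effect size the post is about.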

High Dimensional Data & Hierarchical Regression

In a high-throughput experiment, one performs measurements on thousands of variables (e.g. genes or proteins) across two or more experimental conditions. In bioinformatics, we come across such data generated using technologies like microarrays, next-generation sequencing, mass spectrometry, etc. Data from these technologies have their own pre-processing, normalisation, and quality-check steps (see here and here... Continue Reading →
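
As a rough illustration of the hierarchical idea applied to this kind of data, here is a minimal Python sketch on simulated values (not the analysis described in the post): per-gene effect estimates are shrunk towards a common mean, with the noisiest estimates pulled in most strongly.

    import numpy as np

    rng = np.random.default_rng(0)
    n_genes, n_per_group = 5000, 4

    # Simulated log-expression values: most genes null, the first 100 truly shifted.
    control = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
    treated = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
    treated[:100] += 1.5

    effect = treated.mean(axis=1) - control.mean(axis=1)      # per-gene estimate
    se2 = (treated.var(axis=1, ddof=1) + control.var(axis=1, ddof=1)) / n_per_group

    # Partial pooling: assume gene-level effects share a common distribution,
    # so each noisy per-gene estimate is pulled towards the overall mean.
    tau2 = max(effect.var(ddof=1) - se2.mean(), 1e-6)         # between-gene variance
    prior_mean = effect.mean()
    shrunk = (effect / se2 + prior_mean / tau2) / (1.0 / se2 + 1.0 / tau2)

    print("raw vs shrunk estimate, gene 0:", effect[0], shrunk[0])

With only four samples per group, the raw per-gene estimates are noisy; borrowing strength across genes in this way stabilises them, which is the motivation for hierarchical regression on high-dimensional data.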

Gene-set enrichment analysis with topGO (part-1)

Introduction: Data analysis performed on high-throughput experiments usually produces lists of significantly perturbed genes (RNA-Seq) or other entities that can be mapped to genes, like genetic variants (whole-genome sequencing) or transcription factor binding sites (ChIP-Seq). The long lists of genes (often in the order of hundreds or thousands) produced as the outcome of... Continue Reading →
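
The statistic behind a classic over-representation test can be sketched in a few lines. topGO itself is an R/Bioconductor package, so the Python snippet below (with invented counts) only illustrates the hypergeometric test that underlies the simplest enrichment methods, not topGO's own algorithms.

    from scipy.stats import hypergeom

    universe = 18000       # annotated genes in the background ("gene universe")
    in_term = 250          # background genes annotated to the GO term of interest
    significant = 400      # genes flagged as significantly perturbed
    overlap = 20           # significant genes that fall inside the GO term

    # P(X >= overlap) when drawing `significant` genes from the universe
    # without replacement.
    p_value = hypergeom.sf(overlap - 1, universe, in_term, significant)
    print(f"enrichment p-value: {p_value:.3g}")

Here roughly 5 to 6 overlapping genes would be expected by chance, so observing 20 gives a very small p-value, i.e. the GO term is over-represented in the significant list.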
