
Demystifying Hi-C Data Normalization: A Guide for Genomic Researchers

Genomic research has come a long way since the draft of the human genome was first published over two decades ago, driven by the continuous advancements in next-generation sequencing (NGS) technologies. One of the most transformative methods to emerge from these developments is Hi-C, a library preparation technique that provides unique insights into the three-dimensional organization of the genome.

However, analyzing Hi-C data presents its own set of challenges, particularly regarding data normalization. We’ve created a comprehensive guide that delves into the intricacies of Hi-C data normalization, providing a deep understanding of its importance and of the various tools and methods available for this critical step in the analysis pipeline (see link below).

Understanding Normalization Approaches

Normalization approaches for Hi-C data can be categorized into two main types: explicit and implicit. Explicit approaches aim to directly account for individual biases, including GC content, fragmentation, mappability, and enzyme cut sites. They rely on the assumption that these biases are well understood and can be accurately accounted for. The two primary explicit normalization methods are:

  • Yaffe and Tanay’s Probabilistic Model: A pioneering probabilistic model designed to account for known biases.
  • HiCNorm: An algorithm built upon similar principles to Yaffe and Tanay’s model.
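The explicit idea can be illustrated with a small sketch. The code below is not Yaffe and Tanay’s or HiCNorm’s actual implementation; it simulates per-bin counts whose expectation depends log-linearly on two hypothetical covariates (GC fraction and mappability, both invented here for illustration), fits that bias model by least squares on log counts as a rough stand-in for the Poisson regression such methods use, and divides observed counts by the fitted expectation:

```python
import numpy as np

# Hypothetical per-bin covariates (simulated, not real Hi-C data):
# GC fraction and mappability for 200 genomic bins.
rng = np.random.default_rng(0)
n = 200
gc = rng.uniform(0.3, 0.7, n)
mapp = rng.uniform(0.5, 1.0, n)

# Simulate counts whose expectation depends log-linearly on the biases.
true_log_mu = 1.0 + 2.0 * gc + 1.5 * mapp
counts = rng.poisson(np.exp(true_log_mu))

# Fit the bias model by ordinary least squares on log(counts + 1),
# a simplified stand-in for HiCNorm-style Poisson regression.
X = np.column_stack([np.ones(n), gc, mapp])
beta, *_ = np.linalg.lstsq(X, np.log(counts + 1.0), rcond=None)

# Divide observed counts by their predicted bias to normalize.
expected = np.exp(X @ beta)
normalized = counts / expected
```

Once the covariate effects are estimated, the normalized signal is (approximately) free of the modeled biases; the quality of the correction depends entirely on how well the chosen covariates capture the true sources of bias.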

Implicit approaches, on the other hand, make use of the assumption of “equal loci visibility.” They assume that cumulative bias is captured within the sequencing depth of each bin of the contact matrix. Some common implicit methods include:

  • Sequential Component Normalization (SCN)
  • Iterative Correction and Eigenvector Decomposition (ICE)
  • Knight and Ruiz (KR) Method
  • ChromoR: Utilizes a Bayesian approach.
  • Binless: A relatively new algorithm offering a hybrid approach.
  • Vanilla Coverage (VC) and Square Root Supplement (VCSQ): Simpler implicit algorithms, but less widely used.
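The “equal loci visibility” assumption behind these implicit methods can be made concrete with a minimal sketch. The following is an illustrative, simplified ICE-style iterative correction (a Sinkhorn–Knopp-type balancing, not the reference implementation): it repeatedly rescales the rows and columns of a symmetric contact matrix until every bin carries roughly equal total counts.

```python
import numpy as np

def ice_balance(matrix, n_iter=100):
    """Simplified ICE-style iterative correction.

    Repeatedly divides the matrix by the outer product of its
    (mean-normalized) row sums until all bins have roughly equal
    coverage. Illustrative sketch only, not a reference implementation.
    Returns the balanced matrix and the accumulated per-bin biases.
    """
    m = np.asarray(matrix, dtype=float).copy()
    bias = np.ones(m.shape[0])
    for _ in range(n_iter):
        s = m.sum(axis=1)
        s[s == 0] = 1.0          # leave empty bins untouched
        s /= s.mean()            # normalize so total counts are preserved
        bias *= s
        m /= np.outer(s, s)      # symmetric row/column rescaling
    return m, bias

if __name__ == "__main__":
    A = np.array([[4.0, 2.0, 1.0],
                  [2.0, 6.0, 3.0],
                  [1.0, 3.0, 8.0]])
    balanced, bias = ice_balance(A)
    print(balanced.sum(axis=1))  # roughly equal row sums after balancing
```

Because the correction is iterated to convergence rather than applied once, cumulative bias is absorbed into the per-bin bias vector without any explicit model of its sources.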

The choice between explicit and implicit methods depends on the level of understanding of biases and the complexity of the genome being studied. Explicit methods require more user-defined parameters, making them suitable for less-studied organisms where biases may be poorly understood. Implicit methods, with their simplicity, are often preferred for well-studied genomes like human and mouse.

Visual Comparison of Normalization Results. Each contact matrix depicts the same 2 Mb region on chromosome 8 of a Micro-C library sequenced with 800 million read pairs. The matrix was subjected to different normalization approaches and plotted in R. The scale bar is held constant across the normalization approaches to better visualize their impact. The image clearly demonstrates the challenges of using coverage alone as a normalization strategy, whereas the more iterative approaches yield a clearer picture of chromatin interactions.
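The limitation of coverage-only normalization noted above can also be seen in code. Below is a minimal sketch of Vanilla Coverage (VC): each entry is divided by the product of its row and column coverage in a single pass, with no iteration.

```python
import numpy as np

def vanilla_coverage(matrix):
    """Single-pass Vanilla Coverage (VC) normalization.

    Divides each entry by the product of its row and column totals.
    Illustrative sketch only.
    """
    m = np.asarray(matrix, dtype=float)
    coverage = m.sum(axis=1)
    coverage[coverage == 0] = 1.0  # leave empty bins untouched
    return m / np.outer(coverage, coverage)
```

Because VC is not iterated, the row sums of the result are generally still unequal; residual bias remains, which is exactly what iterative methods such as ICE and KR go on to correct.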

Choosing the Right Approach

As the field of Hi-C data analysis continues to evolve, researchers often grapple with the choice of normalization method and pipeline. To date, no single “gold standard” method has emerged. Various studies have compared different algorithms, with Rao et al. opting for the KR method due to its computational efficiency. However, KR may falter with sparse contact matrices, in which case ICE, a robust balancing method, can be used. Overall, SCN, KR, and ICE strategies tend to perform similarly, with only minor differences at lower resolutions.

In practical terms, the choice between these approaches often depends on the tools and pipelines you are using, as many provide KR and ICE as common normalization methods. The field is rapidly advancing, and newer algorithms like Binless may influence the consensus in the future.

For a more detailed explanation of the various methods of data normalization for Hi-C data, we’ve created a white paper. Access it here.

© Copyright 2023 Cantata Bio, All Rights Reserved