ENCODE ATAC-seq pipeline

Introduction

This is the ENCODE ATAC-seq pipeline, with some custom adaptations for our own lab. Note that this is NOT the original repository for the pipeline; please refer to the ENCODE repository for the original.

This pipeline is designed for automated end-to-end quality control and processing of ATAC-seq or DNase-seq data. It can be run on compute clusters with job submission engines or on standalone machines, and it makes inherent use of parallelized/distributed computing. Installation is also easy, as most dependencies are installed automatically. The pipeline can be run end-to-end, i.e. starting from raw FASTQ files all the way to peak calling and signal track generation, or it can be started from intermediate stages (e.g. alignment files). It supports single-end or paired-end ATAC-seq or DNase-seq data, with or without replicates. The pipeline produces formatted HTML reports that include quality control measures specifically designed for ATAC-seq and DNase-seq data, analysis of reproducibility, stringent and relaxed thresholding of peaks, and fold-enrichment and p-value signal tracks. It also supports detailed error reporting and easy resuming of runs. The pipeline has been tested on human, mouse and yeast ATAC-seq data and on human and mouse DNase-seq data.

The ATAC-seq pipeline specification is also the official pipeline specification of the Encyclopedia of DNA Elements (ENCODE) consortium. The ATAC-seq pipeline protocol definition is here. Some parts of the ATAC-seq pipeline were developed in collaboration with Jason Buenrostro, Alicia Schep and Will Greenleaf at Stanford.

Features

  • Flexibility: Support for Docker, Singularity and Conda.
  • Portability: Support for many cloud platforms (Google/DNAnexus) and cluster engines (SLURM/SGE/PBS).
  • Resumability: Resume a failed workflow from where it left off.
  • User-friendly HTML report: tabulated quality metrics including alignment/peak statistics and FRiP along with many useful plots (IDR/cross-correlation measures).
  • ATAqC: Annotation-based analysis including TSS enrichment and comparison to Roadmap DNase.
  • Genomes: Pre-built genome databases for GRCh38, hg19, mm10 and mm9, plus support for custom genomes.

Installation and tutorial

This pipeline supports many cloud platforms and cluster engines. It also supports Docker, Singularity and Conda to resolve complicated software dependencies. Tutorial-based instructions for each platform show how to run the pipeline, and there are special instructions for two major Stanford HPC servers (SCG4 and Sherlock).
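
As a rough illustration, the sketch below launches a local run through Cromwell from Python. The Cromwell jar name, the WDL filename (atac.wdl) and the input JSON path are assumptions for the example; substitute the exact commands from the tutorial for your platform.

```python
# A minimal sketch (not the official launch command) of starting a local run
# through Cromwell from Python. File names below are assumptions.
import subprocess

cromwell_jar = "cromwell-38.jar"       # assumed local copy of the Cromwell jar
workflow_wdl = "atac.wdl"              # assumed pipeline WDL filename
input_json = "examples/my_atac.json"   # the input JSON you prepared

# Equivalent to: java -jar cromwell-38.jar run atac.wdl -i examples/my_atac.json
subprocess.run(
    ["java", "-jar", cromwell_jar, "run", workflow_wdl, "-i", input_json],
    check=True,  # raise CalledProcessError if the launcher exits non-zero
)
```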

Input JSON file

Input JSON file specification
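
For illustration only, the sketch below writes a minimal input JSON for a single paired-end replicate. The key names (atac.pipeline_type, atac.genome_tsv, atac.fastqs_rep1_R1, ...) are assumptions made for the example; the input JSON file specification linked above is the authoritative reference for the real keys and their types.

```python
# A minimal sketch of building an input JSON for one paired-end replicate.
# Key names below are illustrative assumptions -- check the specification.
import json

inputs = {
    "atac.pipeline_type": "atac",                  # "atac" or "dnase" (assumed values)
    "atac.genome_tsv": "/path/to/hg38.tsv",        # pre-built genome database TSV
    "atac.paired_end": True,
    "atac.fastqs_rep1_R1": ["rep1_R1.fastq.gz"],   # read 1 FASTQ(s) for replicate 1
    "atac.fastqs_rep1_R2": ["rep1_R2.fastq.gz"],   # read 2 FASTQ(s) for replicate 1
    "atac.title": "Example ATAC-seq experiment",
}

with open("my_atac.json", "w") as fh:
    json.dump(inputs, fh, indent=2)
```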

Output directories

Output directory specification

Useful tools

There are several useful tools for post-processing the pipeline's outputs.

qc_jsons_to_tsv

This tool recursively finds and parses every qc.json file (the pipeline's final output) under a specified root directory. It generates a TSV file in which all quality metrics are tabulated in one row per experiment and replicate. The tool can also estimate the overall quality of a sample from a criteria-definition JSON file, which serves as a useful guideline for QC'ing experiments.
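
A simplified sketch of the same idea (not the bundled tool itself) might look like this: recursively collect qc.json files, flatten their nested metrics, and write the union of metric names as TSV columns. The root directory name is a placeholder.

```python
# Conceptual sketch of qc_jsons_to_tsv: find every qc.json under a root
# directory, flatten nested metrics into one row per file, and write a TSV.
# Use the bundled script for real QC reports.
import csv
import json
from pathlib import Path

def flatten(d, prefix=""):
    """Flatten nested dicts into {"section.metric": value} pairs."""
    flat = {}
    for key, value in d.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value  # numbers, strings and lists are written as-is
    return flat

rows = [flatten(json.loads(p.read_text())) for p in Path("out_root").rglob("qc.json")]
columns = sorted({c for row in rows for c in row})

with open("qc_summary.tsv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=columns, delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
```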

resumer

This tool parses the metadata JSON file from a previously failed workflow and generates a new input JSON file that restarts the pipeline from where it left off.
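
Conceptually, the tool does something like the sketch below, which assumes Cromwell's metadata format (a "calls" map with per-attempt "executionStatus" and "outputs"). The real mapping of finished outputs back to the pipeline's input keys is pipeline-specific and only hinted at here; use the bundled resumer for actual runs.

```python
# Conceptual sketch of resumer: read the metadata JSON of a failed run, keep
# the outputs of calls that finished, and merge them into a fresh input JSON
# so those stages are skipped on the next run.
import json

with open("metadata.json") as fh:          # Cromwell metadata from the failed run
    metadata = json.load(fh)

finished_outputs = {}
for call_name, attempts in metadata.get("calls", {}).items():
    for attempt in attempts:
        if attempt.get("executionStatus") == "Done":
            finished_outputs[call_name] = attempt.get("outputs", {})

with open("original_input.json") as fh:    # the input JSON of the failed run
    new_inputs = json.load(fh)

# In the real tool, finished outputs are translated into the pipeline's
# "start from intermediate files" input keys here; this line is illustrative only.
new_inputs["_finished_calls"] = sorted(finished_outputs)

with open("resumed_input.json", "w") as fh:
    json.dump(new_inputs, fh, indent=2)
```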

ENCODE downloader

This tool downloads any type of data (FASTQ, BAM, PEAK, ...) from the ENCODE portal. It also generates a metadata JSON file per experiment, which is very useful for building an input JSON file for the pipeline.
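
As a minimal sketch of the underlying idea, the snippet below fetches FASTQ download URLs for one experiment from the ENCODE portal's JSON API. The accession is a placeholder, and the field names used (files, href, file_format) follow the portal's JSON schema as assumed here; the downloader tool itself handles other file types and writes the per-experiment metadata JSON.

```python
# Minimal sketch of pulling FASTQ download URLs for one ENCODE experiment.
# The accession below is a placeholder.
import requests

PORTAL = "https://www.encodeproject.org"
accession = "ENCSR000XXX"  # placeholder experiment accession

resp = requests.get(
    f"{PORTAL}/experiments/{accession}/",
    params={"frame": "embedded", "format": "json"},
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
experiment = resp.json()

fastq_urls = [
    PORTAL + f["href"]                     # "href" is the file's download path
    for f in experiment.get("files", [])
    if f.get("file_format") == "fastq"
]
print("\n".join(fastq_urls))
```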