Merge pull request #4 from pughlab/Yong-patch-1
Yong patch 1
yzeng-lol authored May 31, 2023
2 parents aba40c8 + cedba32 commit fa70560
Showing 14 changed files with 3,820 additions and 25 deletions.
10 changes: 6 additions & 4 deletions README.md
@@ -1,10 +1,10 @@
# MEDIPIPE: (cf)MeDIP-seq Data QC and Analysis Pipeline (v1.0.0)
# MEDIPIPE: (cf)MeDIP-seq Data QC and Analysis Pipeline (v1.1.0)

## Introduction
MEDIPIPE is designed for automated end-to-end quality control (QC) and analysis of (cf)MeDIP-seq data. The pipeline takes raw FASTQ files all the way through QC, alignment, methylation quantification and aggregation. It was developed by [Yong Zeng](mailto:[email protected]) based on prior work by Wenbin Ye and [Eric Zhao](https://github.com/pughlab/cfMeDIP-seq-analysis-pipeline).

### Features
- **Portability**: MEDIPIPE was developed with [Snakemake](https://snakemake.readthedocs.io/en/stable/index.html), which will automatically deploy the execution environments. It can also be performed across different cluster engines (e.g. SLURM) or stand-alone machines.
- **Portability**: MEDIPIPE was developed with [Snakemake](https://snakemake.readthedocs.io/en/stable/index.html), which will automatically deploy the execution environments. Users can also pin the software versions to be installed in the YAML files. The pipeline can run across different cluster engines (e.g. SLURM) as well as on stand-alone machines.
- **Flexibility**: MEDIPIPE handles single-end or paired-end reads, with or without spike-in/UMI sequences. It can be applied to individual samples as well as to aggregation across multiple samples from large-scale profiling.

### Citation
@@ -16,7 +16,7 @@ This schematic diagram shows you how pipeline works:


## Installation
1) Make sure that you have a Conda-based Python3 distribution(e.g.,the [Miniconda](https://docs.conda.io/en/latest/miniconda.html)). The Miniconda3-py38_23.3.1-0-Linux-x86_64.sh for Linux is prefered to avoid potential cnflicts. The installation of [Mamba](https://github.com/mamba-org/mamba) is also recommended:
1) Make sure that you have a Conda-based Python3 distribution (e.g., [Miniconda](https://docs.conda.io/en/latest/miniconda.html)). The Miniconda3-py38_23.3.1-0-Linux-x86_64.sh installer for Linux is preferred to avoid potential conflicts. Installing [Mamba](https://github.com/mamba-org/mamba) is also recommended:

```bash
$ conda install -n base -c conda-forge mamba
```

@@ -39,7 +39,7 @@
> **IMPORTANT**: EXTRA ENVIRONMENTS WILL BE INSTALLED, MAKE SURE YOU STILL HAVE INTERNET ACCESS.
* **Step 1:** Prepare the reference, sample FASTQ and aggregation files according to the [templates](./test/README.md).
* **Step 2:** Specify the input configuration file by following the instructions [here](./test/README.md).
* **NOTE:** For testing run, you can simply run the SED command below to specify files in Step1,2.The toy dataset will fail to fit sigmoid model for MEDStrand due to low read depth, but you can still find other results in ./test/Res.
* **NOTE:** For a test run, you can simply run the SED command below to fill in the files from Steps 1 and 2. This test dataset is intended for a quick installation of the extra environments; it will fail to fit the sigmoid model for MeDEStrand due to low read depth, but you can still find the other results in ./test/Res.

```bash
$ sed -i 's,/path/to,'"$PWD"',g' ./test/*template.*
```

@@ -50,6 +50,8 @@

```bash
--use-conda --cores 4 -p
```
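As an illustration of what the SED one-liner above does, here is a hedged demo on a throwaway file (the file and its contents are made up, not part of the repository):

```bash
# Hypothetical demo of the placeholder substitution performed above:
# replace the literal prefix /path/to with the current working directory.
printf 'reference: /path/to/test/Reference/hg38.tsv\n' > demo_template.yaml
sed -i 's,/path/to,'"$PWD"',g' demo_template.yaml
cat demo_template.yaml    # now points at $PWD/test/Reference/hg38.tsv
```

Using commas as the `s` command delimiters avoids having to escape the slashes inside the paths.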

* **Step 3 (Optional):** You can perform a full-scale test run with another [toy dataset](./test/README.md) by following Steps 1 and 2.

5) Run on HPCs

You can also submit this pipeline to clusters with the template ./workflow/sbatch_Snakefile_template.sh. This template targets SLURM, but it can be adapted to other resource management systems. More details about cluster configuration can be found [here](https://snakemake.readthedocs.io/en/stable/executing/cluster.html).
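A minimal sketch of such a submission script follows. The resource values, partition defaults and file paths are placeholders, not taken from the repository; ./workflow/sbatch_Snakefile_template.sh remains the authoritative template:

```bash
# Hypothetical SLURM wrapper; adjust resources and paths for your cluster.
cat > run_medipipe.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=MEDIPIPE
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=24:00:00

snakemake --snakefile ./workflow/Snakefile \
          --configfile ./test/config_template.yaml \
          --use-conda --cores "$SLURM_CPUS_PER_TASK" -p
EOF
echo "submit with: sbatch run_medipipe.sbatch"
```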
6 changes: 3 additions & 3 deletions assets/README.md
@@ -10,15 +10,15 @@ You can download reference genome, pre-build BWA index and annotated regions (e.

```bash
## eg: ./assets/Reference/download_reference.sh hg38 /your/genome/data/path/hg38
$ ./workflow/download_reference.sh [GENOME] [DEST_DIR]
$ ./assets/Reference/download_reference.sh [GENOME] [DEST_DIR]
```
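Internally the script saves each resource under its URL basename, using resumable downloads (`wget -c`). For example, with one of the ENCODE URLs the script itself uses (the wget line is commented out here so the sketch runs without network access):

```bash
# URL copied from download_reference.sh; basename strips everything up to
# the final slash, giving the local file name.
PROM="https://www.encodeproject.org/files/ENCFF140XLU/@@download/ENCFF140XLU.bed.gz"
basename "${PROM}"    # -> ENCFF140XLU.bed.gz
# wget -c -O "$(basename ${PROM})" "${PROM}"
```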

* Build reference genomes index
If your sequencing libraries come with spike-ins, you can build a new aligner index after combining the spike-in genome with the human genome. The new index information will be appended to the corresponding manifest file.

```bash
## eg: ./build_reference_index.sh hg38 ./data/BAC_F19K16_F24B22.fa hg38_BAC_F19K16_F24B22 /your/genome/data/path/hg38
$ ./workflow/build_reference_index.sh [GENOME] [SPIKEIN_FA] [INDEX_PREFIX] [DEST_DIR]
## eg: ./assets/Reference/build_reference_index.sh hg38 ./data/BAC_F19K16_F24B22.fa hg38_BAC_F19K16_F24B22 /your/genome/data/path/hg38
$ ./assets/Reference/build_reference_index.sh [GENOME] [SPIKEIN_FA] [INDEX_PREFIX] [DEST_DIR]
```
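The core of that script is simply concatenating the two FASTA files and indexing the result. A minimal sketch with tiny made-up FASTA files (the real `bwa index` call is left commented out, since it needs the full genomes):

```bash
# Illustrative only: stand-ins for the human and spike-in genomes.
printf '>demo_chr1\nACGTACGT\n' > genome_demo.fa
printf '>demo_spikein\nTTAACCGG\n' > spikein_demo.fa
cat genome_demo.fa spikein_demo.fa > combined_demo.fa
grep -c '^>' combined_demo.fa    # -> 2 sequences in the merged FASTA
# bwa index -a bwtsw combined_demo.fa   # as done in build_reference_index.sh
```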


7 changes: 3 additions & 4 deletions assets/Reference/build_reference_index.sh
@@ -1,9 +1,8 @@
#!/bin/bash

## This script builds a BWA index for the combined human and spike-in genomes.
## "Usage: ./build_reference_index.sh [GENOME] [SPIKEIN_FA] [INDEX_PREFIX] [DEST_DIR]"
## "Example: ./build_reference_index.sh hg38 ./data/BAC_F19K16_F24B22.fa hg38_BAC_F19K16_F24B22 /your/genome/data/path/hg38"
## "Example: ./build_reference_index.sh hg19 ./data/BAC_F19K16_F24B22.fa hg19_BAC_F19K16_F24B22 /cluster/projects/tcge/DB/cfmedip-seq-pepeline/hg19"
## "Usage: ./assets/Reference/build_reference_index.sh [GENOME] [SPIKEIN_FA] [INDEX_PREFIX] [DEST_DIR]"
## "Example: ./assets/Reference/build_reference_index.sh hg38 ./assets/Spike-in_genomes/BAC_F19K16_F24B22.fa hg38_BAC_F19K16_F24B22 /your/genome/data/path/hg38"

#################
## initialization
@@ -33,7 +32,7 @@ cat ${hg_fa} ${SPIKEIN_FA} > ${DEST_DIR}/${INDEX_PREFIX}.fa
cd ${DEST_DIR}

echo "=== Building bwa index for merged genomes ..."
conda activate tcge-cfmedip-seq-pipeline
conda activate MEDIPIPE

bwa index -a bwtsw ${INDEX_PREFIX}.fa

16 changes: 8 additions & 8 deletions assets/Reference/download_reference.sh
@@ -7,8 +7,8 @@
## "A TSV file [DEST_DIR]/[GENOME].tsv will be generated. Use it for pipeline."
## "Supported genomes: hg19 and hg38"; Arabidopsis TAIR10 genome will be downloaded,
## as well as building bwa index for merged genomes.
## "Usage: ./download_build_reference.sh [GENOME] [DEST_DIR]"
## "Example: ./download_build_reference.sh hg38 /your/genome/data/path/hg38"
## "Usage: ./assets/Reference/download_build_reference.sh [GENOME] [DEST_DIR]"
## "Example: ./assets/Reference/download_build_reference.sh hg38 /your/genome/data/path/hg38"


#################
@@ -47,7 +47,7 @@ if [[ "${GENOME}" == "hg38" ]]; then
PROM="https://www.encodeproject.org/files/ENCFF140XLU/@@download/ENCFF140XLU.bed.gz"
ENH="https://www.encodeproject.org/files/ENCFF212UAV/@@download/ENCFF212UAV.bed.gz"

REF_FA_TAIR10="https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas"
#REF_FA_TAIR10="https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas"

fi

@@ -68,7 +68,7 @@ if [[ "${GENOME}" == "hg19" ]]; then
ENH="https://storage.googleapis.com/encode-pipeline-genome-data/hg19/ataqc/reg2map_honeybadger2_dnase_enh_p2.bed.gz"

## Arabidopsis
REF_FA_TAIR10="https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas"
# REF_FA_TAIR10="https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas"

fi

@@ -84,12 +84,12 @@ wget -c -O $(basename ${REF_MITO_FA}) ${REF_MITO_FA}
wget -c -O $(basename ${CHRSZ}) ${CHRSZ}

## TAIR10
wget -c -O $(basename ${REF_FA_TAIR10}) ${REF_FA_TAIR10}
sed -i -e 's/^>/>tair10_chr/' TAIR10_chr_all.fas
gzip TAIR10_chr_all.fas
#wget -c -O $(basename ${REF_FA_TAIR10}) ${REF_FA_TAIR10}
#sed -i -e 's/^>/>tair10_chr/' TAIR10_chr_all.fas
#gzip TAIR10_chr_all.fas

## combine genomes
cat $(basename ${REF_FA}) TAIR10_chr_all.fas.gz > ${GENOME}_tair10.fa.gz
# cat $(basename ${REF_FA}) TAIR10_chr_all.fas.gz > ${GENOME}_tair10.fa.gz

## annotated regions
wget -N -c ${BLACKLIST}
2 changes: 1 addition & 1 deletion conda_env.yaml
@@ -7,6 +7,6 @@ dependencies:
- snakemake=6.12.3
- bwa=0.7.17
- trim-galore=0.6.7
- samtools=1.9
- samtools=1.17
- graphviz=2.40.1
- picard=2.26.6
Binary file modified figures/MEDIPIPE_Flowchart.png
Binary file added test/Fastq/toy_sample1_R1.fastq.gz
Binary file added test/Fastq/toy_sample1_R2.fastq.gz
Binary file added test/Fastq/toy_sample2_R1.fastq.gz
Binary file added test/Fastq/toy_sample2_R2.fastq.gz
11 changes: 7 additions & 4 deletions test/README.md
@@ -31,9 +31,12 @@
A config YAML file specifies all the paths of input files and the parameters necessary for successfully running this pipeline, including the path to the genome reference files. Please make sure to specify absolute rather than relative paths in your config files. More detailed instructions can be found in the [config_template](./test/config_template.yaml).
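To illustrate the absolute-path requirement, here is a hedged sketch; the key names below are hypothetical, invented for illustration only, since the real schema is defined in [config_template](./test/config_template.yaml):

```bash
# Hypothetical config fragment; every value is an absolute path built from $PWD.
# (Key names are made up; consult config_template.yaml for the real ones.)
cat > demo_config.yaml <<EOF
sample_sheet: $PWD/test/sample_template.tsv
reference: $PWD/test/Reference/hg38.tsv
EOF
# sanity check: fail if any value still starts with a relative ./ or ../
! grep -E ': \.\.?/' demo_config.yaml
```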


## Test dataset
## Test datasets

The current toy cfMeDIP-seq data is randomly derived from the two BACs ([F19K16](https://www.arabidopsis.org/servlets/TairObject?type=assembly_unit&id=362) and [F24B22](https://www.arabidopsis.org/servlet/TairObject?type=AssemblyUnit&name=F24B22)) Arabidopsis by [Min Han](https://github.com/mhanbioinfo/make_toy_fastqs)
There are two randomly generated cfMeDIP-seq test datasets; more details can be found [here](https://github.com/mhanbioinfo/make_toy_fastqs).

`Reference/`: Genome reference and BWA index (Athaliana.BAC.F19K16.F24B22 )
`Fastq/` : Randomly generated paired-end FastQ reads
`Reference/`: Genome reference and BWA index (Athaliana.BAC.F19K16.F24B22) used for installing the extra environments.

`Fastq/` : Randomly generated paired-end FASTQ reads. Samples A and B were derived from two Athaliana BACs; toy samples 1 and 2 incorporate both human and BAC sequences with UMI barcodes.

`Res/` : The outcomes of the test run (samples A and B) will be deposited here. In addition, the aggregated QC reports for the 2 toy samples and 163 brain cancer samples from this [study](https://www.nature.com/articles/s41591-020-0932-2) are attached.
File renamed without changes.
3,791 changes: 3,791 additions & 0 deletions test/Res/aggr_qc_report_toy.html


2 changes: 1 addition & 1 deletion workflow/rules/extra_env/umi_tools.yaml
@@ -3,4 +3,4 @@ channels:
- bioconda
dependencies:
- umi_tools=1.0.1
- samtools=1.9
- samtools=1.17
