TODO -- describe gitflow, require PRs...
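As a sketch of that intended workflow (assuming feature branches merged via pull requests, per the TODO above; the branch name is hypothetical):
git checkout -b feature/my-change
# ...edit files, then stage and commit...
git add -A
git commit -m "describe the change"
# push the branch and open a pull request on GitHub for review
git push -u origin feature/my-change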
Pre-commit can automatically format your code, check for spelling errors, etc. every time you commit.
Install pre-commit if you haven't already, then run pre-commit install to install the hooks specified in .pre-commit-config.yaml. Pre-commit will run the hooks every time you commit.
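For example, a minimal setup (assuming pip is available):
pip install pre-commit
pre-commit install
# optionally, run all hooks once against the entire repo
pre-commit run --all-files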
Increment the version number following semantic versioning1 in the VERSION file.
Keep the changelog up to date with all notable changes in CHANGELOG.md2.
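For example, a patch-level bug fix (hypothetical version numbers):
# bump VERSION from 1.0.0 to 1.0.1 (MAJOR.MINOR.PATCH per semver)
echo "1.0.1" > VERSION
# then record the fix under a new 1.0.1 heading in CHANGELOG.md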
If you use VS Code, installing the nf-core extension pack is recommended.
semantic versioning guidelines https://semver.org/ ↩
changelog guidelines: https://keepachangelog.com/en/1.1.0/ ↩
Should include a list of all contributors, including GitHub handles when appropriate. In addition, a statement of who contributed to the source code specifically, identified by initials. An example is included below.
TODO: populate this automagically similar to https://nf-co.re/contributors? or link to GitHub contributor page? could use gh action: https://github.com/lowlighter/metrics/blob/master/source/plugins/contributors/README.md
The following members contributed to the development of the CARLISLE pipeline:
SS contributed to generating the source code, and all members contributed to the main concepts and analysis.
**CH**rom**A**tin i**M**muno**P**recipit**A**tion sequencin**G** a**N**alysis pip**E**line
🚧 This project is under active development. It is not yet ready for production use. 🚧
TODO
You can run champagne from the command line. The CLI includes helper steps for execution on supported high-performance computing clusters, including Biowulf and FRCE.
Install the champagne CLI:
cd CHAMPAGNE
pip3 install .
Run the test dataset using the test profile:
champagne run -profile test,prod,singularity
or explicitly specify the output directory and input:
champagne run -profile prod,singularity --outdir results/test --input assets/samplesheet_test.csv
Launch a stub run to view the steps that will run without performing the full analysis.
champagne run -profile ci_stub -stub
You can run the nextflow pipeline directly by specifying this GitHub repo. You will need nextflow and either singularity or docker installed.
nextflow run CCBR/CHAMPAGNE -profile test,prod,singularity
You can specify a specific version, tag, or branch with -r:
nextflow run CCBR/CHAMPAGNE -r v1.0.0 -profile test,prod,singularity
Come across a bug? Open an issue and include a minimal reproducible example.
Have a question? Ask it in discussions.
Want to contribute to this project? Check out the contributing guidelines.
This repo was originally generated from the CCBR Nextflow Template. The template takes inspiration from nektool1 and the nf-core template. If you plan to contribute your pipeline to nf-core, don't use this template -- instead follow nf-core's instructions2.
Information on who the pipeline was developed for, and a statement if it's only been tested on Biowulf. For example:
It has been developed and tested solely on NIH HPC Biowulf.
Also include a workflow image to summarize the pipeline.
nektool https://github.com/beardymcjohnface/nektool ↩
instructions for nf-core pipelines https://nf-co.re/docs/contributing/tutorials/creating_with_nf_core ↩
Please remember to cite the tools that you use in your analysis.
This report was generated using MultiQC, version 1.15. MultiQC is a modular tool to aggregate results from bioinformatics analyses across many samples into a single report, and is published in Bioinformatics:
MultiQC: Summarize analysis results for multiple tools and samples in a single report
Philip Ewels, Måns Magnusson, Sverker Lundin and Max Käller
Bioinformatics (2016). doi: 10.1093/bioinformatics/btw354. PMID: 27312411
For more information about MultiQC, including videos and extensive documentation, please visit http://multiqc.info. You can report bugs, suggest improvements and find the source code for MultiQC on GitHub: https://github.com/ewels/MultiQC.
This report has been generated by CCBR/CHAMPAGNE. For information about how to interpret these results, please see the documentation.
Report generated on 2023-09-06, 17:28 EDT, based on data in /lscratch/7526425/nxf.FBQXPcOVYD. Pipeline command: nextflow run /vf/users/CCBR/projects/techDev/CHAMPAGNE/main.nf -c nextflow.config -profile test_mm10,singularity -resume
Sample Name | % Dups | % GC | M Seqs | M Reads Mapped | Frag Length | NSC | RSC
---|---|---|---|---|---|---|---
CTCF_ChIP_MEF_p20_1.aligned.filtered.dedup.bam.flagstat | | | | 32.9 | | |
CTCF_ChIP_MEF_p20_1.spp.out | | | | | 135 | 1.73 | 2.84
CTCF_ChIP_MEF_p20_1.trimmed.fastq.gz | 19.1% | 44% | 48.5 | | | |
CTCF_ChIP_MEF_p20_2.aligned.filtered.dedup.bam.flagstat | | | | 13.6 | | |
CTCF_ChIP_MEF_p20_2.spp.out | | | | | 110 | 3.39 | 2.45
CTCF_ChIP_MEF_p20_2.trimmed.fastq.gz | 23.5% | 51% | 36.6 | | | |
CTCF_ChIP_macrophage_p20_1.aligned.filtered.dedup.bam.flagstat | | | | 38.1 | | |
CTCF_ChIP_macrophage_p20_1.spp.out | | | | | 130 | 1.51 | 2.66
CTCF_ChIP_macrophage_p20_1.trimmed.fastq.gz | 19.3% | 45% | 59.8 | | | |
CTCF_ChIP_macrophage_p20_2.aligned.filtered.dedup.bam.flagstat | | | | 10.6 | | |
CTCF_ChIP_macrophage_p20_2.spp.out | | | | | 115 | 2.08 | 2.12
CTCF_ChIP_macrophage_p20_2.trimmed.fastq.gz | 9.3% | 45% | 22.3 | | | |
CTCF_ChIP_macrophage_p3_1.aligned.filtered.dedup.bam.flagstat | | | | 31.1 | | |
CTCF_ChIP_macrophage_p3_1.spp.out | | | | | 135 | 1.53 | 2.65
CTCF_ChIP_macrophage_p3_1.trimmed.fastq.gz | 17.2% | 43% | 50.2 | | | |
CTCF_ChIP_macrophage_p3_2.aligned.filtered.dedup.bam.flagstat | | | | 16.0 | | |
CTCF_ChIP_macrophage_p3_2.spp.out | | | | | 105 | 1.93 | 2.57
CTCF_ChIP_macrophage_p3_2.trimmed.fastq.gz | 26.0% | 51% | 44.4 | | | |
SRR3081748_1.fastq.gz | 20.2% | 45% | 60.4 | | | |
SRR3081749_1.fastq.gz | 10.0% | 45% | 22.7 | | | |
SRR3081750_1.fastq.gz | 17.6% | 44% | 50.5 | | | |
SRR3081751_1.fastq.gz | 36.2% | 51% | 51.4 | | | |
SRR3081752_1.fastq.gz | 19.4% | 44% | 48.7 | | | |
SRR3081753_1.fastq.gz | 24.5% | 51% | 36.9 | | | |
SRR3081772_1.fastq.gz | 14.0% | 39% | 27.6 | | | |
SRR3081773_1.fastq.gz | 16.9% | 39% | 38.2 | | | |
WCE_p20.aligned.filtered.dedup.bam.flagstat | | | | 27.6 | | |
WCE_p20.spp.out | | | | | 0 | 1.01 | 1.00
WCE_p20.trimmed.fastq.gz | 16.5% | 39% | 38.0 | | | |
WCE_p3.aligned.filtered.dedup.bam.flagstat | | | | 20.9 | | |
WCE_p3.spp.out | | | | | 145 | 1.01 | 1.10
WCE_p3.trimmed.fastq.gz | 13.6% | 39% | 27.5 | | | |
SampleName | NUniqMappedReads | NRF | PBC1 | PBC2 | FragmentLength | NSC | RSC | Qtag |
---|---|---|---|---|---|---|---|---|
CTCF_ChIP_MEF_p20_2 | 31099296 | 0.8 | 0.9 | 7.0 | 200.0 | 1.93 | 2.6 | 2.0 |
CTCF_ChIP_MEF_p20_1 | 15983324 | 0.8 | 0.8 | 4.4 | 200.0 | 2.08 | 2.1 | 2.0 |
CTCF_ChIP_macrophage_p20_2 | 38105036 | 0.9 | 0.9 | 12.9 | 200.0 | 1.53 | 2.7 | 2.0 |
CTCF_ChIP_macrophage_p20_1 | 32917124 | 0.8 | 0.9 | 7.9 | 200.0 | 3.39 | 2.4 | 2.0 |
CTCF_ChIP_macrophage_p3_1 | 10554895 | 0.9 | 0.9 | 8.2 | 200.0 | 1.73 | 2.8 | 2.0 |
WCE_p20 | 13607337 | 0.9 | 0.9 | 18.0 | 200.0 | 1.51 | 2.7 | 2.0 |
CTCF_ChIP_macrophage_p3_2 | 27568678 | 0.7 | 0.8 | 4.2 | 200.0 | 1.01 | 1.0 | 1.0 |
WCE_p3 | 20856684 | NA | NA | NA | 200.0 | 1.01 | 1.1 | 1.0 |
FastQC is a quality control tool for high throughput sequence data, written by Simon Andrews at the Babraham Institute in Cambridge.
Sequence counts for each sample. Duplicate read counts are an estimate only.
This plot shows the total number of reads, broken down into unique and duplicate if possible (only more recent versions of FastQC give duplicate info).
You can read more about duplicate calculation in the FastQC documentation. A small part has been copied here for convenience:
Only sequences which first appear in the first 100,000 sequences in each file are analysed. This should be enough to get a good impression for the duplication levels in the whole file. Each sequence is tracked to the end of the file to give a representative count of the overall duplication level.
The duplication detection requires an exact sequence match over the whole length of the sequence. Any reads over 75bp in length are truncated to 50bp for this analysis.
The mean quality value across each base position in the read.
To enable multiple samples to be plotted on the same graph, only the mean quality scores are plotted (unlike the box plots seen in FastQC reports).
Taken from the FastQC help:
The y-axis on the graph shows the quality scores. The higher the score, the better the base call. The background of the graph divides the y axis into very good quality calls (green), calls of reasonable quality (orange), and calls of poor quality (red). The quality of calls on most platforms will degrade as the run progresses, so it is common to see base calls falling into the orange area towards the end of a read.
The number of reads with average quality scores. Shows if a subset of reads has poor quality.
From the FastQC help:
The per sequence quality score report allows you to see if a subset of your sequences have universally low quality values. It is often the case that a subset of sequences will have universally poor quality, however these should represent only a small percentage of the total sequences.
The proportion of each base position for which each of the four normal DNA bases has been called.
To enable multiple samples to be shown in a single plot, the base composition data is shown as a heatmap. The colours represent the balance between the four bases: an even distribution should give an even muddy brown colour. Hover over the plot to see the percentage of the four bases under the cursor.
To see the data as a line plot, as in the original FastQC graph, click on a sample track.
From the FastQC help:
Per Base Sequence Content plots out the proportion of each base position in a file for which each of the four normal DNA bases has been called.
In a random library you would expect that there would be little to no difference between the different bases of a sequence run, so the lines in this plot should run parallel with each other. The relative amount of each base should reflect the overall amount of these bases in your genome, but in any case they should not be hugely imbalanced from each other.
It's worth noting that some types of library will always produce biased sequence composition, normally at the start of the read. Libraries produced by priming using random hexamers (including nearly all RNA-Seq libraries) and those which were fragmented using transposases inherit an intrinsic bias in the positions at which reads start. This bias does not concern an absolute sequence, but instead provides enrichment of a number of different K-mers at the 5' end of the reads. Whilst this is a true technical bias, it isn't something which can be corrected by trimming and in most cases doesn't seem to adversely affect the downstream analysis.
The average GC content of reads. A normal random library typically has a roughly normal distribution of GC content.
From the FastQC help:
This module measures the GC content across the whole length of each sequence in a file and compares it to a modelled normal distribution of GC content.
In a normal random library you would expect to see a roughly normal distribution of GC content where the central peak corresponds to the overall GC content of the underlying genome. Since we don't know the GC content of the genome, the modal GC content is calculated from the observed data and used to build a reference distribution.
An unusually shaped distribution could indicate a contaminated library or some other kind of biased subset. A normal distribution which is shifted indicates some systematic bias which is independent of base position. If there is a systematic bias which creates a shifted normal distribution then this won't be flagged as an error by the module, since it doesn't know what your genome's GC content should be.
The percentage of base calls at each position for which an N was called.
From the FastQC help:
If a sequencer is unable to make a base call with sufficient confidence then it will normally substitute an N rather than a conventional base call. This graph shows the percentage of base calls at each position for which an N was called.
It's not unusual to see a very low proportion of Ns appearing in a sequence, especially nearer the end of a sequence. However, if this proportion rises above a few percent it suggests that the analysis pipeline was unable to interpret the data well enough to make valid base calls.
The distribution of fragment sizes (read lengths) found. See the FastQC help.
The relative level of duplication found for every sequence.
From the FastQC Help:
In a diverse library most sequences will occur only once in the final set. A low level of duplication may indicate a very high level of coverage of the target sequence, but a high level of duplication is more likely to indicate some kind of enrichment bias (eg PCR over amplification). This graph shows the degree of duplication for every sequence in a library: the relative number of sequences with different degrees of duplication.
Only sequences which first appear in the first 100,000 sequences in each file are analysed. This should be enough to get a good impression for the duplication levels in the whole file. Each sequence is tracked to the end of the file to give a representative count of the overall duplication level.
The duplication detection requires an exact sequence match over the whole length of the sequence. Any reads over 75bp in length are truncated to 50bp for this analysis.
In a properly diverse library most sequences should fall into the far left of the plot in both the red and blue lines. A general level of enrichment, indicating broad oversequencing in the library, will tend to flatten the lines, lowering the low end and generally raising other categories. More specific enrichments of subsets, or the presence of low complexity contaminants, will tend to produce spikes towards the right of the plot.
The total amount of overrepresented sequences found in each library.
FastQC calculates and lists overrepresented sequences in FastQ files. It would not be possible to show this for all samples in a MultiQC report, so instead this plot shows the number of sequences categorized as overrepresented.
Sometimes, a single sequence may account for a large number of reads in a dataset. To show this, the bars are split into two: the first shows the overrepresented reads that come from the single most common sequence. The second shows the total count from all remaining overrepresented sequences.
From the FastQC Help:
A normal high-throughput library will contain a diverse set of sequences, with no individual sequence making up more than a tiny fraction of the whole. Finding that a single sequence is very overrepresented in the set either means that it is highly biologically significant, or indicates that the library is contaminated, or not as diverse as you expected.
FastQC lists all of the sequences which make up more than 0.1% of the total. To conserve memory, only sequences which appear in the first 100,000 sequences are tracked to the end of the file. It is therefore possible that a sequence which is overrepresented but doesn't appear at the start of the file for some reason could be missed by this module.
The cumulative percentage count of the proportion of your library which has seen each of the adapter sequences at each position.
Note that only samples with ≥ 0.1% adapter contamination are shown.
There may be several lines per sample, as one is shown for each adapter detected in the file.
From the FastQC Help:
The plot shows a cumulative percentage count of the proportion of your library which has seen each of the adapter sequences at each position. Once a sequence has been seen in a read it is counted as being present right through to the end of the read, so the percentages you see will only increase as the read length goes on.
Status for each FastQC section showing whether results seem entirely normal (green), slightly abnormal (orange) or very unusual (red).
FastQC assigns a status for each section of the report. These give a quick evaluation of whether the results of the analysis seem entirely normal (green), slightly abnormal (orange) or very unusual (red).
It is important to stress that although the analysis results appear to give a pass/fail result, these evaluations must be taken in the context of what you expect from your library. A 'normal' sample as far as FastQC is concerned is random and diverse. Some experiments may be expected to produce libraries which are biased in particular ways. You should therefore treat the summary evaluations as pointers to where you should concentrate your attention, and understand why your library may not look random and diverse.
Specific guidance on how to interpret the output of each module can be found in the relevant report section, or in the FastQC help.
In this heatmap, we summarise all of these into a single heatmap for a quick overview. Note that not all FastQC sections have plots in MultiQC reports, but all status checks are shown in this heatmap.
Samtools is a suite of programs for interacting with high-throughput sequencing data. DOI: 10.1093/bioinformatics/btp352.
This module parses the output from samtools flagstat. All numbers in millions.
The samtools idxstats tool counts the number of mapped reads per chromosome / contig. Chromosomes with < 0.1% of the total aligned reads are omitted from this plot.
deepTools is a suite of tools to process and analyze deep sequencing data. DOI: 10.1093/nar/gkw257.
Pairwise correlations of samples based on distribution of sequence reads
PCA plot with the top two principal components calculated based on genome-wide distribution of sequence reads
Signal fingerprint according to plotFingerprint
Various quality metrics returned by plotFingerprint
Accumulated view of the distribution of sequence reads related to the closest annotated gene. All annotated genes have been normalized to the same size.
This should set the stage for all of the pipeline requirements. Examples are listed below.
The CARLISLE GitHub repository is stored locally and is used for project deployment. Multiple projects can be deployed from this single location simultaneously, without conflict.
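For example, two independent projects can be deployed from the same local repository (a sketch; the output paths are placeholders):
bash ./data/CCBR_Pipeliner/Pipelines/CARLISLE/carlisle --runmode=init --workdir=/path/to/projectA
bash ./data/CCBR_Pipeliner/Pipelines/CARLISLE/carlisle --runmode=init --workdir=/path/to/projectB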
The CARLISLE pipeline begins with raw FASTQ files and performs trimming followed by alignment using BOWTIE2. Data is then normalized either through a user-specified species spike-in control (e.g., E. coli) or through the determined library size. Peaks are then called using MACS2, SEACR, and GoPEAKS, with various options selected by the user. Peaks are then annotated and summarized into reports. If designated, differential analysis is performed using DESEQ2. QC reports are also generated for each project using FASTQC and MULTIQC. Annotations are added using HOMER and ROSE. GSEA enrichment analysis predictions are added using CHIPENRICH.
The following are sub-commands used within CARLISLE:
CARLISLE has several dependencies listed below. These dependencies can be installed by a sysadmin. All dependencies will be automatically loaded if running from Biowulf.
CARLISLE has been exclusively tested on Biowulf HPC. Log in to the cluster's head node and move into the pipeline location.
# ssh into cluster's head node
ssh -Y $USER@biowulf.nih.gov
An interactive session should be started before performing any of the pipeline sub-commands, even if the pipeline is to be executed on the cluster.
# Grab an interactive node
sinteractive --time=12:00:00 --mem=8gb --cpus-per-task=4 --pty bash
This should include all pertinent information about output files, including extensions that differentiate files. An example is provided below.
The following directories are created under the WORKDIR/results directory:
├── alignment_stats
├── bam
├── peaks
│   ├── 0.05
│   │   ├── contrasts
│   │   │   ├── contrast_id1.dedup_status
│   │   │   └── contrast_id2.dedup_status
│   │   ├── gopeaks
│   │   │   ├── annotation
│   │   │   │   ├── go_enrichment
│   │   │   │   │   ├── contrast_id1.dedup_status.go_enrichment_tables
│   │   │   │   │   └── contrast_id2.dedup_status.go_enrichment_html_report
│   │   │   │   ├── homer
│   │   │   │   │   ├── replicate_id1_vs_control_id.dedup_status.gopeaks_broad.motifs
│   │   │   │   │   │   ├── homerResults
│   │   │   │   │   │   └── knownResults
│   │   │   │   │   ├── replicate_id1_vs_control_id.dedup_status.gopeaks_narrow.motifs
│   │   │   │   │   │   ├── homerResults
│   │   │   │   │   │   └── knownResults
│   │   │   │   │   ├── replicate_id2_vs_control_id.dedup_status.gopeaks_broad.motifs
│   │   │   │   │   │   ├── homerResults
│   │   │   │   │   │   └── knownResults
│   │   │   │   │   ├── replicate_id2_vs_control_id.dedup_status.gopeaks_narrow.motifs
│   │   │   │   │   │   ├── homerResults
│   │   │   │   │   │   └── knownResults
│   │   │   │   └── rose
│   │   │   │       ├── replicate_id1_vs_control_id.dedup_status.gopeaks_broad.12500
│   │   │   │       ├── replicate_id1_vs_control_id.dedup_status.gopeaks_narrow.12500
│   │   │   │       ├── replicate_id2_vs_control_id.dedup_status.dedup.gopeaks_broad.12500
│   │   │   │       ├── replicate_id2_vs_control_id.dedup_status.dedup.gopeaks_narrow.12500
│   │   │   └── peak_output
│   │   ├── macs2
│   │   │   ├── annotation
│   │   │   │   ├── go_enrichment
│   │   │   │   │   ├── contrast_id1.dedup_status.go_enrichment_tables
│   │   │   │   │   └── contrast_id2.dedup_status.go_enrichment_html_report
│   │   │   │   ├── homer
│   │   │   │   │   ├── replicate_id1_vs_control_id.dedup_status.macs2_narrow.motifs
│   │   │   │   │   │   ├── homerResults
│   │   │   │   │   │   └── knownResults
│   │   │   │   │   ├── replicate_id1_vs_control_id.dedup_status.macs2_broad.motifs
│   │   │   │   │   │   ├── homerResults
│   │   │   │   │   │   └── knownResults
│   │   │   │   │   ├── replicate_id2_vs_control_id.dedup_status.macs2_narrow.motifs
│   │   │   │   │   │   ├── homerResults
│   │   │   │   │   │   └── knownResults
│   │   │   │   │   ├── replicate_id2_vs_control_id.dedup_status.macs2_broad.motifs
│   │   │   │   │   │   ├── homerResults
│   │   │   │   │   │   └── knownResults
│   │   │   │   └── rose
│   │   │   │       ├── replicate_id1_vs_control_id.dedup_status.macs2_broad.12500
│   │   │   │       ├── replicate_id1_vs_control_id.dedup_status.macs2_narrow.12500
│   │   │   │       ├── replicate_id2_vs_control_id.dedup_status.macs2_broad.12500
│   │   │   │       ├── replicate_id2_vs_control_id.dedup_status.macs2_narrow.12500
│   │   │   └── peak_output
This should describe any input files needed, including config files, manifest files, and sample files. An example is provided below.
The pipeline is controlled through editing configuration and manifest files. Defaults are found in the /WORKDIR/config and /WORKDIR/manifest directories, after initialization.
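For example, after running --runmode=init you can list the defaults that were copied into the working directory (paths are placeholders):
ls /path/to/output/dir/config
ls /path/to/output/dir/manifest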
The configuration files control parameters and software of the pipeline. These files are listed below:
The cluster configuration file dictates the resources to be used during submission to Biowulf HPC. There are two ways to control these parameters: by editing the default settings, or by creating or editing entries for individual rules. These parameters should be edited with caution, and only after significant testing.
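A hypothetical sketch of these two levels of control (the actual keys and values in the cluster config may differ):
__default__:        # defaults applied to every rule
  mem: 8g
  threads: 2
macs2_narrow:       # override resources for an individual rule
  mem: 16g
  threads: 4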
The tools configuration file dictates the version of each software or program that is being used in the pipeline.
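A hypothetical sketch (the actual tool names and version strings in the tools config may differ):
bowtie2: "bowtie/2.4.2"      # hypothetical module/version strings
macs2: "macs/2.2.7.1"
samtools: "samtools/1.15"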
There are several groups of parameters that the user can edit to control the various aspects of the pipeline. These are:
Users can select deduplicated peaks (dedup), non-deduplicated peaks (no_dedup), or both through this user parameter.
dupstatus: "dedup, no_dedup"
MACS2 can be run with or without a control; adding a control will increase peak specificity. Selecting "Y" for the macs2_control parameter will use the paired control sample provided in the sample manifest.
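For example, in the config YAML (the key name comes from this section; the value is "Y" or "N"):
macs2_control: "N"   # set to "Y" to use the paired control from the sample manifest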
Additional reference files may be added to the pipeline if other species are to be used.
The absolute file paths which must be included are:
The following information must be included:
There are two manifests: one which is required for all pipeline runs, and one that is only required if running a differential analysis. These files describe information on the samples and desired contrasts. The paths of these files are defined in the snakemake_config.yaml file. These files are:
This manifest includes sample-level information. It includes the following column headers:
An example sampleManifest file is shown below:
sampleName | replicateNumber | isControl | controlName | controlReplicateNumber | path_to_R1 | path_to_R2 |
---|---|---|---|---|---|---|
53_H3K4me3 | 1 | N | HN6_IgG_rabbit_negative_control | 1 | PIPELINE_HOME/.test/53_H3K4me3_1.R1.fastq.gz | PIPELINE_HOME/.test/53_H3K4me3_1.R2.fastq.gz |
53_H3K4me3 | 2 | N | HN6_IgG_rabbit_negative_control | 1 | PIPELINE_HOME/.test/53_H3K4me3_2.R1.fastq.gz | PIPELINE_HOME/.test/53_H3K4me3_2.R2.fastq.gz |
HN6_H3K4me3 | 1 | N | HN6_IgG_rabbit_negative_control | 1 | PIPELINE_HOME/.test/HN6_H3K4me3_1.R1.fastq.gz | PIPELINE_HOME/.test/HN6_H3K4me3_1.R2.fastq.gz |
HN6_H3K4me3 | 2 | N | HN6_IgG_rabbit_negative_control | 1 | PIPELINE_HOME/.test/HN6_H3K4me3_2.R1.fastq.gz | PIPELINE_HOME/.test/HN6_H3K4me3_2.R2.fastq.gz |
HN6_IgG_rabbit_negative_control | 1 | Y | - | - | PIPELINE_HOME/.test/HN6_IgG_rabbit_negative_control_1.R1.fastq.gz | PIPELINE_HOME/.test/HN6_IgG_rabbit_negative_control_1.R2.fastq.gz |
This should include all information about the various run commands provided within the pipeline.
The Snakemake workflow has multiple options:
Usage: bash ./data/CCBR_Pipeliner/Pipelines/CARLISLE/carlisle -m/--runmode=<RUNMODE> -w/--workdir=<WORKDIR>
1. RUNMODE: [Type: String] Valid options:
 *) init : initialize workdir
 *) run : run with slurm
 *) reset : DELETE workdir dir and re-init it
 *) dryrun : dry run snakemake to generate DAG
 *) unlock : unlock workdir if locked by snakemake
 *) runlocal : run without submitting to sbatch
 *) testrun : run on cluster with included test dataset
2. WORKDIR: [Type: String]: Absolute or relative path to the output folder with write permissions.
The following explains each of the command options:
To run any of these commands, follow the syntax:
bash ./data/CCBR_Pipeliner/Pipelines/CARLISLE/carlisle --runmode=COMMAND --workdir=/path/to/output/dir
A typical command workflow, running on the cluster, is as follows:
bash ./data/CCBR_Pipeliner/Pipelines/CARLISLE/carlisle --runmode=init --workdir=/path/to/output/dir

bash ./data/CCBR_Pipeliner/Pipelines/CARLISLE/carlisle --runmode=dryrun --workdir=/path/to/output/dir

bash ./data/CCBR_Pipeliner/Pipelines/CARLISLE/carlisle --runmode=run --workdir=/path/to/output/dir
This should walk the user through the steps of running the pipeline using test data.
Welcome to the CARLISLE Pipeline Tutorial!
Review the information on the Getting Started page for a complete overview of the pipeline. The tutorial below will use test data available on NIH Biowulf HPC only. All example code will assume you are running v1.0 of the pipeline, using test data available on GitHub.
A. Change working directory to the CARLISLE repository
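For example (a sketch; the repository URL is assumed from the CCBR GitHub organization, and v1.0 is the tag mentioned above):
git clone --branch v1.0 https://github.com/CCBR/CARLISLE.git
cd CARLISLE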
B. Initialize Pipeline
bash ./path/to/dir/carlisle --runmode=init --workdir=/path/to/output/dir
This test data consists of sub-sampled inputs: two pairs of two replicate samples and one control. The reference used is hg38.
Test data is included in the .test directory as well as the config directory.
A. Run the test command to prepare the data, perform a dry-run, and submit to the cluster:
bash ./path/to/dir/carlisle --runmode=testrun --workdir=/path/to/output/dir
The job summary for the testrun is as follows:

Job stats:
job count min threads max threads
----------------------------- ------- ------------- -------------
DESeq 24 1 1
align 9 56 56
alignstats 9 2 2
all 1 1 1
bam2bg 9 4 4
create_contrast_data_files 24 1 1
create_contrast_peakcaller_files 12 1 1
create_reference 1 32 32
create_replicate_sample_table 1 1 1
diffbb 24 1 1
filter 18 2 2
findMotif 96 6 6
gather_alignstats 1 1 1
go_enrichment 12 1 1
gopeaks_broad 16 2 2
gopeaks_narrow 16 2 2
macs2_broad 16 2 2
macs2_narrow 16 2 2
make_counts_matrix 24 1 1
multiqc 2 1 1
qc_fastqc 9 1 1
rose 96 2 2
seacr_relaxed 16 2 2
seacr_stringent 16 2 2
spikein_assessment 1 1 1
trim 9 56 56
total 478 1 56
Review the expected outputs on the Output page. If there are errors, review and perform the steps described on the Troubleshooting page as needed.
This should include basic information on how to troubleshoot the pipeline. It should also include the main pipeline developers' contact information for users to utilize, as needed.
Recommended steps to troubleshoot the pipeline.
Check your email for a message regarding pipeline failure. You will receive an email from slurm@biowulf.nih.gov with the subject: Slurm Job_id=[#] Name=CARLISLE Failed, Run time [time], FAILED, ExitCode 1.
Review the logs in two ways:
- The master SLURM log, stored in /path/to/results/dir/ and titled slurm-[jobid].out. Reviewing this file will tell you which rule errored and, for any local SLURM jobs, provide error details.
- The individual rule logs, stored in /path/to/results/dir/logs/. Each rule will include a .err and .out file, with the following formatting: {rulename}.{masterjobID}.{individualruleID}.{wildcards from the rule}.{out or err}
After addressing the issue, unlock the output directory, perform another dry-run and check the status of the pipeline, then resubmit to the cluster.
#unlock dir
bash ./data/CCBR_Pipeliner/Pipelines/CARLISLE/carlisle --runmode=unlock --workdir=/path/to/output/dir

#perform dry-run
bash ./data/CCBR_Pipeliner/Pipelines/CARLISLE/carlisle --runmode=dryrun --workdir=/path/to/output/dir

#submit to cluster
bash ./data/CCBR_Pipeliner/Pipelines/CARLISLE/carlisle --runmode=run --workdir=/path/to/output/dir
If, after troubleshooting, the error cannot be resolved, or if a bug is found, please create an issue and send an email to Samantha Chill.