From 1b954efc006ef15097be90ac185941af3530fc28 Mon Sep 17 00:00:00 2001 From: "Josh L. Espinoza" Date: Tue, 16 May 2023 13:02:57 -0700 Subject: [PATCH] More walkthroughs --- walkthroughs/README.md | 17 ++-- walkthroughs/adapting_commands_for_docker.md | 99 +++++++++++++++++++ ...specting_for_biosynthetic_gene_clusters.md | 2 +- 3 files changed, 110 insertions(+), 8 deletions(-) create mode 100644 walkthroughs/adapting_commands_for_docker.md diff --git a/walkthroughs/README.md b/walkthroughs/README.md index fcc8099..3b66c45 100644 --- a/walkthroughs/README.md +++ b/walkthroughs/README.md @@ -31,13 +31,16 @@ sbatch -J ${N} -N 1 -c ${N_JOBS} --ntasks-per-node=1 -o logs/${N}.o -e logs/${N} #### Available walkthroughs: -* [Downloading and preprocessing fastq files](download_and_preprocess_reads.md) - Explains how to download reads from NCBI and run *VEBA's* `preprocess.py` module to decontaminate either metagenomic and/or metatranscriptomic reads. -* [Complete end-to-end metagenomics analysis](end-to-end_metagenomics.md) - Goes through assembling metagenomic reads, binning, clustering, classification, and annotation. We also show how to use the unbinned contigs in a pseudo-coassembly with guidelines on when it's a good idea to go this route. -* [Recovering viruses from metatranscriptomics](recovering_viruses_from_metatranscriptomics.md) - Goes through assembling metatranscriptomic reads, viral binning, clustering, and classification. -* [Read mapping and counts tables](read_mapping_and_counts_tables.md) - Read mapping and generating counts tables at the contig, MAG, SLC, ORF, and SSO levels. -* [Phylogenetic inference](phylogenetic_inference.md) - Phylogenetic inference of eukaryotic diatoms. -* [Setting up *bona fide* coassemblies for metagenomics or metatranscriptomics](setting_up_coassemblies.md) - In the case where all samples are of low depth, it may be useful to use coassembly instead of sample-specific approaches. 
This walkthrough goes through concatenating reads, creating a reads table, coassembly of concatenated reads, aligning sample-specific reads to the coassembly for multiple sorted BAM files, and mapping reads for scaffold/transcript-level counts. Please note that a coassembly differs from the pseudo-coassembly concept introduced in the VEBA publication. For more information regarding the differences between *bona fide* coassembly and pseud-coassembly, please refer to [*23. What's the difference between a coassembly and a pseudo-coassembly?*](https://github.com/jolespin/veba/blob/main/FAQ.md#23-whats-the-difference-between-a-coassembly-and-a-pseudo-coassembly). -* [Bioprospecting for biosynthetic gene clusters](bioprospecting_for_biosynthetic_gene_clusters.md) - Detecting biosynthetic gene clusters (BGC) with and scoring novelty of BGCs. +* **[Downloading and preprocessing fastq files](download_and_preprocess_reads.md)** - Explains how to download reads from NCBI and run *VEBA's* `preprocess.py` module to decontaminate metagenomic and/or metatranscriptomic reads. +* **[Complete end-to-end metagenomics analysis](end-to-end_metagenomics.md)** - Goes through assembling metagenomic reads, binning, clustering, classification, and annotation. We also show how to use the unbinned contigs in a pseudo-coassembly with guidelines on when it's a good idea to go this route. +* **[Recovering viruses from metatranscriptomics](recovering_viruses_from_metatranscriptomics.md)** - Goes through assembling metatranscriptomic reads, viral binning, clustering, and classification. +* **[Read mapping and counts tables](read_mapping_and_counts_tables.md)** - Read mapping and generating counts tables at the contig, MAG, SLC, ORF, and SSO levels. +* **[Phylogenetic inference](phylogenetic_inference.md)** - Phylogenetic inference of eukaryotic diatoms. 
+* **[Setting up *bona fide* coassemblies for metagenomics or metatranscriptomics](setting_up_coassemblies.md)** - In the case where all samples are of low depth, it may be useful to use coassembly instead of sample-specific approaches. This walkthrough goes through concatenating reads, creating a reads table, coassembly of concatenated reads, aligning sample-specific reads to the coassembly for multiple sorted BAM files, and mapping reads for scaffold/transcript-level counts. Please note that a coassembly differs from the pseudo-coassembly concept introduced in the VEBA publication. For more information regarding the differences between *bona fide* coassembly and pseudo-coassembly, please refer to [*23. What's the difference between a coassembly and a pseudo-coassembly?*](https://github.com/jolespin/veba/blob/main/FAQ.md#23-whats-the-difference-between-a-coassembly-and-a-pseudo-coassembly). +* **[Bioprospecting for biosynthetic gene clusters](bioprospecting_for_biosynthetic_gene_clusters.md)** - Detecting biosynthetic gene clusters (BGCs) and scoring their novelty. +* **[Converting counts tables](converting_counts_tables.md)** - Convert your counts table (with or without metadata) to [anndata](https://anndata.readthedocs.io/en/latest/index.html) or [biom](https://biom-format.org/) format. Also supports [Pandas pickle](https://pandas.pydata.org/docs/reference/api/pandas.read_pickle.html) format. +* **[Adapting commands for Docker](adapting_commands_for_docker.md)** - Explains how to download and use Docker for running VEBA. + ___________________________________________ diff --git a/walkthroughs/adapting_commands_for_docker.md b/walkthroughs/adapting_commands_for_docker.md new file mode 100644 index 0000000..867fd33 --- /dev/null +++ b/walkthroughs/adapting_commands_for_docker.md @@ -0,0 +1,99 @@ +### Adapting commands for use with Docker +Containerization makes VEBA portable to virtually any system, including cloud resources such as AWS or Google Cloud. 
To address this, I've been containerizing all of the modules. Here is a guide to using these containers. + +_____________________________________________________ + +#### Steps: + +1. Install Docker Engine +2. Pull the image for the module +3. Run Docker container +4. Get the results + +_____________________________________________________ + + +#### 1. Install Docker Engine + +Refer to the [Docker documentation](https://docs.docker.com/engine/install/). + + +#### 2. Pull Docker image for the module + +Let's say you wanted to use the `assembly.py` module. Download the Docker image like so: + +```bash +VERSION=1.1.2 +docker image pull veba/assembly:${VERSION} +``` + +#### 3. Run Docker container + +One key difference when running a Docker container is that you need to specify paths *inside* the container, but it's pretty simple. Basically, we link a local directory to a container directory using the `--volume` argument. + +For example, here's how we would run the `assembly.py` module. First, let's just look at the options: + +```bash +# Version +VERSION=1.1.2 + +# Image +DOCKER_IMAGE="veba/assembly:${VERSION}" + +docker run --name VEBA-assembly --rm -it ${DOCKER_IMAGE} -c "assembly.py -h" +``` + +If we wanted to run it interactively, start the container with `bash` (it automatically loads the appropriate `conda` environment): + +```bash +docker run --name VEBA-assembly --rm -it ${DOCKER_IMAGE} -c "bash" +``` + +That said, this is the `assembly.py` module, so if you're running anything other than a toy dataset, you probably want to run it on the grid so you can go do something else. + +Below, we specify the `LOCAL_WORKING_DIRECTORY`, which is just the current local directory. We also need to specify the `CONTAINER_WORKING_DIRECTORY`, which will be `/data/` on the volume. We link these with the `--volume` argument so anything created in the `CONTAINER_WORKING_DIRECTORY` will get mirrored into the `LOCAL_WORKING_DIRECTORY`. That is where we want to put the output files. 
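The `--volume` mirroring described above can be sanity-checked with a throwaway container before launching a real job. A minimal sketch, not part of VEBA itself (the `busybox` image and `mount_check.txt` filename are just illustrative; requires a working Docker installation):

```shell
# Link the current directory to /data/ inside a disposable container,
# write a file under /data/, and confirm it shows up locally.
LOCAL_WORKING_DIRECTORY=$(pwd)
CONTAINER_WORKING_DIRECTORY=/data/
docker run --rm \
    --volume ${LOCAL_WORKING_DIRECTORY}:${CONTAINER_WORKING_DIRECTORY} \
    busybox sh -c "echo ok > /data/mount_check.txt"
cat mount_check.txt  # -> ok, proving the container path is mirrored locally
rm mount_check.txt
```

If the `cat` fails, the volume mapping is wrong and any real output would be stranded in the container.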
+ +Note: If we don't specify the `${CONTAINER_WORKING_DIRECTORY}` prefix for the output, then the output will be stranded in the container. + +```bash + +# Directories +LOCAL_WORKING_DIRECTORY=. +CONTAINER_WORKING_DIRECTORY=/data/ + +# Inputs +ID=S1 +R1=${ID}_1.fastq.gz +R2=${ID}_2.fastq.gz + +# Command +CMD="assembly.py -1 ${CONTAINER_WORKING_DIRECTORY}/${R1} -2 ${CONTAINER_WORKING_DIRECTORY}/${R2} -n ${ID} -o ${CONTAINER_WORKING_DIRECTORY}/veba_output/assembly/" + +docker run \ + --name VEBA-assembly__${ID} \ + --rm \ + --volume ${LOCAL_WORKING_DIRECTORY}:${CONTAINER_WORKING_DIRECTORY} \ + ${DOCKER_IMAGE} \ + -c "${CMD}" +``` + +#### 4. Get the results + +Now that the container has finished running the commands, let's view the results. If you don't have `tree` in your environment, you should install it because it's useful: `mamba install -c conda-forge tree` + +To view it all: + +```bash +tree veba_output/assembly/${ID}/ +``` + +or just the output: + +```bash +ls -lhS veba_output/assembly/${ID}/output +``` + + +#### Next steps: + +Whatever you want to do. \ No newline at end of file diff --git a/walkthroughs/bioprospecting_for_biosynthetic_gene_clusters.md b/walkthroughs/bioprospecting_for_biosynthetic_gene_clusters.md index 8016ed4..6697d20 100644 --- a/walkthroughs/bioprospecting_for_biosynthetic_gene_clusters.md +++ b/walkthroughs/bioprospecting_for_biosynthetic_gene_clusters.md @@ -69,4 +69,4 @@ The following output files will produced: #### Next steps: -Synthesize products and save humanity. \ No newline at end of file +Synthesize products, preserve the ecosystem, and save humanity. \ No newline at end of file