#!/usr/bin/env nextflow
nextflow.enable.dsl=2

// last modified 12 Oct 2020

params.outdir      = "."
params.fastqc_args = ''
params.verbose     = false
params.single_end  = false  // default mode is auto-detect. NOTE: params are handed over automatically
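
// Each of the params above can be overridden on the command line with the matching double-dash
// option (e.g. '--outdir my_results' or '--verbose'), which is standard Nextflow behaviour for
// pipeline parameters.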

if (params.verbose){
    println ("[WORKFLOW] FASTQC ARGS: " + params.fastqc_args)
}

params.help = false
// Show help message and exit
if (params.help){
    helpMessage()
    exit 0
}

// Example command:
// nf_fastqc --fastqc_args="--nogroup --adapters ~/adapter_list_with_contamination_one_off.txt" *fastq.gz
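//
// A further, purely illustrative invocation (the output directory name is hypothetical), combining
// the documented --outdir option with Nextflow's -bg background mode described in the help text below:
// nf_fastqc --outdir ./fastqc_results --fastqc_args="--nogroup" -bg *fq.gz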

include { makeFilesChannel; getFileBaseNames } from './nf_modules/files.mod.nf'
include { FASTQC }                             from './nf_modules/fastqc.mod.nf'
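
// Build the input channel from the command-line file arguments. The files module (not shown here)
// presumably groups paired-end files and auto-detects single-end input, as suggested by the
// params.single_end default above and by the note in the help text below.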
file_ch = makeFilesChannel(args)

workflow {

    main:
        FASTQC(file_ch, params.outdir, params.fastqc_args, params.verbose)
}
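
// FASTQC is the process imported from fastqc.mod.nf above; its positional inputs here are the file
// (pair) channel, the output directory, any extra FastQC arguments and the verbosity flag. The
// process definition itself (and the outputs it emits) lives in that module, which is not shown here.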

// Since workflows with very long command lines tend to fail to get rendered at all, I was experimenting with a
// minimal execution summary report so we at least know what the working directory was...
workflow.onComplete {

    def msg = """\
        Pipeline execution summary
        ---------------------------
        Completed at: ${workflow.complete}
        Duration    : ${workflow.duration}
        Success     : ${workflow.success}
        workDir     : ${workflow.workDir}
        exit status : ${workflow.exitStatus}
        """
        .stripIndent()

    sendMail(to: "${workflow.userName}@babraham.ac.uk", subject: 'Minimal pipeline execution report', body: msg)
}
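
// Note: sendMail uses the mail settings of the Nextflow installation (the 'mail' config scope if
// configured, otherwise the system's sendmail/mail tool), and the babraham.ac.uk recipient address
// above is site-specific.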

def helpMessage() {

    log.info"""
    >>

    SYNOPSIS:

    This workflow takes in a list of filenames (in FastQ format) and runs FastQC on the files (by default on the Babraham stone cluster).
    Running this stand-alone workflow executes FastQC with default parameters (i.e. 'fastqc file.fastq.gz'). To add additional parameters,
    please consider specifying tool-specific arguments that are compatible with FastQC (see '--fastqc_args' below).

    ==============================================================================================================

    USAGE:

    nf_fastqc [options] <input files>

    Mandatory arguments:
    ====================

      <input files>            List of input files, e.g. '*fastq.gz' or '*fq.gz'. All supplied files are processed with FastQC,
                               irrespective of whether they were detected as single-end files or paired-end file pairs.

    Tool-Specific Options:
    ======================

      --fastqc_args="[str]"    This option can take any number of options that are compatible with FastQC to modify its default
                               behaviour. For more detailed information on available options please refer to the FastQC documentation,
                               or run 'fastqc --help' on the command line. As an example, to run FastQC without grouping of bases for
                               reads >50bp and to use a specific file with non-default adapter sequences, use
                               '--fastqc_args="--nogroup --adapters ./non_default_adapter_file.txt"'. Please note that the format
                               ="your options" (including the equals sign and the quotes) needs to be adhered to exactly for the
                               arguments to be passed on correctly. [Default: None]

    Other Options:
    ==============

      --outdir [str]           Path to the output directory. [Default: current working directory]

      --verbose                More verbose status messages. [Default: OFF]

      --help                   Displays this help message and exits.

    Workflow Options:
    =================

    Please note the single '-' hyphen for the following options!

      -resume                  If a pipeline workflow has been interrupted or stopped (e.g. by accidentally closing a laptop),
                               this option will attempt to resume the workflow at the point it got interrupted by using
                               Nextflow's caching mechanism. This may save a lot of time.

      -bg                      Sends the entire workflow into the background, thus disconnecting it from the terminal session.
                               This option launches a daemon process (which will keep running on the headnode) that watches over
                               your workflow, and submits new jobs to the SLURM queue as required. Use this option for big pipeline
                               jobs, or whenever you do not want to watch the status progress yourself. Upon completion, the
                               pipeline will send you an email with the job details. This option is HIGHLY RECOMMENDED!

      -process.executor=local  Temporarily changes where the workflow is executed to the 'local' machine. See also the Nextflow
                               config file for more details. [Default: slurm]

    <<
    """.stripIndent()
}