#!/usr/bin/env nextflow
nextflow.enable.dsl=2
params.outdir = "."
params.genome = ""
params.verbose = false
params.single_end = false // default mode is auto-detect. NOTE: params are handed over to included modules automatically
params.bamcoverage_args = '--normalizeUsing CPM --binSize 10'
params.help = false
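// All of the above params can be overridden on the command line; an illustrative invocation
// (hypothetical paths and values) might look like:
//   nf_bam2bigwig --outdir bigwigs --bamcoverage_args="--normalizeUsing CPM --binSize 25" *.bam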
// Show help message and exit
if (params.help){
    helpMessage()
    exit 0
}

if (params.verbose){
    println ("[WORKFLOW] BAM2BIGWIG ARGS: " + params.bamcoverage_args)
}
include { makeFilesChannel; getFileBaseNames } from './nf_modules/files.mod.nf'
include { BAMCOVERAGE } from './nf_modules/bamcoverage.mod.nf'
index_files = []
for (arg in args) {
    index_files.add(arg + ".bai")
}

file_ch = Channel.fromPath(args) // bam2bigwig expects just the BAM file paths to be passed in

// We need to build a channel for the index files. These are not passed in explicitly, but they are
// expected to be in the same location as the BAM files, just with an additional .bai extension.
// We therefore simply build a new channel from the BAM file names with '.bai' appended.
index_ch = Channel.fromPath(index_files)
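// To illustrate the layout this assumes (hypothetical file names):
//   sample1.bam  accompanied by  sample1.bam.bai
//   sample2.bam  accompanied by  sample2.bam.bai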
// file_ch.subscribe{ println "Got: $it" }
workflow {

    main:
        BAMCOVERAGE (file_ch, index_ch, params.outdir, params.bamcoverage_args, params.verbose)
}
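// For orientation: BAMCOVERAGE (defined in nf_modules/bamcoverage.mod.nf) wraps the deeptools
// 'bamCoverage' command and, with the default arguments, should end up running something roughly
// like the following (sketch with an illustrative file name; the exact command lives in the module):
//   bamCoverage --bam sample.bam -o sample.bw --normalizeUsing CPM --binSize 10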
// Since workflows with very long command lines tend to fail to get rendered at all, I was experimenting with a
// minimal execution summary report so we at least know what the working directory was...
workflow.onComplete {

    def msg = """\
        Pipeline execution summary
        ---------------------------
        Jobname     : ${workflow.runName}
        Completed at: ${workflow.complete}
        Duration    : ${workflow.duration}
        Success     : ${workflow.success}
        workDir     : ${workflow.workDir}
        exit status : ${workflow.exitStatus}
        """
        .stripIndent()

    sendMail(to: "${workflow.userName}@babraham.ac.uk", subject: 'Minimal pipeline execution report', body: msg)
}
def helpMessage() {

    log.info"""
    >>

    SYNOPSIS:

    This workflow takes in a list of sorted BAM files and then uses the deeptools bamCoverage
    program to turn each of them into a bigWig file.

    NOTE: The BAM files coming into this pipeline *must* be sorted, and must have an accompanying
    .bai index file with them. You only specify the BAM file names; the index file names are
    inferred. If your BAM files aren't sorted, you can sort them with the nf_sortIndexBAM
    pipeline first.

    ==============================================================================================================

    USAGE:

    nf_bam2bigwig [options] <input files>


    Mandatory arguments:
    ====================

      <input_files>                   List of sorted, indexed input BAM files, e.g. '*bam'. NB: the .bai index files must also be present.


    Tool-specific options:
    ======================

      --bamcoverage_args="[str]"      This option can take any number of options that are compatible with 'bamCoverage' to modify its
                                      default behaviour. For more detailed information on available options please refer to the deeptools
                                      documentation, or run 'bamCoverage --help' on the command line. Please note that the format
                                      ="your options" needs to be strictly adhered to in order to work correctly.
                                      [Default: '--normalizeUsing CPM --binSize 10']
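                                      For example: --bamcoverage_args="--normalizeUsing RPKM --binSize 25"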

    Other options:
    ==============

      --outdir [str]                  Path to the output directory. [Default: current working directory]

      --verbose                       More verbose status messages. [Default: OFF]

      --help                          Displays this help message and exits.


    Workflow options:
    =================

    Please note the single '-' hyphen for the following options!

      -resume                         If a pipeline workflow has been interrupted or stopped (e.g. by accidentally closing a laptop),
                                      this option will attempt to resume the workflow at the point it got interrupted by using
                                      Nextflow's caching mechanism. This may save a lot of time.

      -bg                             Sends the entire workflow into the background, thus disconnecting it from the terminal session.
                                      This option launches a daemon process (which will keep running on the headnode) that watches over
                                      your workflow and submits new jobs to the SLURM queue as required. Use this option for big
                                      pipeline jobs, or whenever you do not want to watch the progress yourself. Upon completion, the
                                      pipeline will send you an email with the job details. This option is HIGHLY RECOMMENDED!

      -process.executor=local         Temporarily changes where the workflow is executed to the 'local' machine. See also the
                                      nextflow.config file for more details. [Default: slurm]

    <<
    """.stripIndent()
}