Decide what to do with coverage reporting in the presence of large deletions.
Currently, we can have the following two cases:
- A query aligned as `100M600D100M` somewhere in the reference: coverage values for the big deletion in the middle are missing (the reference region is not covered by the query).
- A query aligned as `100M599D100M` somewhere in the reference: coverage values for the big deletion in the middle are present (the reference region is covered by the query).
The threshold of 600 deleted bases is somewhat arbitrary (see the sketch of the current behaviour below).
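
For illustration, a minimal sketch (not the pipeline's actual code) of how such a threshold-based rule behaves when applied to a CIGAR string; the 600 bp cutoff and the function name are assumptions for this example:

```python
import re

DELETION_CUTOFF = 600  # assumed value for illustration

def covered_reference_positions(cigar: str, ref_start: int) -> set[int]:
    """Return reference positions treated as covered by a query with this CIGAR."""
    covered = set()
    pos = ref_start
    for length, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar):
        length = int(length)
        if op in "M=X":                      # aligned bases always count as covered
            covered.update(range(pos, pos + length))
            pos += length
        elif op in "DN":                     # deletion / skip consumes the reference
            if op == "D" and length < DELETION_CUTOFF:
                # short deletion: still reported as covered
                covered.update(range(pos, pos + length))
            pos += length
        # insertions and clipping consume only the query, not the reference
    return covered

# 100M599D100M -> deletion positions counted as covered (799 positions)
# 100M600D100M -> deletion positions missing from coverage (200 positions)
print(len(covered_reference_positions("100M599D100M", 0)))
print(len(covered_reference_positions("100M600D100M", 0)))
```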
We would like to develop a better decision procedure for what to report as "coverage".
Possibly, one that looks at the individual reads (from the fastq files) to check whether reads actually span the big deletion, or whether the query is really two separate consensus sequences "stitched" together; a rough sketch of such a check follows below.
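
One way the read-level check could look, sketched here with pysam; the file name, coordinates, and the spanning margin are illustrative assumptions, not part of the pipeline:

```python
import pysam

def reads_spanning(bam_path: str, contig: str, del_start: int, del_end: int,
                   margin: int = 20) -> int:
    """Count reads whose alignment starts before and ends after the deleted interval."""
    n = 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(contig, del_start, del_end):
            if read.is_unmapped or read.reference_end is None:
                continue
            if (read.reference_start <= del_start - margin
                    and read.reference_end >= del_end + margin):
                n += 1
    return n

# If no read spans the interval, the "deletion" may just be two consensus pieces
# stitched together, and reporting coverage there would be misleading.
if reads_spanning("sample.bam", "reference", 100, 700) == 0:
    print("no read-level evidence for the deletion; treat region as uncovered")
```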