Leon has an option, `-seq-only`, that ignores read names during compression. When decompressing a Leon archive produced with this option, Leon automatically names the decompressed sequences with numbers starting from 0. The counter runs up to 50,000 and then resets to 0. Additionally, an empty sequence is generated at each 50,000-read boundary. Fragment of the decompressed output:
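(The original fragment is not preserved here; the following is an illustrative reconstruction of the pattern described above, with sequence data replaced by placeholders.)

```
> 49999
<sequence data>
> 50000
> 0
<sequence data>
> 1
<sequence data>
```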
The two main problems with this counting are:

1. An empty sequence (with the name " 50000") is generated during decompression, although the original file contained no such sequence.
2. Sequence names are not unique. Once the data is sufficiently large, the decompressed file contains multiple sequences with each name.
To fix this problem, I suggest removing the artificial upper bound of 50,000 reads.
Also, considering the possibility of huge datasets, I recommend making sure that the counter cannot overflow (e.g., by using an arbitrary-precision number).
In addition, I would suggest not putting a space between ">" and the name, and starting the count at 1 instead of 0. This would make the output a bit friendlier to downstream tools and to interpretation, but these points are less important and can be considered a matter of preference.
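For illustration, here is a minimal C++ sketch of such a naming scheme (not Leon's actual code; `writeSequences` is a hypothetical helper). A 64-bit counter is shown as a pragmatic alternative to arbitrary precision, since 2^64 reads is far beyond any realistic dataset:

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical helper: emit FASTA records with sequential names.
// Names start at 1, there is no space after '>', and the 64-bit
// counter never resets, so names stay unique and no empty record
// is emitted at any boundary.
void writeSequences(std::ostream& out, const std::vector<std::string>& reads) {
    std::uint64_t name = 1;  // start at 1, not 0
    for (const std::string& read : reads) {
        out << '>' << name++ << '\n' << read << '\n';  // ">1", not "> 0"
    }
}

int main() {
    writeSequences(std::cout, {"ACGT", "TTGA"});
    // Output:
    // >1
    // ACGT
    // >2
    // TTGA
}
```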