Burrows-Wheeler transform for terabases

November 03, 2015 · Entered Twilight · 🏛 Data Compression Conference

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, Makefile, README.md, bwt.cpp, bwt.h, bwt_convert.cpp, bwt_inspect.cpp, bwt_merge.cpp, fmi.cpp, fmi.h, formats.cpp, formats.h, paper, support.cpp, support.h, utils.cpp, utils.h

Authors: Jouni Sirén
arXiv ID: 1511.00898
Category: cs.DS: Data Structures & Algorithms
Citations: 28
Venue: Data Compression Conference
Repository: https://github.com/jltsiren/bwt-merge ⭐ 24
Last Checked: 1 month ago
Abstract
In order to avoid the reference bias introduced by mapping reads to a reference genome, bioinformaticians are investigating reference-free methods for analyzing sequenced genomes. With large projects sequencing thousands of individuals, this raises the need for tools capable of handling terabases of sequence data. A key method is the Burrows-Wheeler transform (BWT), which is widely used for compressing and indexing reads. We propose a practical algorithm for building the BWT of a large read collection by merging the BWTs of subcollections. With our 2.4 Tbp datasets, the algorithm can merge 600 Gbp/day on a single system, using 30 gigabytes of memory overhead on top of the run-length encoded BWTs.
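The two objects the abstract leans on, the BWT itself and its run-length encoding, can be illustrated with a short sketch. This is a toy rotation-sorting construction for intuition only, not the paper's merge algorithm, and the function names are illustrative:

```python
def bwt(text, sentinel="$"):
    """Burrows-Wheeler transform via sorted rotations (toy O(n^2 log n)
    construction). Real tools build the BWT incrementally or, as in
    bwt-merge, combine run-length encoded BWTs of subcollections."""
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    # The BWT is the last column of the sorted rotation matrix.
    return "".join(rotation[-1] for rotation in rotations)

def run_length_encode(bwt_string):
    """Collapse the BWT into (symbol, run length) pairs. Reads sampled
    from the same genome make the BWT highly repetitive, which is why
    run-length encoding keeps terabase-scale indexes compact."""
    runs = []
    for symbol in bwt_string:
        if runs and runs[-1][0] == symbol:
            runs[-1][1] += 1
        else:
            runs.append([symbol, 1])
    return [tuple(run) for run in runs]
```

For example, `bwt("banana")` yields `"annb$aa"`, whose equal symbols cluster into five runs instead of seven characters; on real read collections the compression is far more dramatic.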
