While examining the performance of detaxizer on large datasets (50-100 GB), I noticed the Nextflow process `RENAME_FASTQ_HEADERS_PRE` using more than 100 GB of RAM and repeatedly terminating the pipeline. The cause is a dictionary of renamed read headers that grows unbounded during renaming, leading to failure with large input files.
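For context, the renaming loop retains every old/new header pair in a dict until the whole input has been read. A simplified sketch of the pattern (illustrative only, not the exact script; file names and the header scheme are placeholders):

```python
import gzip
from Bio import SeqIO

mapping = {}  # new header -> original header, retained for the whole run
with gzip.open("input.fastq.gz", "rt") as handle:  # placeholder file name
    for i, record in enumerate(SeqIO.parse(handle, "fastq")):
        new_id = f"read_{i}"
        mapping[new_id] = record.description  # one entry per read, never released
        record.id = new_id
# the mapping is only written out after the loop, so memory scales with read count
```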
In order to run detaxizer on a machine with 128 GB of RAM, I have implemented buffered writing to bound the growth of the renaming dict, and offer my changes as a PR: #62
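The buffered approach amounts to flushing the dict to disk whenever it reaches a threshold. A minimal sketch, assuming the mapping is written as a TSV (the threshold, file names, and `flush` helper are illustrative; see #62 for the actual change):

```python
import gzip
from Bio import SeqIO

BUFFER_SIZE = 100_000  # illustrative flush threshold; tune to available RAM

def flush(mapping, out):
    """Write the buffered new->old header pairs and release the memory."""
    for new_id, old_header in mapping.items():
        out.write(f"{new_id}\t{old_header}\n")
    mapping.clear()

mapping = {}
with gzip.open("input.fastq.gz", "rt") as handle, open("headers.tsv", "w") as out:
    for i, record in enumerate(SeqIO.parse(handle, "fastq")):
        new_id = f"read_{i}"
        mapping[new_id] = record.description
        record.id = new_id
        if len(mapping) >= BUFFER_SIZE:
            flush(mapping, out)
    flush(mapping, out)  # write any remaining entries
# peak dict size is now bounded by BUFFER_SIZE rather than by read count
```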
Process `RENAME_FASTQ_HEADERS_PRE` is also the most time-consuming process in the workflow, due to its use of Biopython's relatively slow FASTQ parser. I have replaced this with the >10x faster dnaio parser, and have accordingly changed the process label to `process_low`, allowing more renaming processes to run simultaneously for a given resource allocation.
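Single-end renaming with dnaio looks roughly like this (a sketch; file names are placeholders, and the real process also handles paired-end input and writes out the header mapping as above):

```python
import dnaio

# dnaio infers format and compression from the file extensions
with dnaio.open("input.fastq.gz") as reader, \
        dnaio.open("renamed.fastq.gz", mode="w") as writer:
    for i, record in enumerate(reader):
        record.name = f"read_{i}"  # replace the original header
        writer.write(record)
```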