
daemonk | 1 year ago

Productionizing and scaling nanopore sequencing is definitely an achievement. With that operational obstacle out of the way, you get to think about sequencing in terms of “streams” of data rather than “batches” of huge amounts of data (Illumina). That confers a huge operational and commercial benefit.


kjkjadksj | 1 year ago

Can you explain what you mean by streams? My familiarity with long-read data is that you still deal with batches of huge amounts of data at the end of the day; it's just that the sequencing reads are of course longer than 150 bp paired-end reads.

daemonk | 1 year ago

Nanopore spits out data as DNA goes through the pores, and depending on the flow cell you are using, you can load or top up the flow cell in smaller amounts. So you can potentially have quick turnaround times without having to wait for enough samples to pile up before you batch-sequence on a larger Illumina machine.
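As a toy sketch of that streaming model (read counts and lengths here are invented, not real instrument numbers): with reads arriving one at a time, you can stop consuming as soon as you've hit a coverage target, instead of waiting for a whole batched run to finish.

```python
import random

def nanopore_stream(n_reads=100, seed=0):
    """Hypothetical stand-in for a nanopore run: reads become
    available one at a time while the flow cell is still running."""
    rng = random.Random(seed)
    for i in range(n_reads):
        # long-read lengths vary widely; this range is illustrative only
        yield (f"read_{i}", rng.randint(5_000, 50_000))

def consume_until(reads, target_bases):
    """Stream-style processing: stop as soon as enough bases have
    arrived, rather than waiting for the full batch."""
    total = reads_used = 0
    for _name, length in reads:
        total += length
        reads_used += 1
        if total >= target_bases:
            break
    return reads_used, total

used, total = consume_until(nanopore_stream(), target_bases=200_000)
print(f"hit target after {used} reads ({total} bases)")
```

With Illumina-style batching, the equivalent decision point only comes after the entire run completes and demultiplexes.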

Illumina machines are cost-efficient in terms of cost per basepair, but only at large batch sizes. They are trying to rectify this after seeing other benchtop sequencing machines (Element Biosci) move into the mid-throughput niche and do well. Their solution is the MiSeq i100 that they just announced.

But at the end of the day, these are all still constrained by having to think in terms of multiplexed batches, which involves a lot of operational complexity (equimolar pooling, barcoding, etc.).
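For a sense of what the equimolar pooling step actually computes (the dsDNA molarity conversion is standard; the sample libraries below are invented): each barcoded library is pooled so all of them contribute the same number of moles, so pooling volume scales inversely with molarity.

```python
def molarity_nM(conc_ng_per_ul, mean_fragment_bp):
    """Standard dsDNA conversion: nM = (ng/uL * 1e6) / (bp * 660 g/mol)."""
    return conc_ng_per_ul * 1e6 / (mean_fragment_bp * 660.0)

def equimolar_volumes(libraries, fmol_each=10.0):
    """uL of each library so every one contributes `fmol_each` fmol.
    (1 nM == 1 fmol/uL, so volume = fmol / nM.)"""
    return {name: fmol_each / molarity_nM(conc, frag)
            for name, (conc, frag) in libraries.items()}

# invented example libraries: (concentration ng/uL, mean fragment bp)
libs = {"bc01": (10.0, 500), "bc02": (40.0, 500), "bc03": (10.0, 2000)}
vols = equimolar_volumes(libs)
```

A more concentrated library gets less volume, and a library with longer fragments (fewer molecules per nanogram) gets proportionally more.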

Ultimately, for commercial sequencing labs, one of the more difficult problems to solve is the operational complexity of optimally loading the sequencer for the lowest cost while balancing failure rates/low-coverage rates, rather than the technicalities of DNA prep/library prep. Given an unlimited and consistent intake of samples, the problem gets easier. But most labs have some kind of seasonality or project cycle built in, which means it's not about maximizing a yearly capacity; it's more about how many samples, max, you can pump through within a few days.
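The loading trade-off above can be sketched with a toy cost model (every number here is made up for illustration): a fixed run cost amortized over the batch, a per-sample prep cost, and an expected fraction of samples that fail QC, so cost per successful sample falls with batch size while the wait to fill the batch grows.

```python
def cost_per_good_sample(n, run_cost=1000.0, prep_cost=25.0,
                         failure_rate=0.05):
    """Toy model: amortize the fixed run cost over the batch and
    divide by the expected number of samples that pass QC."""
    expected_passing = n * (1 - failure_rate)
    return (run_cost + n * prep_cost) / expected_passing

def days_to_fill(n, arrivals_per_day):
    """How long you wait before the run can even start at batch size n."""
    return n / arrivals_per_day

# invented scenario: 8 samples arriving per day, three candidate batch sizes
for n in (8, 24, 96):
    print(n, round(cost_per_good_sample(n), 2), days_to_fill(n, 8))
```

The tension is visible immediately: the big batch is cheapest per sample but takes the longest to fill, which is exactly why seasonality and turnaround promises, not raw yearly capacity, drive the decision.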