item 39707017

Learning From DNA: a grand challenge in biology

119 points | ninjha01 | 2 years ago | hazyresearch.stanford.edu

26 comments

[+] ninjha01 | 2 years ago
I built the wrapper/playground [0] linked in the article. Feel free to give feedback here or via the email in my bio.

[0] https://evo.nitro.bio/

[+] timy2shoes | 2 years ago
Hi Nishant. Great work, as always.
[+] jashephe | 2 years ago
I'm a little disappointed that their linked preprint doesn't appear to include any molecular biology; i.e. they don't actually try to synthesize any of their predicted sequences and test function. It wouldn't be an outrageous synthesis task to make some of the CRISPR-Cas sequences they generated.

Also interesting that AlphaMissense is omitted from Figure 2B; it substantially outperforms the ESM-based ESM1b in our hands. But I guess the idea is that this is a general-purpose DNA language model, whereas AlphaMissense is domain-specific for variant effect prediction?

[+] bnprks | 2 years ago
Strong second for wishing they had tried physically testing some model output. A "model that makes outputs AlphaFold thinks look like Cas" is a very different thing from a "model that makes functional Cas variants".

For design tasks like in this paper, I think computational models have a big hill to climb in order to compete with physical high-throughput screening. Most of the time the goal is to get a small number of hits (<10) out of a pool of millions of candidates. At those levels, you need to work in the >99.9% precision regime to have any hope of finding significant hits after multiple-hypothesis correction. I don't think they showed anything near that accurate in the paper.
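The precision arithmetic above can be made concrete with a quick back-of-the-envelope calculation (illustrative numbers, not from the paper; `flagged_precision` is a hypothetical helper):

```python
# Sketch of the screening-precision argument: with a pool of ~1e6
# candidates containing only ~100 true functional variants, a model's
# false positive rate dominates the quality of its flagged shortlist.

def flagged_precision(pool_size, n_true, tpr, fpr):
    """Precision of the flagged set: true hits / all flagged candidates."""
    true_flagged = n_true * tpr
    false_flagged = (pool_size - n_true) * fpr
    return true_flagged / (true_flagged + false_flagged)

pool, hits = 1_000_000, 100

# Even a classifier that looks strong in isolation (90% TPR, 1% FPR)
# flags ~10,000 candidates, of which fewer than 1% are real hits:
p_loose = flagged_precision(pool, hits, tpr=0.90, fpr=0.01)

# To get a mostly-true shortlist of ~10 candidates, the FPR must be
# on the order of hits/pool, i.e. roughly 1e-5:
p_tight = flagged_precision(pool, hits, tpr=0.90, fpr=1e-5)
```

With these assumed numbers, `p_loose` is under 1% while `p_tight` is around 90%, which is why the ">99.9% precision regime" matters before computational design can replace physical screening.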

Maybe we'll get there eventually, but the high-throughput techniques in molecular biology are also getting better at the same time.

[+] ackbar03 | 2 years ago
This should really be a requirement for bio-related generative methods rather than a nice-to-have. A very high percentage of compounds generated by GenAI-type methods have been shown not to work as intended. Anything without wet-lab validation should really be taken with a large grain of salt.
[+] rdmirza | 2 years ago
My immediate thought. Big claims without backing.

Your model makes predictions. Prove they're worth their salt.

[+] d_silin | 2 years ago
Would be interesting to see what comes of it.

As you progress along the chain genomics -> proteomics -> interactomics -> metabolomics, our understanding becomes blurrier and the challenges harder.

[+] pfisherman | 2 years ago
Just gonna leave this here.

https://www.biorxiv.org/content/10.1101/2024.02.29.582810v1

Tl;dr: DNA is NOT all you need.

[+] jhbadger | 2 years ago
I think you are missing what the Evo project is trying to do -- create a new prokaryotic genome through a generative model. This would work like the earlier hand-made synthetic genomes, such as Synthia (Gibson et al., 2010).

In such a system you would take an existing bacterial cell and replace its genome with the newly synthesized version. The proteins and other molecules from the existing cell would remain (before eventually being replaced) and serve to "boot" the new genome.

[+] samuell | 2 years ago
I tend to agree (the cell being in control, and all the 4D interactions and epigenetic mechanisms, etc.), but out of curiosity, what would you say we also need?
[+] t_serpico | 2 years ago
https://onlinelibrary.wiley.com/doi/10.1002/bies.201300153 tl;dr: metabolism is all you need.

While this is potentially interesting work, it is very shortsighted and premature to say this is a "GPT" moment in biology. ML people in bio need to think hard not only about what they are doing, but why they are doing it (other than that it is cool and will lead to a nice Nature publication). Their basic premise (that learning from DNA is the next grand challenge in biology) is shaky. Imo, the grand challenge in biology is determining what the grand challenge is, and that is a deep scientific/philosophical question.

[+] dekhn | 2 years ago
Most of the examples in that paper (a single paper) show that DNA is nearly all you need, with the rest being RNA.
[+] visarga | 2 years ago
DNA is all you need? In the future generative AI will generate You!