
The Heroes of CRISPR

95 points | jseliger | 10 years ago | cell.com | reply

35 comments

[+] a_bonobo|10 years ago|reply
Comments on PubPeer are very negative about the undisclosed conflict of interest of OP's author:

https://pubpeer.com/publications/D400145518C0A557E9A79F7BB20...

[+] geyang|10 years ago|reply
Thanks for this reference!
[+] nsns|10 years ago|reply
Actually, for a layman, it's really nice to see a survey of the field accompanied, as it is here, by its underlying real-life politics, economics and other human factors; not something easily accessible normally.
[+] mortenjorck|10 years ago|reply

  Finally, the narrative underscores that scientific 
  breakthroughs are rarely eureka moments. They are typically 
  ensemble acts, played out over a decade or more, in which 
  the cast becomes part of something greater than what any 
  one of them could do alone.
An excellent closing paragraph. Even if the majority of the technical concepts in this paper were beyond my grasp, I did take this much away from it.
[+] perugolate|10 years ago|reply
This is as partisan as inviting Emperor Palpatine to write a historical perspective filling in the backstory of the fall of the Republic and the rise of the Empire.
[+] jallmann|10 years ago|reply
Really great article, especially for someone like me who only has a layman's understanding of the biology involved. Thinking out loud:

In computer science terms, the method scientists use to discover and characterize genes (CRISPR, etc) seems akin to studying the assembly of a program, occasionally splicing chunks of assembly into other programs, and observing changes in the output after running. This sounds like a huge slog, and it is amazing that the method works.

One interesting thing from the article was that, even after determining the function of CRISPR, scientists still had difficulty in understanding the mechanical/chemical means of its behavior: the gene is a black box, and its expression can only be deduced when observing its effects after running.

Have there been attempts to characterize genes on a more basic level, eg by modelling how a given nucleotide sequence encodes a protein, and deducing its function from there? Is our understanding of protein folding still inadequate? Or would reconstructing a protein's physical structure still not give us enough insight into its intended function?

[+] reasonattlm|10 years ago|reply
Evolution produces promiscuous reuse of component parts, and everything is linked to everything else: a cell is a bag of interacting feedback loops in solution. At a detail level, researchers still only have a sketch of the high points of cellular biochemistry. Any particular protein may have numerous roles, and scientists continue to uncover new important roles for even very well known proteins, those studied heavily for decades in some cases.

Modeling is prevalent and helps. Deducing function with any accuracy requires much better and more comprehensive models of cellular biochemistry, however. Those models lie decades of work from here, meaning that altering genes and watching the outcome in mammalian cells and individuals will be the primary mode of exploration and validation for a while yet.

[+] noname123|10 years ago|reply
>This sounds like a huge slog, and it is amazing that the method works.

Thanks to programming, a lot of this is automated.

In biology, you can use programmable robotic arms that perform thousands of experiments (high-throughput screening). So for CRISPR, suppose you want to test a thousand genetic variants, previously identified computationally, for a particular cancer; you can take a cancer cell line, apply gene editing targeted at each of those sites, one per slot in a 96-well plate, and observe the cancer cells' tumor growth. These 96-well plates are then fed into an image-analysis program to quantify the tumor growth.
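The readout step of such a screen is, at heart, simple image quantification. A toy, purely illustrative sketch (the plate data, threshold, and hit criterion are all made up):

```python
import random

random.seed(0)

# Mock readouts for a 96-well plate: one guide RNA per well, and a fake
# "image" per well, i.e., a list of pixel intensities (brighter = more
# tumor cells). Plate layout and all numbers here are invented.
wells = {f"{row}{col}": [random.random() for _ in range(256)]
         for row in "ABCDEFGH" for col in range(1, 13)}

def growth_score(pixels, threshold=0.5):
    """Fraction of pixels above an intensity threshold: a crude
    stand-in for 'how much did the tumor grow in this well'."""
    bright = sum(1 for p in pixels if p > threshold)
    return bright / len(pixels)

scores = {well: growth_score(img) for well, img in wells.items()}

# In a real screen, wells whose guide suppressed growth relative to the
# plate median would be the candidate hits to follow up on.
median = sorted(scores.values())[len(scores) // 2]
hits = sorted(w for w, s in scores.items() if s < 0.8 * median)
print(f"{len(hits)} candidate hits out of {len(scores)} wells")
```

Real pipelines (e.g. CellProfiler) do cell segmentation rather than raw pixel thresholding, but the shape of the computation is the same: image in, per-well number out, rank the wells.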

>Is our understanding of protein folding still inadequate? Or would reconstructing a protein's physical structure still not give us enough insight into its intended function?

The field that simulates protein-to-protein interactions is called MD (molecular dynamics) simulation. The issue is that the simulation is really complex. You start at the molecular level, asking whether one protein can attach to another protein's end like Legos; then you have to account for the individual cellular level, called cell circuits (modeling an individual cell like a logic circuit, with different genes producing proteins that regulate one another); then you have to account for multicellular interactions (how cells influence one another).

Instead of trying to simulate everything, you have people comparing and studying things at different levels.
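The "cell circuit" idea, at least in its simplest Boolean form, is easy to make concrete. A toy sketch (the gene names and regulatory rules are invented, not a real pathway):

```python
# A toy Boolean gene-regulatory network: each gene is ON/OFF, and its
# next state is a logic function of the current states. Invented rules:
# geneC represses geneA, geneA activates geneB, and geneA together with
# geneB activates geneC.
rules = {
    "geneA": lambda s: not s["geneC"],
    "geneB": lambda s: s["geneA"],
    "geneC": lambda s: s["geneA"] and s["geneB"],
}

def step(state):
    """Advance the circuit one synchronous tick: every rule reads the
    old state, so updates happen 'simultaneously'."""
    return {gene: bool(rule(state)) for gene, rule in rules.items()}

state = {"geneA": True, "geneB": False, "geneC": False}
for _ in range(6):
    state = step(state)
    print(state)
```

Even this three-gene toy settles into an oscillation rather than a fixed point, which hints at why real networks with thousands of interacting genes resist simulation from first principles.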

[+] dekhn|10 years ago|reply
Forward prediction of protein function from its physical structure is still considered impossibly hard if done de novo, i.e., assuming no external knowledge. In principle, one could simulate proteins and extract this kind of information, but it's not computationally accessible.

Folding the protein is only one part of it. This part, while not "solved", has seen an enormous amount of progress in the past few years, such that we can often predict the coarse-grained 3D structure of a protein, although not the fine details in most cases.

Understanding the enzymatic action of proteins typically requires simulating chemistry at the quantum (bonds breaking and forming) level around the active site.

Since biologists are more interested in getting results quickly than in solving the fundamentally hard problems (the former gets grant money more easily), and because similar structures tend to have similar functions, they tend to use structural homology (similarity to a protein of known function) to infer the function of a protein.

Although much of my prior work was designed to address the forward prediction problem, I have come to acknowledge that shortcuts produce more valuable data. And oftentimes, that does involve finding a gene whose protein product's behavior is cryptic, and then using structural homology and other indirect methods to refine the function of the protein.

As for the "computer science terms": after working with large distributed systems for a long time, I treat debugging them a lot more like how biologists deal with cryptic proteins than trying to understand them from first principles. I often run "experiments" by injecting things into the distributed systems and monitoring them, much like scientists monitor proteins using fluorescence.

[+] sjg007|10 years ago|reply
Yep. It's basically debugging assembly code and also designing black and white box tests to figure out how the code works.
[+] reasonattlm|10 years ago|reply
So it is today technically possible to run a gene therapy in adult humans via CRISPR and some recent innovations in vectors with a good expectation that tissue coverage is going to be what you want it to be - i.e. enough cells take up changes for it to work. (See for example http://today.duke.edu/2015/12/crisprmousedmd for the good tissue coverage angle).

This is going to be next year's medical tourism, analogous to the progression of stem cell therapies from the turn of the century.

There are a few really obvious candidates, such as myostatin and follistatin to greatly increase muscle growth - and at least one person who has had that done already, c.f. BioViva. To do this all you need are the connections. Buy into a biotech startup company, make the arrangement with a lab in Mexico or Thailand and off you go. If you knew the right people, you could be on a plane tomorrow and the beneficiary of elective gene therapy the day afterwards.

It can't be overstated how easy CRISPR makes this. It is easier even than induced pluripotency, and that spread like wildfire through the labs when it emerged.

The more interesting thing to me is that there are probably a hundred less obvious, poorly studied, poorly followed-up, but very interesting genetic alterations that could be done, and probably will be done largely outside the institutions of medical research. This is what happens when cost falls. Any single gene is fair game now. Want great resistance to ischemia/reperfusion injuries? Knock out PHD1 [1]. Want your aged liver to function as well as it did when you were young? Add more lysosomal receptors [2]. Want the benefits of fasting and exercise permanently switched on, in the form of upregulated autophagy? Increase AMPK levels [3]. And so on and so forth through scores of genes. Maybe many of them will work as the studies suggest, maybe not.

[1]: http://www.vib.be/en/news/Pages/-VIB-researchers-discover-po...

[2]: http://www.nytimes.com/2009/10/06/science/06cell.html

[3]: http://www.salk.edu/news-release/how-the-cells-power-station...

[+] ericb|10 years ago|reply
It seems like one thing that CRISPR brings to the table is find and replace, or find and delete.

As a layman, this would seem ideal for treating any sort of cancer... Just find the distinct sequences and delete?

What bad news do I not (yet) know that makes it much tougher than I'm hoping?

[+] a_bonobo|10 years ago|reply
Sadly, cancer goes way beyond some broken DNA. In cancer cells you have polyploidy (several chromosome copies); these copies break up in weird ways, so that some chromosomes are joined to others, plus other random mixing. You can't fix that using CRISPR.

You could, however, identify regions of DNA which make your cells more susceptible to the transformation. But CRISPR is still more unstable than the OP makes you think: in the highly debated human CRISPR experiment (using an older protocol), the scientists started with 86 embryos, of which 71 survived, of which only "a fraction" was successfully transformed. In another fraction of those, the proper target was hit, but only in mosaics (with non-transformed cells remaining); in the rest of the successfully transformed cells, the new DNA was inserted in the wrong position (potentially creating a new disease). See Carl Zimmer's take here: http://phenomena.nationalgeographic.com/2015/04/22/editing-h... There's a lot of hype, which IMHO damages the technology; it's much more finicky than it's usually presented.

[+] Q6T46nT668w6i3m|10 years ago|reply
Re: bad news --- a "find and replace" feature where unmatched tokens were replaced with unusual values.
[+] peter303|10 years ago|reply
I guess with all these contributions, where do you draw the line for the inevitable Nobel Prize? When these structures were first discovered? Or when someone could use them for gene editing?
[+] skosuri|10 years ago|reply
Possibly both. Either way, tons of people always get left out.
[+] Gatsky|10 years ago|reply
First of all, why is Lander writing this? What qualifies him to reconstruct the history of CRISPR? He doesn't do CRISPR research. Historical reviews are always written by the people who actually did the research, because A) who else really knows what happened, and B) we are interested in their perspective. So what exactly is Lander's qualification? That he's an important person who knows what DNA is?

The article itself also absolutely stinks. First, an ad hoc justification of why this article is important, consisting of nothing more than platitudes about science; then evocative descriptions of the weather in Santa Pola; then progressively briefer descriptions of other scientists' work, followed by a long section at the end which could be a slightly edited version of the closing remarks from the Broad Institute's patent attorney in their dispute against the other CRISPR developers.

This in particular, stinks:

"The history also illustrates the growing role in biology of “hypothesis-free” discovery based on big data. The discovery of the CRISPR loci, their biological function, and the tracrRNA all emerged not from wet-bench experiments but from open-ended bioinformatic exploration of large-scale, often public, genomic datasets. “Hypothesis-driven” science of course remains essential, but the 21st century will see an increasing partnership between these two approaches."

What 'growing role'? Biology has always been largely hypothesis-free. Which hypothesis was being tested when DNA was discovered? They were trying to fit a model to the data, not testing out their ideas about how DNA should look. Penicillin was discovered by accident, where was the hypothesis there? And the importance of bioinformatic exploration? How does this: "Using his word processor, Mojica painstakingly extracted each spacer and inserted it into the BLAST program to search for similarity with any other known DNA sequence" support the importance of bioinformatics? Manually pasting 4500 sequences into BLAST? Surely this is an example of progress DESPITE any substantial bioinformatics.
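For what it's worth, the spacer-extraction step quoted there is conceptually trivial to script. A toy sketch (the repeat and locus sequences below are invented and far shorter than real ~30 bp CRISPR repeats, which are also only near-identical, so real code would need fuzzy matching):

```python
# In a CRISPR array, near-identical repeats alternate with unique
# spacers, so given the repeat sequence, the spacers are just the
# chunks between repeats. Toy sequences, not real biology.
REPEAT = "GTTTTAGAG"
locus = ("GTTTTAGAG" "ACGTACGT"
         "GTTTTAGAG" "TTGGCCAATT"
         "GTTTTAGAG" "CCCGGGAAA"
         "GTTTTAGAG")

def extract_spacers(locus, repeat):
    """Split a CRISPR locus on its repeat; non-empty chunks are spacers."""
    return [chunk for chunk in locus.split(repeat) if chunk]

spacers = extract_spacers(locus, REPEAT)
print(spacers)  # each spacer would then be BLASTed against known sequences
```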

I can't see why this was written except to further the agenda of the author and his institute, and to color history to support their legal claims. I can't see how this was published except through nepotism, eminence-based science, and the kind of shameful arrogance and entitlement that only academics have the luxury to cultivate. Maybe Lander thought he was setting the record straight, but that isn't his privilege.

[+] skosuri|10 years ago|reply
Wow, this is a lot of anger. I for one appreciated the viewpoint (taken with an eye that it is his view). Anyways, for what it's worth, Lander's lab has done a bunch with CRISPR, so your initial premise is flat-out wrong [e.g., http://www.ncbi.nlm.nih.gov/m/pubmed/24336569/ ]. Second, I remember when they figured out that they were targeting phage. I was working with some cyanobacteria that had a bunch of CRISPR loci as well. I distinctly remember it being one of the first times that a whole new biology came in large part from informatics. (Another being lncRNA ultraconserved elements.) Anyways, I really don't get the viciousness of this response.
[+] noname123|10 years ago|reply
>Biology has always been largely hypothesis-free.

I think Dr. Lander is talking about having open infrastructures and databases cataloging human genome variations that allow everyone to explore the datasets and find new patterns.

You are right that searching through BLAST is easy. However, maintaining and updating entries in the NCBI BLAST databases is not. Genomes are sequenced, assembled, and submitted to NCBI every day. Cataloguing the variations in different strains of different species, and distinguishing real novel evolutionary variations from mere noise relative to existing reference genomes, is not an easy task (1000 Genomes, dbSNP, the malaria genome network, etc.).

It is analogous to saying that Googling is easy but building and maintaining a search engine is not. I get what you are saying, that some important discoveries are made by accident, but the open data and intuitive platforms need to be there for the inspirational late-night "shower thought" googling-that-leads-to-insight to happen in the first place.

[+] Moshe_Silnorin|10 years ago|reply
The patent thicket surrounding it is looking quite daunting, which I find mildly enraging. However, I have mixed feelings about biotechnology. Weaponized biotechnology looks to be getting disturbingly cheap; maybe these absurd government-granted monopolies will do some good in slowing this down.
[+] api|10 years ago|reply
Terrorists aren't going to balk at patent infringement.
[+] onetimePete|10 years ago|reply
We weakened encryption to get at the terrorists, and now they've got their own.