top | item 36072783


mcdonji | 2 years ago

I think the effort in the testing of the thousands of drugs was to help create the AI model. Then they used that model and it seems to have identified a promising antibiotic. The next time they go through the process they would not need to train the model again right? So get another big list of chemicals and run them through the model.


geph2021|2 years ago

   I think the effort in the testing of the thousands of drugs was to help create the AI model.
This gets to the crux of my skepticism around the big claims about the pace of AI advancement. At a fundamental level, the upper limit of AI advancement in any area is "the speed of information". For some areas, like pharmaceutical/drug development, the information comes from real-world human/biological processes (e.g. clinical drug trials), which take time. At the extreme, the outcomes of interest could be long-term (i.e. years or decades). AI surely advances analytic capabilities, but ultimately models can only be developed or refined with new data/information, which arrives at a rate that may be independent of computational speeds. AI models that are highly predictive and valuable by definition necessitate a feedback loop tied back to real-world outcomes/timescales.

I'm no expert on AI, but I get the sense that the exponential improvements that many believe will lead to the singularity may in fact reach an inflection point where the curve flattens out, becoming linear or asymptotic, as the rate of improvement is governed by the rate of new information in the real world.

agentofoblivion|2 years ago

You hit the nail on the head, and I train transformers for a living. This pervasive axiom that intelligence can just scale exponentially at a rapid pace is rarely questioned or even stated as an assumption. It's far from clear that this is possible, and what you've outlined is a plausible alternative.

ajuc|2 years ago

It's possible that no new information is needed, just better analysis.

chrisco255|2 years ago

Even for existing information, there remains an enormous amount of contextual / cultural / insider knowledge about the world that is not documented in any form an AI can digest.

YeGoblynQueenne|2 years ago

Reading the study's abstract, there will not be a "next time", because the dataset they created and the model they trained were specific to Acinetobacter baumannii:

Here we screened ~7,500 molecules for those that inhibited the growth of A. baumannii in vitro. We trained a neural network with this growth inhibition dataset and performed in silico predictions for structurally new molecules with activity against A. baumannii. Through this approach, we discovered abaucin, an antibacterial compound with narrow-spectrum activity against A. baumannii.

https://www.nature.com/articles/s41589-023-01349-8
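The screen-train-predict loop the abstract describes can be sketched in a few lines. This is a toy illustration, not the paper's actual pipeline: real work featurizes molecules with chemical fingerprints and uses measured growth-inhibition labels, whereas here the fingerprints, labels, library sizes, and model hyperparameters are all made up for the sketch.

```python
# Hypothetical sketch of the screen -> train -> predict-in-silico loop.
# All data here is synthetic; real pipelines would use chemical fingerprints
# (e.g. Morgan bits) and wet-lab growth-inhibition measurements.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Step 1: the wet-lab screen -- ~7,500 molecules, each a binary fingerprint,
# labeled by whether they inhibited growth (toy rule stands in for assay data).
X_screen = rng.integers(0, 2, size=(7500, 128)).astype(float)
y_screen = (X_screen[:, :8].sum(axis=1) > 4).astype(int)

# Step 2: train a neural network on the growth-inhibition dataset.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
model.fit(X_screen, y_screen)

# Step 3: predict in silico over a much larger, untested chemical library,
# then rank and keep only the top candidates for follow-up lab testing.
X_library = rng.integers(0, 2, size=(50000, 128)).astype(float)
scores = model.predict_proba(X_library)[:, 1]
top_candidates = np.argsort(scores)[::-1][:100]
```

The point of the pattern is the triage in step 3: the model never replaces the lab, it just shrinks the list that goes back to it, and steps 1-2 are what tie the whole thing to one organism's assay.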

Not only were both the dataset they created, and the model they trained on it, specific to one organism, the drug they discovered also only works on that one organism ("narrow spectrum activity against A. baumannii"). If they wanted to discover drugs that work on other organisms, like Staphylococcus aureus and Pseudomonas aeruginosa that the BBC article mentions, they'd have to start all over again.

So, not an approach that looks very practical at this time. Maybe in the future, when the sample efficiency and generalisation ability of neural nets have significantly improved, it will be useful in practice.


yyyk|2 years ago

>was specific to Acinetobacter baumannii

We can reasonably expect the bacteria to mutate against the new antibiotic if/once it's used. It's one shifty opponent. This may make the model obsolete, but maybe not - there'd be cause to try the model again. Actually, it would have been preferable to get more than one result at first...

[EDIT: Then again, would they have another candidate list? This model doesn't do toxicology. The first list was built from existing proven-safe meds. Do they have another couple thousand compounds good to go? If not, they won't be able to run a second time.]

carlmr|2 years ago

Counterpoint: the specialization training seems to have been fast enough to be worthwhile. They might need to find multiple antibiotics for the same organism, and a narrow antibiotic might be really good because it doesn't mess with your gut bacteria as much as something that broadly destroys everything.

If we had a model that could predict the next working antibiotic for MRSA that would be amazing. And you'd probably need it multiple times as MRSA keeps evolving new defenses.

Even narrowing down the list of substances to test by 10x is amazing.

This looks very practical to me.

rowanG077|2 years ago

Reading "in silico" never gets old for me.