It's really a bummer to see this marketed as 'AI Discovers Something New'. The authors in the actual paper carried out an enormous amount of work, the vast majority of which is relatively standard biochemistry and cell biology - nothing to do with computational techniques. The AlphaFold3 analysis (the AI contribution) literally accounts for a few panels in a supplementary figure - it didn't even help guide their choice of small molecule inhibitors since those were already known. AlphaFold (among other related tools) is absolutely a game changer in structural biology and biophysics, but this is a pretty severe case of AI hype overshadowing the real value of the work.
> The AlphaFold3 analysis (the AI contribution) literally accounts for a few panels in a supplementary figure - it didn't even help guide their choice of small molecule inhibitors since those were already known.
Historically it's "superstar researcher discovers something new" where the superstar researcher actually relies on the research of hordes of grad students and postdocs.
Yes, I do agree that much of the work was done using conventional methods and quite little with AI. The AI model did do the folding, though, which was IMO critical to understanding the structure and seeing the secondary substructure.
The title is clickbaity; it would be useful to stress that AI solves a very specific problem here that is extremely hard to solve otherwise. It is like a Lego piece.
It's helpful when reading these kinds of things to realize what you're reading. This isn't research. It's a press release. The author lists himself as a "Public Information Officer" for UC San Diego. Looking back through his article archives, it appears most, if not all, of the press releases place heavy emphasis on the technology used by the research rather than anything about the research itself.
Was going to say about the same thing. I have some background in biomedical research from a while ago, and I could tell that, at a high level, the main body of the work here is similar to the methodology used in tons of research done many years ago. People have been using various machine learning/deep learning methods for a long time, and this is definitely not as significant as the headline tries to make it, or as people are perceiving it. Not to discount their work, but really, there's not much to see for the average reader on the Internet.
In other words, this is something that happens in the field all the time, most of which would get no attention from people outside the field were it not for the "AI" buzzword in the article.
I think the authors of this article probably sought to highlight the fact that AI is now being used in medical research, rather than credit it with all the work (see "helps unravel" as opposed to "unravels").
Press releases like this are published for the purposes of securing funding. Medical research departments at universities are currently under siege by the federal government. Emphasizing the use of AI is a great way to avoid Elon Musk's search, replace and destroy operation for research funding.
Honestly, the fact that the core discovery still relied so heavily on classic biochemistry and experimental validation actually makes it even more impressive to me.
It’s “AI helps unravel”, not “AI discovers”. And it’s newsworthy, as AI-assisted discoveries are not yet boringly well-known.
I think it’s cool to see, and a good counterpoint to the “AI can’t do anything except generate slop” negativity that seems surprisingly common round here.
> The authors in the actual paper carried out an enormous amount of work, the vast majority of which is relatively standard biochemistry and cell biology - nothing to do with computational techniques.
OK but if the AI did all the non-standard work, then that's even more impressive, no?
> With AI, they could visualize the three-dimensional structure of the PHGDH protein. Within that structure, they discovered that the protein has a substructure that is very similar to a known DNA-binding domain in a class of known transcription factors. The similarity is solely in the structure and not in the protein sequence.
Reminds me of: if you come across a dataset you have no idea of what it is representing, graph it.
Whenever I see the term "AI" or similar, I mentally substitute the phrase "a lot of math, done very quickly", which is more concrete, and typically helps me sort out the stuff that still seems plausible, as in the sentence you quoted.
Tying this to APOE, specifically e4, which has an increased requirement for choline: when choline levels are low, there can be a metabolic push that leads to elevated PHGDH activity and, consequently, increased serine synthesis. That is a neat connection, and maybe why we see positive results when we study choline supplements.
That is super interesting, as is the relationship between choline and sleep, with restorative sleep function, and specifically slow-wave activity, considered to be a significant driver of AD.
> In conclusion, our findings suggest that moderate dietary choline intake, ranging from 332.89 mg/d to 353.93 mg/d, is associated with lower odds of dementia and better cognitive performance.
Gemini tells me that amounts to ~850mg of alpha GPC or ~1900mg of citicoline. Eggs it is then.
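The conversion is easy to sanity-check by hand, assuming alpha-GPC is roughly 40% choline by mass and citicoline roughly 18.5% (approximate figures from typical supplement labeling — they are not from the study, so treat them as ballpark):

```python
# Back-of-envelope check of the supplement equivalents. The choline
# fractions below are approximate assumptions, not from the paper:
#   alpha-GPC  ~40% choline by mass
#   citicoline ~18.5% choline by mass
target_mg = (332.89 + 353.93) / 2  # midpoint of the quoted dietary range

alpha_gpc_mg = target_mg / 0.40
citicoline_mg = target_mg / 0.185

print(f"target choline:  ~{target_mg:.0f} mg/day")
print(f"alpha-GPC dose:  ~{alpha_gpc_mg:.0f} mg")
print(f"citicoline dose: ~{citicoline_mg:.0f} mg")
```

Which lands close to the ~850 mg / ~1900 mg figures quoted above.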
I've always believed that the AI/LLM/ML hysteria is misapplied to software engineering... software engineering just happens to be a field adjacent to AI, not one that can apply it very well.
Medicine and law, OTOH, suffer heavily from a fractal volume of data and a dearth of experts who can deal with the tedium of applying an expert eye to that much data. Imagine we start capturing ultrasounds and chest X-rays en masse, or giving legal advice to those who need help. LLMs/ML are more likely to get this right than writing computer code.
Somehow, LLMs always seem to be "more likely to get this right" for fields other than one's own (I suppose, this being HN). The term "Andy Grove Fallacy" coined by Derek Lowe (whose articles are frequently posted here, the term being referenced in a recent piece[1]) comes to mind...
(My spouse was an ultrasound tech for many years.)
The problem with an example like ultrasound is that it's not a passive modality - you don't just take a sweep and then analyze it later. The tech is taking shots and adjusting as they go along to see things better in real time. There's all sorts of stuff potentially in the way, often bowel and bones, and you have to work around all that to see what you need to.
A lot of the job is actively analyzing what you're seeing while you're scanning and then going for better shots of the things you see, and the experience and expertise needed to get the shots are the same skills required to analyze the images and know what shots to get. It's not just a matter of waving a wand around and then having the rad look at it later.
When AI writes nonsensical code, it's a problem, but not a huge one. But when ChatGPT hallucinates while giving you legal/medical advice, there are tangible, severe consequences.
Unless there's going to be a huge reduction in hallucinations, I absolutely don't see LLMs replacing doctors or lawyers.
100% agree "chat bots" will not be a revolutionary technology, but other uses of the underlying technology will be. General robotics, pharmaceuticals, new matter… and eventually first-line medicine and law, sure, but I sure don't want doctors to vibe-diagnose me, or lawmakers to vibe-legislate.
LLMs have a history of fabricating laws and precedents when acting as a lawyer. Any advice from the LLM would likely be worse than just assuming something sensible, as that is more likely to reflect what the law is than what the LLM hallucinates it to be. Medicine is in many ways similar.
As for your suggestion to capture and analyze ultrasounds and X-rays en masse, that would be malpractice even if it were performed by an actual doctor instead of an AI. We don't know the base rate of many benign conditions, except that they are always higher than we expect. The additional images are highly likely to show conditions that could be either benign or dangerous, and additional procedures (such as biopsies) would be needed to determine which it is. This would create additional anxiety in patients from the possible diagnosis and further pain and possible complications from the additional procedures.
While you could argue for taking these images and not acting on them, you would either tell the patients the results and leave them worried about what the discovered masses are (so they likely will have the procedures anyway) or you won't tell them (which has ethical implications). Good luck getting that past the institutional review board.
It's good to see them classifying this as for "late-onset Alzheimer's".
There is a theory that Alzheimer's as we currently understand it, is not one disease, but multiple diseases that are lumped into one category because we don't have an adequate test.
This is also where some of the controversy surrounding the Amyloid hypothesis comes from.
This is a strong argument for universal healthcare. If we had universal healthcare in the USA, we'd have to have a common charting protocol and a medical chart exchange.
One thing that AI/ML is really good at is taking very large datasets and finding correlations that you wouldn't otherwise. If everyone's medical chart were in one place, you could find things like "four years before presenting symptoms of pancreatic cancer, patients complain of increased nosebleeds", or things like that.
Of course we don't need universal healthcare to have a chart exchange, and the privacy issues are certainly something that needs consideration.
But the point is, I suspect we could find cures and leading indicators for a lot of diseases if everyone's medical records were available for analysis.
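The kind of signal mining described above can be sketched in a few lines. This is a toy illustration on entirely synthetic records — the symptom name, rates, and cohort size are all made up — showing how pooled charts would let you measure whether an early complaint predicts a later diagnosis:

```python
# Toy sketch: given pooled (synthetic!) patient records, check whether an
# early complaint is associated with a later diagnosis. All names and
# probabilities here are invented for illustration.
import random

random.seed(0)

def make_patient():
    diagnosed = random.random() < 0.05             # eventually diagnosed
    symptom_rate = 0.40 if diagnosed else 0.10     # early-complaint rate
    return {"early_symptom": random.random() < symptom_rate,
            "diagnosed": diagnosed}

records = [make_patient() for _ in range(100_000)]

def diagnosis_rate(group):
    return sum(p["diagnosed"] for p in group) / len(group)

with_symptom = [p for p in records if p["early_symptom"]]
without_symptom = [p for p in records if not p["early_symptom"]]

rr = diagnosis_rate(with_symptom) / diagnosis_rate(without_symptom)
print(f"relative risk of later diagnosis given the early complaint: {rr:.1f}x")
```

With real charts the hard parts are, of course, data quality, confounders, and multiple-comparison corrections, not the arithmetic.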
Because there's AI as in "letting ChatGPT do the hard bits of programming or writing for me", for which it is woefully unsuited, and there's AI as in using machine learning as a statistical approach, which it fundamentally is. It's something you can pour data into and let the machine find how the data clump together, so you can investigate potential causative relationships the Mark I eyeball might have missed.
I'm excited for the possibilities these uses of AI might bring.
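The "let the machine find how the data clump together" idea is exactly what a clustering algorithm does. A minimal sketch, using a stdlib-only k-means on two synthetic 2-D clumps (the data and the deterministic seeding from two data points are illustrative choices, not any particular library's API):

```python
# Minimal k-means: pour points in, get cluster centers out. The algorithm
# is told nothing about the two clumps the data was generated from.
import math
import random

random.seed(1)

# Two synthetic clumps: one around (0, 0), one around (5, 5).
points = ([(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(100)]
          + [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(100)])

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def kmeans(data, centers, iters=20):
    """Lloyd's algorithm: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in data:
            nearest = min(range(len(centers)), key=lambda i: dist(p, centers[i]))
            groups[nearest].append(p)
        centers = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

# Seed with two data points (first and last, for determinism in this demo;
# real k-means implementations pick starting centers randomly).
centers = kmeans(points, centers=[points[0], points[-1]])
print(sorted(centers))
```

The recovered centers land near the two clump means, which is the "clumps you can then investigate for causative relationships" step.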
I wonder if they used the output of AlphaFold? Remember that DeepMind published the 3D structures of hundreds of millions of proteins for FREE. Imagine if they had walled off that data behind an Elsevier-like subscription wall. They should credit DeepMind at least.
This article is trashy trash trash. The only mention of AI in the actual paper is that they used ChatGPT for grammar correction. The article doesn't explain what or how AI was used beyond "three dimensional modeling".
A paper author is quoted on the use of AI. But without explaining precisely how AI was used and why it was valuable, this article is basically clickbait trash. Was AI necessary for their key result? If so, how and why? We don't know!
Everything about this screams "just say AI and we'll get more attention".
That's not true, the paper used AlphaFold 3. The disclaimer is about generative AI, not AI writ large.
I agree the UCSD writeup is pretty misleading; the authors used protein-modeling software, which is really not very interesting, and the fact that the SOTA protein modeler uses machine learning is not at all relevant to this specific paper.
This piece of the puzzle, and its finding, if confirmed, is very neat. But I think we are barking up the wrong tree, because senescence is inherently chaotic. Sometimes we identify a disease with a set of common symptoms because there are many alternative causes that lead to those very symptoms. It's like "convergent symptoms", so to speak.
If I had any funding to work freely in these subjects, I would instead focus on the more fundamental questions of computationally mapping and reversing cellular senescence, starting with something tiny and trivial (but perhaps not tiny nor trivial enough) like a rotifer. My focus wouldn't be the biologists' "we want to understand this rotifer" or "we want to understand senescence", but more "can we create an exact computational framework to map senescence, a framework which can be extended and applied to other organisms?"
Sadly, funding for science is a lost cause, because even where/when it is available, it comes with all sorts of political and ideological chains.
If AI causes us humans to work out our brains less, maybe it is also causing Alzheimer's. In the words of Homer Simpson: "To alcohol! The cause of--and solution to--all life's problems."
I notice that I have a form of Gell-Mann amnesia for this sort of thing. Do we need a new term, or does that cover it?
Because I find myself nodding along with optimism, having two grandfathers that died from this disease. It’d be great if something could sift through all the data and come up with a novel solution.
Then I remember that this is the same technology that eagerly tries to autocomplete every other line of my code to include two nonexistent variables and a nonexistent function.
I hope this field has some good people to sanity check this stuff.
mk89 | 10 months ago:
A few days ago I read an interesting link here on HN showing that more than 70% of VC funding goes straight to "AI"-related products.
This thing is affecting all of us one way or another...
trott | 10 months ago:
(Disclaimer: I'm the author of a competing approach)
Searching for new small-molecule inhibitors requires going through millions of novel compounds. But AlphaFold3 was evaluated on a dataset that tends to be repetitive: https://olegtrott.substack.com/p/are-alphafolds-new-results-...
nonameiguess | 10 months ago:
Go to the current very last page and he's hyping up nanotech in 2015, which, as far as I'm aware, didn't end up panning out or really going anywhere. https://today.ucsd.edu/archives/author/Liezel_Labios/P260
AdventureMouse | 10 months ago:
How many people would have read the article if it didn’t mention AI?
yieldcrv | 10 months ago:
> *These authors contributed equally
So your position is satisfied by listing an AI amongst those authors?
jamesrcole | 10 months ago:
> It's really a bummer to see this marketed as 'AI Discovers Something New'.
The headline doesn't suggest that. It's "AI Helps Unravel", and that seems a fair and accurate claim.
And that's true for the body of the article, too.
mobilejdral | 10 months ago:
https://www.sciencedirect.com/science/article/pii/S000291652...
pedalpete | 10 months ago:
https://www.jarlife.net/3844-choline-sleep-disturbances-and-...
merksittich | 10 months ago:
[1] https://www.science.org/content/blog-post/end-disease
xyst | 10 months ago:
"AI" in this case was used to generate a 3D model of a protein. Literally, something you can grab from Wikipedia — https://en.m.wikipedia.org/wiki/Phosphoglycerate_dehydrogena...
The underlying work performed by the researchers is much more interesting — https://linkinghub.elsevier.com/retrieve/pii/S00928674250039...
They identified a possible upstream pathway that could help treat disease and build therapeutic treatments for Alzheimer’s.
I don’t know about you all, but I’m tired of the AI mania. At least the author didn’t put "blockchain" in the article.
dudeinjapan | 10 months ago:
https://www.youtube.com/watch?v=SXyrYMxa-VI
devmor | 10 months ago:
It’s a nice reprieve from “we’re using a chatbot as a therapist and it started telling people to kill themselves” type news.