tossandthrow | 5 days ago
Yes, AI is still not good in the grand scheme of things. But everybody actively using it has grown concerned over the past two months at the leapfrogging of LLMs - and surprised, as they thought we had reached a plateau.
We will see in a year or two whether humans still hold an advantage in research - currently very few hold one in software development, despite what they think of themselves.
lioeters | 5 days ago
The other side of the coin is: automating science as a machine activity.
Is that what we want? I agree with you that the use of language models in science is an inevitable paradigm shift, but now is the time to make collective decisions about how we're going to assimilate this increasingly super-human "intelligence" into academic practices, and the rest of daily life. Otherwise we will be the ones being assimilated by a force beyond our control.
Progress is so rapid that the only people who might have control over the process are those with self-interest, mainly financial, that is not aligned with - and in some respects opposed to - the interests of humanity.
tossandthrow | 5 days ago
Only if there are some very fundamental and convincing arguments that have not yet been uncovered.
We can't protect science while letting services like medical care remain too expensive for people to access.
That would introduce new social classes: people who do science get unnecessary protection, while everybody else does not.
That is not going to fly.