symfrog | 1 year ago
Using LLMs to replace work in its entirety seems a stretch of the imagination at this point, unless an academic breakthrough beyond the current approach is discovered, and such breakthroughs typically arrive on an unknown timeline.
I just don't see how companies like Anthropic/OpenAI are drawing these conclusions given the current state.
ben_w | 1 year ago
But the… ah, this is ironic, the anthropic principle applies here:
> From our work, LLMs seem to be nowhere close to being able to independently solve end-to-end business processes
If there was an AI which could do that, your job would no longer exist. Just as with other professions before yours — weavers, potters, computers: https://en.wikipedia.org/wiki/Computer_(occupation) — and there are people complaining that even current LLMs and diffusion models forced them to change career.
> I just don't see how companies like Anthropic/OpenAI are drawing these conclusions given the current state.
If you look at the current public models, you are correct. They're not looking at the current public models.
Look at what people say on this very site — complaining that models have been "lobotomised" (I dislike this analogy, but whatever) "in the name of safety" — and ask yourself: what could these models do before public release?
Look at how long the gap was between the initial GPT-4 training and the completion of the red-teaming and other safety work, and ask yourself what new thing they know about that isn't public knowledge yet.
But also take what you know now about publicly available AI in June 2024, and ask yourself how far back in time you'd have to go for this to seem like unachievable SciFi nonsense — 3 years sounds about right…
… but also, there's no guarantee of any particular schedule for improvements, even setting aside that many of the top AI researchers have signed open letters saying "we want to agree to slow down capabilities research and focus on safety". The AI that can take your job, that can "independently solve end-to-end business processes", may be 20 years away, or it may already exist and be kept under NDA because its creators can't separate good businesses from evil ones any more than cryptographers can separate good secrets from evil ones.
tivert | 1 year ago
> Look at what people say on this very site — complaining that models have been "lobotomised" (I dislike this analogy, but whatever) "in the name of safety" — and ask yourself: what could these models do before public release?
Give politically incorrect answers and cause other kinds of PR problems?
I don't think it's reasonable to take "lobotomised" to mean the models had more general capability before their "lobotomisation", which you seem to be implying.
joak | 1 year ago
More training, more data, more parameters, more compute power... and voilà.
Hard to say... but we've been surprised more than once in machine learning history.
JasonBee | 1 year ago
I don’t think I’m being hyperbolic to say this is a really dangerous trend.
Science and expertise carried these people to their current positions, and then they throw it all away for a cult of personality, as if their personal whims manifested everything their engineers built.