austinkhale | 1 year ago
The arguments are essentially:
1. The technology has plateaued, not in reality, but in the perception of the average layperson over the last two years.
2. Sam _only_ has a record as a deal maker, not a physicist.
3. AI can sometimes do bad things & uses a lot of energy.
I normally really enjoy The Atlantic since its writers at least try to include context & nuance. This piece does neither.
BearOso | 1 year ago
It's like fossil fuels: they took millions of years to form and only a couple of centuries to consume, and we can't just create more.
Another problem is that the data sets are becoming contaminated, creating a reinforcement cycle that makes LLMs trained on more recent data worse.
My take is that it won't get much better with this method of just brute-forcing data into a model, which is what everyone has been doing. There need to be significant scientific innovations, but all anybody is doing is throwing money at copying the major players and applying some distinguishing flavor.
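The "reinforcement cycle" described above (often called model collapse) can be sketched with a toy simulation. Everything here is a hypothetical illustration, not how any real LLM works: the "model" is just a token-frequency table, and each generation is trained only on the previous generation's output.

```python
import random
from collections import Counter

# Toy model-collapse sketch (hypothetical): "training" estimates token
# frequencies from a corpus; "generation" samples a new corpus from those
# frequencies. A token missing from one generation's corpus gets probability
# zero and can never reappear, so the known vocabulary only shrinks.

def train(corpus):
    counts = Counter(corpus)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, n, rng):
    toks = list(model)
    weights = [model[t] for t in toks]
    return rng.choices(toks, weights=weights, k=n)

rng = random.Random(0)
corpus = list(range(50)) * 2          # generation 0: 50 distinct "tokens"
vocab_sizes = [len(set(corpus))]
for _ in range(20):
    model = train(corpus)
    corpus = generate(model, 100, rng)  # next generation trains on model output
    vocab_sizes.append(len(set(corpus)))

print(vocab_sizes[0], "->", vocab_sizes[-1])
```

Because sampling only draws from the previous generation's support, the distinct-token count is monotonically non-increasing, which is the one-way "contamination" direction the comment is pointing at.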
theptip | 1 year ago
Benchmark performance continues to improve (see OpenAI's o1).
The claim that there is nothing left to train on is objectively false. The big guys are building synthetic training sets, moving to multimodal, and are not worried about running out of data.
o1 shows that you can also throw more inference-time compute at problems to improve performance, so it gives another dimension along which to scale models.
senko | 1 year ago
Imagine not going to school and instead learning everything from random blog posts or Reddit comments. You could do it if you read a lot, but it's clearly suboptimal.
That's why OpenAI, and probably every other serious AI company, is investing huge amounts in generating (proprietary) datasets.
croes | 1 year ago
Our problem isn't technology, it's humans.
Unless he's suggesting mass indoctrination via AI, AI won't fix anything.