top | item 39311560

aik | 2 years ago

1. We know they already have more big breakthroughs that have not been released.

2. We know the current tech can keep scaling. They have not hit a limit with the current approach yet.

Given that GPT-4 is already ridiculously useful and we’ve barely scratched the surface, it makes complete sense to me. More capacity plus faster GPT responses unlocks a massive number of additional use cases.

azinman2 | 2 years ago

Does it make sense to commit 1% of the world’s total money? I don’t think so…

No one is even talking about these kinds of figures being spent on climate change, which is a far more pressing problem.

HeatrayEnjoyer | 2 years ago

Climate change disaster timelines are longer than AI timelines.

Geometric growth doesn't feel like much until you slam into the wall.

__loam | 2 years ago

I think you are overestimating its usefulness and underestimating how much the surface has been metaphorically breached.

Where's the killer app? The only one I can think of offhand is Copilot, and the reception I've seen is that it's pretty mid. Most of the proposed applications require human checking to get right, which is a huge limitation to the adoption of these systems unless you accept a 3-5% error rate, which is terrible. I've not met anyone who is interested in something like a book written using this thing, and the main use case I've seen basically amounts to denial-of-service attacks with believable bullshit.

Frankly the only people I've seen who are super excited about this stuff are people in the field or the uninformed.

aik | 2 years ago

Not sure if you’re technical, but the only thing I have to say to this is: tinker with it yourself. Try different experiments. I’ve built a ton of tools with AI at this point; some have not been very useful in the end, and others have made me significantly more productive and effective.

In terms of error rate: GPT-3.5 had a high hallucination rate that kept its use cases fairly narrow. It then got faster, which opened up some more. Then GPT-4 came out with a significantly lower hallucination rate, which opened up a gigantic number of additional possibilities; it also had a larger context window and output size, which made it significantly more useful. Then it got faster still, with an even larger context window. Each of these iterative improvements continues to add more and more possibilities across a gigantic range of cases that have literally never existed before.