top | item 41125866


sweettea | 1 year ago

A well-reasoned article that fundamentally downplays both the pace of innovation and the exponential increase in capabilities per dollar over time. AI is rapidly accelerating its improvement rate, and I believe its capabilities will continue growing exponentially.

In particular, GPT-2 to GPT-4 spans an increase from 'well-read toddler' to 'average high school student' in just a few years, while the computational cost of training a model of any given capability falls at a similar rate.

Also worth noting: the article claims Stripe, another huge money raiser, had an obviously useful product. gdb, sometime CTO of Stripe and its fourth employee, is now president of OpenAI. And, most of all, the author doesn't remember how non-obvious Stripe's utility was in its early days, even in the tech scene: there were established ways to take people's money, and it wasn't clear why Stripe had an offering worth switching to.

For an alternate take, I think https://situational-awareness.ai provides a well-reasoned argument for the current status of AI innovation and its growth rate, and addresses all of the points here in a general (though not OpenAI-specific) way.


lolinder|1 year ago

It's too early to say for sure if LLM capabilities are on an exponential growth function or a sigmoid, but my money is on sigmoid, and my suspicion is that we're plateauing already.

GPT-4 was released 16+ months ago. In that time OpenAI made a cheaper model (which it teased extensively and the media was sure was GPT-5) and its competitors caught up but have not yet exceeded them. OpenAI's now saying that GPT-5 is in progress, but we don't know what it looks like yet and they're not making any promises.

What I'm seeing right now suggests that we're in the optimization stage of the tech as it is currently architected. I expect it to get cheaper and to be used more widely, but barring another breakthrough on the same order as transformers I don't expect it to see the kind of substantial gains in abilities we've hitherto been seeing. If I'm right, OpenAI will quickly be just one of many dealers in commodity tech.

hatefulmoron|1 year ago

> GPT-4 was released 16+ months ago. In that time OpenAI made a cheaper model (which it teased extensively and the media was sure was GPT-5) and its competitors caught up but have not yet exceeded them. OpenAI's now saying that GPT-5 is in progress, but we don't know what it looks like yet and they're not making any promises.

I don't really know anything about business, but something else I've wondered is this: if LLM scaling/progress really is exponential, and the juice is worth the squeeze, why is OpenAI investing significantly in everything that's not GPT-5? Wouldn't exponential growth imply that the opportunity cost of investing in something like Sora makes little sense?

zamadatix|1 year ago

In the real world there is no such thing as a true exponential growth function; they're all really sigmoidal. The sole question is the latter suspicion: is the plateau happening now, or coming later? With sigmoidal growth it's near impossible to have clarity on that until after the fact.
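A minimal numeric sketch of why this is so hard to call (toy constants of my own choosing, not fitted to any real benchmark): a logistic curve sitting far below its plateau tracks the matching exponential almost exactly, and only diverges once the plateau is already near.

```python
import math

K = 1e6   # carrying capacity: the eventual plateau (hypothetical)
r = 0.5   # growth rate (hypothetical)

def exponential(t):
    return math.exp(r * t)

def logistic(t):
    # Logistic curve normalized so logistic(0) == exponential(0) == 1.
    return K / (1 + (K - 1) * math.exp(-r * t))

def rel_diff(t):
    # Relative gap between the two curves at time t.
    return abs(logistic(t) - exponential(t)) / exponential(t)

print(f"t=10: {rel_diff(10):.4f}")  # far below the plateau: curves agree closely
print(f"t=25: {rel_diff(25):.4f}")  # plateau approaching: curves diverge sharply
```

With these numbers the two curves differ by well under 0.1% at t=10 but by over 20% at t=25, which is the point: an observer living at t=10 can't tell which curve they're on.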

sirspacey|1 year ago

Anthropic’s Claude 3.5 surpasses GPT-4o in a number of applied tasks, coding in particular.

The progress we’ve seen to date was powered by the ambitions and belief of NVIDIA and LLM companies.

Now, it’s head-to-head competition. It is way too early to call an impending slowdown.

Given how NVIDIA and Meta are leaning in on OSS, the next 18 months are going to be very interesting.

Even if fundamental progress slows, there are many, many secondary problems to solve in applying the capabilities we have today, and those are improving rapidly. As someone deploying AI in business use cases daily, we are just now getting started.

I’d look to when NVIDIA starts to slow down on hardware as an early indicator of a plateau.

AWS is a commodity. When your commodity is compute, there’s a very large amount of growth available.

runako|1 year ago

> the author doesn't remember how nonobvious Stripe's utility was in its early days, even in the tech scene

I have to push back on this. Anybody who had built for B2B credit card acceptance on the Web prior to Stripe's founding knew immediately what a big deal it was. For starters, they let you get up and running the same day. Second, no credit check (and associated delays). Third, their API made sense (as compared to popular legacy providers like Authorize.net) and was easy to integrate using an open source client. Fourth, self-service near-real-time provisioning. Their value proposition was immediately obvious, and they nailed all of these points in their first home page[1].

By contrast, Fee Fighters[2] was innovative for the time but still required me to fax a credit application to them. They got me up and running faster than the legacy provider, which is to say about a week. And I think I only had to talk on the phone with them once or twice. I remember really liking Fee Fighters, but Stripe was in a class of its own.

Stripe was a hit because they promised to solve hard problems that nobody else did, and then they did exactly that. (You still don't have to talk to a rep or do a personal credit check to start using Stripe!)

1 - https://www.quora.com/What-did-the-first-version-of-Stripe-l...

2 - https://en.wikipedia.org/wiki/Feefighters

ashconnor|1 year ago

OpenAI is depending on a *breakthrough* in order to produce a product that will provide a return on investment.

This "breakthrough" is often touted as AGI or something similar, which to me is even riskier than a nuclear fusion startup, because:

1. Fusion has had some recent breakthroughs that could eventually result in a commercially viable reactor.

2. Fusion has a fundamentally sound theoretical basis, unlike producing AGI (or something like it).

pzo|1 year ago

Even if we don't reach AGI anytime soon, there are still many new applications for current AI that we haven't explored much yet. Robotics will be a huge one IMO.

JackYoustra|1 year ago

Idk, AGI (at least in the information domain) has a sound theoretical basis, the scaling laws seem pretty strong for now and there's a track record of shattering human benchmarks.

gloryjulio|1 year ago

> there were established ways to take people's money and it wasn't clear why Stripe had an offering worth switching to.

That wasn't true at all. Stripe was a product people rushed to pay for because of just how good and useful it was. It was an example of a successful MVP that people wanted to pay to use, so profitability was never a problem.

The same can't be said for OpenAI. We don't know how long it can stay in the red. Maybe it can survive. Maybe its money will run dry first. We just aren't sure at the current stage.

runako|1 year ago

That last part is a key differentiator between Stripe and OpenAI.

Stripe had high variable costs (staff, COGS of pass-through processing fees) but low fixed costs. OpenAI has enormous fixed (pre-revenue!) costs alongside high variable costs (staff of AI engineers, inference).

Financially, OpenAI looks more like one of the EV startups like Tesla or Rivian than it does a company like Stripe. And where Stripe was competing with relatively stodgy financial institutions, OpenAI is competing with the very biggest, richest companies in the world.

fragmede|1 year ago

If you're here, you probably know about Claude and Llama 3. But for people outside of tech, how many will just plug ChatGPT into Google, never venture any further, and plonk down $20?

edzitron|1 year ago

Exactly my point. I took great pains to not say "OpenAI will 100% die without fail," because doing so would be declarative in a way that would wall off my argument, no matter how well I researched and presented it.

Instead, I wanted to show people the terms under which OpenAI survives, and how onerous said terms were. It's deeply concerning - and I do not think that's a big thing to say! - how much money they may be burning, and how much money they will take to survive.

edzitron|1 year ago

Thanks for reading! I downplay it because I fundamentally disagree on the pace of innovation and the exponential increase in capabilities per dollar happening over time. I do not see the rapid acceleration - or at least, they are yet to substantively and publicly show it.

I also think it's a leap of logic to suggest that the former CTO of Stripe joining is somehow the fix they need, or proof they're going to accelerate.

Also, I fundamentally disagree - Stripe was an obvious business. Explaining what Stripe did wasn't difficult. The established ways of taking money were extremely clunky - perhaps there was RELUCTANCE to change, which is a totally fair thing to bring up, but that doesn't mean it wasn't obvious if you thought about it. What's so obvious about GPT? What's the magic trick here?

Anyway, again, thanks for reading, I know you don't necessarily agree, but you've given me a fair read.

x0x0|1 year ago

Claiming Stripe was obvious is ahistorical unless you believe tens of thousands or even millions of entrepreneurs discarded $100B.

You have a small point that anyone who used Authorize.net or similar wanted it to be better, and that was obvious, but there are nearly infinite things people want to be better. I'd like breakfast, my commute, my car, my doctor, my vet, etc. to be better. That you could make a better thing was incredibly non-obvious, and that's why no one did.

monero-xmr|1 year ago

If OpenAI can be as good as an outsourced employee at ~$10 per hour, then you should be looking to replace those outsourced employees. The US and EU employees at >$10 per hour, likely with many tens or hundreds of thousands of dollars in compensation, still exist because they provide some sort of value that necessitates that spend.

I am bearish on AI because the nimbleness of humans, even the outsourced ones, is quite capable. If you only want the AI to operate in a box, then you probably can code the decision tree of the box with more specificity and accuracy than a fuzzy AI can provide.

It's a very useful tool; I'm skeptical, however, about how it can disrupt things economy-wide. I think it can do some things very well, but the value to the market and businesses vs. the cost of training and adapting it to the business need is quite suspect, at least for this cycle. I think this is one of those "wait 10 years" situations, and many AI companies will die within 1 to 3 years.

visarga|1 year ago

> It's a very useful tool, I'm skeptical however about how it can disrupt things economy wide.

It won't disrupt much because we already had "AGI" of a sort. The internet itself, with billions of people and trillions of pieces of text and media, is like a generative model. Instead of generating, you search. Instead of LLMs, you chat with real people. Instead of Copilot we had StackOverflow and GitHub. All the knowledge LLMs have has been on search engines and social networks, with a few extra steps, for 20 years.

Computers have also gotten a million times faster and more networked. We have automated in software all that we could, and we have millions of tools at our disposal, most of them open source. Where did all that productivity go? Why is unemployment so low? The amount of automation possible in code is non-trivial; what can AI do that's dramatically more than so many human devs put together? Automation in factories is already old, so new automation needs to raise the bar.

It seems to me AI will only bring incremental change, an evolution rather than a revolution. AI operates like "internet in a box", not something radically new. My as-yet-unrealized hope is that, by assisting hundreds of millions of users, LLMs will accumulate some kind of wisdom and share it back at an accelerated speed: an automated open-sourcing of problem-solving expertise.

consteval|1 year ago

> If you only want the AI to operate in a box, then you probably can code the decision tree of the box with more specificity and accuracy than a fuzzy AI can provide.

I've been saying this for the past couple of years. Yes, AI is cool, but we already have computers and computer programs. Things that can be solved algorithmically SHOULD be solved algorithmically, because you WANT your business rules and logic to be as predictable and reliable as possible. You want to lessen liability, complexity, and the number of possible outcomes.

We already even see this with human customer support. They follow a script and flowchart. They're just glorified algorithms. Despite being human, they're actively told to not be creative, not think, and act as a computer. Because, as it turns out, from a business perspective that's usually very advantageous (where you can do it).

AI would never, or should never, replace those types of tasks.
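The scripted-flowchart idea can be sketched as a plain lookup-based decision tree (a hypothetical support script of my own, purely illustrative): every input maps to exactly one deterministic outcome, which is the predictability being argued for over a fuzzy model.

```python
# Hypothetical customer-support flowchart as a deterministic decision tree.
SUPPORT_TREE = {
    "billing": {
        "duplicate_charge": "Refund the duplicate charge and email confirmation.",
        "update_card": "Send the secure card-update link.",
    },
    "technical": {
        "cannot_login": "Trigger a password reset email.",
        "site_down": "Point the customer to the status page.",
    },
}

def route(category, issue):
    """Deterministically route a ticket; any unknown input escalates to a human."""
    return SUPPORT_TREE.get(category, {}).get(issue, "Escalate to a human agent.")

print(route("billing", "duplicate_charge"))
print(route("technical", "teleport_malfunction"))
```

The same ticket always gets the same answer, and anything off-script has one well-defined fallback, which is exactly the liability-limiting behavior a generative model can't guarantee.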

imtringued|1 year ago

AI isn't something warming a seat in front of a computer, reading emails and responding to them. For a technology to replace a human, it would have to be a drop-in replacement for all aspects of what it means to be human.

peteforde|1 year ago

Stripe's ease of integration, pace of innovation, and user-flow experience were a game-changing upgrade from the brutal payment processing options we were forced to use until it was available. Nobody building payment processing at the time needed to be convinced that Stripe was awesome. They solved a hair-on-fire problem, especially for Canadians. The hoops we used to have to jump through were bonkers.

visarga|1 year ago

> exponential increase in capabilities per dollar happening

Logarithmic: capabilities increase with log(cost); what grows exponentially is the compute used over time.
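A toy sketch of that relationship (hypothetical constants, not real scaling-law fits): if loss follows a power law in compute, a log-loss capability proxy grows linearly in log(compute), so each equal capability step costs a constant *multiple* more compute.

```python
import math

a, alpha = 10.0, 0.05  # hypothetical power-law constants

def loss(compute):
    # Power-law scaling: loss falls polynomially as compute grows.
    return a * compute ** (-alpha)

def capability(compute):
    # A crude capability proxy: lower loss => higher capability.
    return -math.log(loss(compute))

# Each 1000x jump in compute buys the same fixed capability increment.
for c in (1e3, 1e6, 1e9):
    print(f"compute={c:.0e}  capability={capability(c):.3f}")
```

Under these assumptions the capability gain from 1e3 to 1e6 compute equals the gain from 1e6 to 1e9, so linear capability progress demands exponentially growing spend.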

ajkjk|1 year ago

Meh. I'm waiting to see evidence that this exponential growth means anything. Every day I read something like the statement that it's an "average high school student" now, or a med student or a law student or whatever, yet it seems obvious that the difference between it and a high school student is that a high school student can think at a basic level and it can't. So still waiting to see evidence that the exponential growth is on that axis and not the "bullshit on more complicated subjects" axis.

_xiaz|1 year ago

> AI is rapidly accelerating its improvement rate

Is it? All I see are desperate AI companies slapping on multi-modality because text generation has nearly peaked.

joshstrange|1 year ago

> And, most of all, the author doesn't remember how nonobvious Stripe's utility was in its early days, even in the tech scene: there were established ways to take people's money and it wasn't clear why Stripe had an offering worth switching to.

I think you are misremembering. Stripe was a _big deal_. They had a curl call on their home page for a while for how to take a payment IIRC. It was like how Twilio opened the door for anyone to send SMS, Stripe made it stupid-easy to handle payments online. Nothing else at the time compared in terms of simplicity and clearly defined fees.

riku_iki|1 year ago

> In particular, GPT-2 to GPT-4 spans an increase from 'well read toddler' to 'average high school student'

GPT-2 was indeed a much smaller and weaker model. But the question is whether we got an "exponential" boost after GPT-3, or just a marginal one while competition commoditized this vertical.

ThereIsNoWorry|1 year ago

That's a lot of unproven assumptions based on the fact that LLMs are just correlation printers.

vrighter|1 year ago

AI is very rapidly hitting a plateau, both because these models are just glorified Markov chains and because they're running out of data anyway.

ggm|1 year ago

I think you don't understand the meaning of the word exponential. By now, at the rate implied here, we'd all be in the Kurzweil singularity if this were correct.

Hint: it's not correct. It's nothing like exponential. It's not even order of magnitude stuff. It's tiny increments, to a system which fundamentally is a bit of a dead end.

lostmsu|1 year ago

I love how pointless these talks are.

What does it even mean for a value whose scale is not defined to be logarithmic, quadratic, or exponential?

You guys are treading water about nothing.