thereitgoes456 | 4 months ago

He was wryly communicating, "your argument was so stupid I don't even need to engage with it".

In my experience he has a horrible response to criticism. He's right on the AI stuff, but he responds to both legitimate and illegitimate feedback without much thoughtfulness, usually with a non-sequitur redirect or an ad hominem.

In his defense though, I expect 97% of feedback he gets is Sam Altman glazers, and he must be tired.


nemothekid | 4 months ago

He's right on the AI stuff? How do you figure that? As far as I can tell, OpenAI is still operating. It sounds like you agree with him on the AI stuff, but he could be wrong, just like how he was wrong about remote work.

I'm actually more inclined to believe he's wrong if he gets so defensive about criticism. That tells me he's more focused on protecting his ego than actually uncovering the truth.

JohnMakin | 4 months ago

The fact that OpenAI is still operating and the argument that it is completely unsustainable are not incompatible.

thereitgoes456 | 4 months ago

I don't think he's right about everything. He is particularly weak at understanding underlying technology, as others have pointed out. But, perhaps by luck, he is right most of the time.

For example, he was the lone voice saying that, despite all the posturing and media manipulation by Altman, OpenAI's for-profit transformation would not work out, and certainly not by EOY2025. He was also the lone voice saying that "productivity gains from AI" were not clearly attributable and were likely make-believe. He was right on both.

Perhaps you have forgotten these claims, or the claims about OpenAI's revenue from "agents" this year, or that they were going to raise ChatGPT's price to $44 per month. Altman and the world have seemingly memory-holed these claims and moved on to even more fantastical ones.

He has never said that OpenAI would go bankrupt. His position (https://www.wheresyoured.at/to-serve-altman/, Jul 2024) is:

I am hypothesizing that for OpenAI to survive for longer than two years, it will have to (in no particular order):

- Successfully navigate a convoluted and onerous relationship with Microsoft, one that exists both as a lifeline and a direct source of competition.

- Raise more money than any startup has ever raised in history, and continue to do so at a pace totally unseen in the history of financing.

- Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.

- Have such a significant technological breakthrough that GPT is able to take on entirely unseen new use cases, ones that are not currently possible or hypothesized as possible by any artificial intelligence researchers.

- Have these use cases be ones that are capable of both creating new jobs and entirely automating existing ones in such a way that it will validate the massive capital expenditures and infrastructural investment necessary to continue.

I ultimately believe that OpenAI in its current form is untenable. There is no path to profitability, the burn rate is too high, and generative AI as a technology requires too much energy for the power grid to sustain it, and training these models is equally untenable, both as a result of ongoing legal issues (as a result of theft) and the amount of training data necessary to develop them.

He is right about this too. They are doing the second item on this list: raising more money than any startup ever has.

tptacek | 4 months ago

Is he right on the AI stuff? On the OpenAI company stuff, he could be? I don't know. But on the technology? He really doesn't seem to know what he's talking about.

bigyabai | 4 months ago

> But on the technology? He really doesn't seem to know what he's talking about.

That puts him roughly on-par with everyone who isn't Gerganov or Karpathy.