top | item 35156261


cjrd | 3 years ago

Let's check out the paper for actual tech details!

> Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.

- OpenAI


shpx | 3 years ago

I've chosen to re-interpret "Open" as in "open the box to release the AI"/"open Pandora's box"/"unleash".

awesomeMilou | 3 years ago

I've chosen to reinterpret it exactly as the kind of Orwellian 1984'ish double-speak that it is.

xvector | 3 years ago

Someone needs to hack into them and release the parameters and code. This knowledge is too precious to be kept secret.

SXX | 3 years ago

Don't worry. The CCP and all kinds of malicious state actors already have a copy.

jryan49 | 3 years ago

Very open! :)

dx034 | 3 years ago

At least they opened up the product. It's available to anyone paying $20 per month, and soon via API. Historically, most products of this kind were aimed only at large B2B customers. They announced partnerships with Duolingo, JPMorgan, and a few others, but still keep their B2C product.

Not defending their actions, but it's not that common for valuable new products to be directly available to retail users.

toriningen | 3 years ago

This might be a wild conspiracy theory, but what if OpenAI has discovered a way to make these LLMs far cheaper than they were? The Transformer hype started with the invention of self-attention; perhaps they have discovered something that beats it as hard as GPTs beat Markov chains?
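For context on what the comment means by self-attention: the core of a Transformer layer is scaled dot-product attention, which fits in a few lines. This is a minimal numpy sketch with toy dimensions and random weights; it says nothing about GPT-4 itself, only what the baseline mechanism looks like:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention.

    x: (seq_len, d_model) token representations
    w_q, w_k, w_v: (d_model, d_k) projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Similarity of every token with every other, scaled by sqrt(d_k)
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len)
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted mix of the value vectors
    return weights @ v                               # (seq_len, d_k)

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5                      # toy sizes, not GPT-4's
x = rng.standard_normal((seq_len, d_model))
w_q, w_k, w_v = (rng.standard_normal((d_model, d_k)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (5, 4)
```

The quadratic (seq_len × seq_len) score matrix is exactly the cost that cheaper replacements for attention try to avoid.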

They cannot disclose anything, because any detail would give it away: it would become apparent that GPT-4 could not have a parameter count that low, or that gradients should have vanished in a network that deep, and so on.

They obviously don't want any competition. But consider their recent write-up on "mitigating disinformation risks", where they propose banning non-governmental consumers from having GPUs at all (as if a regular Joe could run 100,000 A100s in his garage). Perhaps this means the floor for inference and training cost is much lower than we have thought and assumed?

Just a wild guess…