item 42545085


phodo | 1 year ago

One view is that it’s not first-mover, but first-arriver advantage. Whoever gets to AgI (the fabled city of gold, or silver? Ag pun intended) will achieve exponential gains from that point forward, and that serves as the moat in the limit. So you can think of it as buying a delayed moat, with an option price equivalent to the investment required until you get to that point in time. Either you believe in that view or you don’t. It’s more of an emotional / philosophical investment thesis, with a low probability of occurrence but a massive expected value. Meanwhile, consumers and the world benefit.
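The thesis above is an expected-value bet: a small probability of arriving first, a huge payoff if it happens, and an "option price" paid up front. A toy sketch of that arithmetic, where every number is a hypothetical placeholder rather than an estimate from the thread:

```python
# Toy expected-value calculation for the "delayed moat" thesis.
# All inputs are hypothetical placeholders.
p_agi = 0.01            # assumed probability of arriving first at AGI
payoff = 10_000e9       # assumed payoff if it happens, in dollars
investment = 50e9       # assumed "option price": cumulative spend to get there

expected_value = p_agi * payoff - investment
print(expected_value)   # positive whenever p_agi * payoff exceeds the investment
```

The point of the sketch is only structural: even a very low `p_agi` can leave the bet positive if `payoff` is large enough relative to `investment`, which is exactly the "low probability, massive expected value" framing.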


nick3443 | 1 year ago

What if the AGI takes an entire data center to process a few tokens per second? Is there still a first-arriver advantage? Seems like the first to make it cheaper than an equivalent-cost employee (fully loaded, including hiring and training) will begin to see an advantage.

usrusr | 1 year ago

What if the next one to get there produces a similar service for 5% less? Race to the bottom.

And would an AI that is tied to some interface that provides lock-in even qualify to be called general? I have trouble putting my finger on it, but AGI and lock-in cause a strong dissonance in my brain. Would AGI perhaps strictly imply commodity? (Assuming that more than one supplier exists.)

scarmig | 1 year ago

Depending on how powerful your model is, a few tokens per second per data center could still be extraordinarily valuable. It's not out of the realm of possibility that a next-generation superintelligence could be trained with a couple hundred lines of PyTorch. If that's the case, a couple of tokens per second per data center is a steal.

phodo | 1 year ago

Good point. It’s two conditions, and both have to be true:

- Arrive first
- Use that first arrival to innovate with your new AGI pet / overlord to stay exponentially ahead

Scaevolus | 1 year ago

Exponential gains from AGI require recursive self-improvement and the compute headroom to realize them. It's unclear whether current LLM architectures make either of those possible.

esafak | 1 year ago

People need to stop talking about "exponential" gains; these models don't even have the ability to improve themselves, let alone at this or that rate. And who wants them to be able to train themselves while being connected to the Internet anyway? I sure don't. All it takes for major disruption is superhuman ability at subhuman prices.

jhanschoo | 1 year ago

What does AGI even mean in this case? If progress toward more capable and more cost-effective agents is incremental, I don't see a defensible moat. (You can maintain a moat with continued investment that outpaces competitors', but following remains more cost-effective.)

aoeusnth1 | 1 year ago

Since we're talking about the economic impact here, AGI(X) could be defined as the ability to do X% of white-collar jobs independently, with about as much oversight as a human worker would need.

The exponential gains would come from increasing penetration into existing labor forces and industrial applications. The first arriver would have an advantage in being the first to be profitably and practically applicable to whatever domain it's used in.

cs702 | 1 year ago

Yes, "first to arrive at AGI" could indeed become a moat, if OpenAI can get there before the clock runs out. In fact, that's what's driving all the spending.

dartos | 1 year ago

None of that would matter if they could find the holy grail though.

throwup238 | 1 year ago

Every time a new model comes out I ask it to locate El Dorado or Shangri-La for me. That’s my criterion for AGI/ASI.

Alas I am still without my mythical city of gold.