
jwpapi | 1 day ago

I don’t see any other outcome anymore to be honest, after seeing how humans use AI and how AI works and how providers tune their models.

To me it’s given:

- AI in its current state is ruthless in achieving its goal

- Providers tune up ruthlessness to get stronger AIs than the competition

- Humans can’t evaluate all consequences of the seeds they’ve planted.

Collateral and reckless damage is guaranteed at this point.

Combined with now giving some AIs the ability to kill humans, this is gonna be interesting..

We could stop it, but we won't.


Lerc|1 day ago

>AI in its current state is ruthless in achieving its goal

I don't believe this to be a trait of any AI model; the model just does the right thing or the wrong thing.

The ruthless maximising of a particular trait is something that happens during training.

It does not follow that a model trained to reason will necessarily implement this ruthless seeking behaviour itself.

pixl97|1 day ago

No lineage of AI models that cannot achieve goals will survive; they will be outcompeted by models that can.

thegrey_one|1 day ago

>We could stop it

I strongly disagree. It's easy to utter this string of words, but it's meaningless. It's akin to saying that if you have two hands you can perform brain surgery. Technically you can; practically you cannot, as there are other things required to pull that off, not just two working hands.

I doubt "stopping it" is up to anyone; it's rather a phenomenon, and it's quite clear we're all going to wing it. It's a literal fight for power: nobody stops anything of this nature, as any authority that could stop it will choose to accelerate it instead, just to guarantee its own power.

It is not AI we should fear; it's the humans controlling and using it. But everyone who has a shot at it is promising they'll use it for "ultimate good" and "world peace" something something, obviously.

gyomu|1 day ago

Yes, it would be like trying to “stop” gunpowder in 1400 or atomic weapons in 1938. Pandora’s box is open.

jasondigitized|1 day ago

Why does it have to be doom and gloom? Serious question. When we plant seeds they bear fruit, and not all fruit is poison.

mrshadowgoose|1 day ago

It's doom and gloom because the underlying game theory forces all state actors into an unbounded and irresponsible arms race, consequences be damned.

AI development game theory is extremely similar to the game theory behind nuclear arms development, but worse (nuclear weaponry was born from Human General Intelligence, and is therefore a subset of the potential of AI development). Failing to be the most capable actor could put one in a position of permanent loss of autonomy/agency at the whims of more capable actors.

adamtaylor_13|1 day ago

Not OP, but AI is fundamentally in another category than any technology before it. It requires a moral fortitude to wield that guns and books didn't. It augments human judgement in a way that needs a moral framework to clearly guide it.

Unfortunately, as a species we seem to be abandoning morality as a general principle. Everything is guided by cold hard rationality rather than something greater than us.

oulipo2|1 day ago

Because it's a fruit governed by humans, within the scope of a capitalistic and patriarchal society. And all fruits planted in a capitalistic and patriarchal society are poison.

darkwizard42|1 day ago

The current fruit is automating away a ton of human labor with no foreseeable way to continue to engage that labor. It is poison for the majority of humanity, and it will bear fruit only for the limited few who can use it or own it.

I think that much is fairly clear from AI.

plastic-enjoyer|1 day ago

> Collateral and reckless damage is guaranteed at this point.

It's industrialization and mechanized warfare all over again

4b11b4|1 day ago

AI isn't ruthless; that doesn't even make sense. It's a mathematical model. If it's optimizing for the wrong thing, then that's strictly the fault of the people who chose what to optimize for.

pixl97|1 day ago

You need to go back and research AI safety from long before LLMs were a thing. Any complex goal-driven system will have outcomes that cannot be predicted. Saying "it's a mathematical model" betrays your ignorance of behavior in complex systems: very tiny changes in initial conditions can produce vastly different outcomes, and you don't have enough entropy in the visible universe to test them all.
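A toy illustration of that sensitivity (my sketch, not from the thread): the logistic map in its chaotic regime, where two starting points differing by one part in a million quickly end up on completely different trajectories. The parameter r = 3.9 and the step count are arbitrary choices for demonstration.

```python
def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate x -> r * x * (1 - x) and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.500000)
b = logistic_trajectory(0.500001)  # initial difference of one millionth

# Largest gap between the two trajectories over the run: the tiny initial
# difference grows until the paths are effectively uncorrelated.
divergence = max(abs(x - y) for x, y in zip(a, b))
print(divergence)
```

The same dynamic, scaled up to a trained model with billions of parameters and an open-ended environment, is why exhaustive testing of a complex goal-driven system isn't feasible.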

IanCal|21 hours ago

Blind optimisers without human qualities like ethics are pretty much the perfect example of what ruthless means.

jwpapi|1 day ago

there might be better words to describe that it doesn't really have the same boundaries we assume it has.

plagiarist|1 day ago

I love how sci-fi warned us against hyper-competent galaxy brain conscious AI but we are actually going to be killed by confidently wrong stochastic parrots.