xutopia | 4 months ago

The most likely danger with AI is concentrated power, not that sentient AI will develop a dislike for us and use us as "batteries" like in the Matrix.

darth_avocado|4 months ago

The reality is that the CEO/executive class already has developed a dislike for us and is trying to use us as “batteries” like in the Matrix.

vladms|4 months ago

Do you personally know any CEOs? I know a couple, and they generally seem less empathetic than the general population, so I don't think that like/dislike even applies.

On the other hand, trying to do something "new" brings lots of headaches, so emotions are not always a plus. I could draw a parallel to doctors: you don't want a doctor to start crying in the middle of an operation because he feels bad for you, but you also can't let doctors do everything they want - there need to be some checks on them.

ljlolel|4 months ago

CEOs (even most VCs) are labor too

surgical_fire|4 months ago

"AI will take over the world".

I hear that. Then I try to use AI for a simple coding task: writing unit tests for a class, very similar to other existing unit tests. It fails miserably. It forgets to add an annotation and enters a death loop of bullshit code generation, producing test classes that test failed test classes that test failed test classes, and so on. Fascinating to watch. I wonder how much CO2 it generated while frying some Nvidia GPU in an overpriced data center.

AI singularity may happen, but the Mother Brain will be a complete moron anyway.

alecbz|4 months ago

Regularly trying to use LLMs to debug coding issues has convinced me that we're _nowhere_ close to the kind of AGI some are imagining is right around the corner.

bobsmooth|4 months ago

Most reasonable AI alarmists are not concerned with sentient AI but an AI attached to the nukes that gets into one of those repeating death loops and fires all the missiles.

beyarkay|4 months ago

Given that AI couldn't even speak English 6 years ago, do you really think it's going to struggle with unit tests for the next 20 years?

It's well worth looking at https://progress.openai.com/, here's a snippet:

> human: Are you actually conscious under anesthesia?

> GPT-1 (2018): i did n't . " you 're awake .

> GPT-3 (2021): There is no single answer to this question since anesthesia can be administered [...]

troupo|4 months ago

"Just one more prompt, bro", and your problems will be solved.

nancyminusone|4 months ago

To me, the greatest threat is information pollution. Primary sources will be diluted so heavily in an ocean of generated trash that you might as well not even bother to look through any of it.

tobias3|4 months ago

And it imitates all the unimportant bits perfectly (like spelling, grammar, and word choice) while failing at the hard-to-verify important bits (truth, consistency, novelty).

chongli|4 months ago

I see that as the death knell for general search engines built to indiscriminately index the entire web. But where that sort of search fails, opportunities open up for focused search and curated search.

Just as human navigators can find the smallest islands out in the open ocean, human curators can find the best information sources without getting overwhelmed by generated trash. Of course, fully manual curation is always going to struggle to deal with the volumes of information out there. However, I think there is a middle ground for assisted or augmented curation which exploits the idea that a high quality site tends to link to other high quality sites.

One thing I'd love is to be able to easily search all the sites in a folder full of bookmarks I've made. I've looked into it and it's a pretty dire situation. I'm not interested in uploading my bookmarks to a service. Why can't my own computer crawl those sites and index them for me? It's not exactly a huge list.
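
For what it's worth, a minimal local crawl-and-index sketch isn't much code. Assuming Python with the requests package and an SQLite build that includes FTS5 (most do), it could look roughly like this; the bookmark list and query are placeholders:

  import re
  import sqlite3

  import requests

  bookmarks = ["https://example.com"]  # placeholder: your exported bookmark URLs

  db = sqlite3.connect("bookmarks.db")
  db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS pages USING fts5(url, body)")

  for url in bookmarks:
      html = requests.get(url, timeout=10).text
      body = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping, enough for a sketch
      db.execute("INSERT INTO pages (url, body) VALUES (?, ?)", (url, body))
  db.commit()

  # Full-text search over everything you've bookmarked, entirely on your own machine
  for (url,) in db.execute("SELECT url FROM pages WHERE pages MATCH ?", ("curation",)):
      print(url)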

Gigachad|4 months ago

It’s already been happening, but now it’s accelerated beyond belief. I saw a video about how WW1 reenactment photos end up getting reposted away from their original context and confused with genuine photos, to the point that it’s impossible to tell the difference unless you can trace them back to the source.

Now most of the photos online are just AI generated.

SkyBelow|4 months ago

I agree.

Our best technology currently requires teams of people to operate and entire legions to maintain. This leads to a sort of balance: one single person can never go too far down any path on their own unless they convince others to join/follow them. That doesn't make it a perfect guard - we've seen it go horribly wrong in the past - but, at least in theory, it provides a dampening factor. It requires a relatively large group to go far along any path, towards good or evil.

AI reduces this. How greatly it reduces it - whether to only a handful of people, to a single person, or even to zero people (the AI putting itself in charge) - doesn't seem to change the danger of the reduction itself.

ben_w|4 months ago

Concentrated power is kinda a pre-requisite for anything bad happening, so yes, it's more likely in exactly the same way that given this:

  Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
"Linda is a bank teller" is strictly more likely than "Linda is a bank teller and is active in the feminist movement" — all you have is P(a)>P(a&b), not what the probability of either statement is.

mrob|4 months ago

Why does an AI need the ability to "dislike" to calculate that its goals are best accomplished without any living humans around to interfere? Superintelligence doesn't need emotions or consciousness to be dangerous.

Yoric|4 months ago

It needs to optimize for something. Like/dislike is an anthropomorphization of the concept.

navane|4 months ago

The power concentration is already massive, and a huge problem indeed. The AI is just a cherry on top. The AI is not the problem.

preciousoo|4 months ago

Seems like a self fulfilling prophecy

yoyohello13|4 months ago

Definitely not ‘self’ fulfilling. There are plenty of people actively and vigorously working to fulfill that particular reality.

fidotron|4 months ago

I'm not so sure it will be that either; more likely it would be multiple AIs essentially at war with each other over access to GPUs, energy, or whatever materials are needed to grow, if/when that happens. We will end up as pawns in this conflict.

ben_w|4 months ago

Given that even fairly mediocre human intelligences can run countries into the ground and avoid being thrown out in the process, it's certainly possible for an AI to be in the intelligence range where it's smart enough to win vs humans but also dumb enough to turn us into pawns rather than just go to space and blot out the sun with a Dyson swarm made from the planet Mercury.

But don't count on it.

I mean, apart from anything else, that's still a bad outcome.

mmmore|4 months ago

You can say that, and I might even agree, but many smart people disagree. Could you explain why you believe that? Have you read in detail the arguments of people who disagree with you?

beyarkay|4 months ago

Given that they both seem pretty bad, it seems wrong to not consider them both dangerous and make plans for both of them?

worldsayshi|4 months ago

> power resides where men believe it resides

And also where people believe that others believe it resides. Etc...

If we can find new ways to collectively renegotiate where we think power should reside we can break the cycle.

But we only have until people are no longer a significant power factor to do this. That's still quite some time away, though.

pcdevils|4 months ago

For one thing, we'd make shit batteries.

noir_lord|4 months ago

IIRC the original idea was that the machines used our brain capacity as a distributed array, but then they decided batteries were easier to understand, even if sillier. Just burn the carbon they're feeding us; it's more efficient.

prometheus76|4 months ago

They farm you for attention, not electricity. Attention (engagement time) is how they quantify "quality" so that it can be gamed with an algorithm.

beyarkay|4 months ago

The Matrix only had people being batteries because a movie without humans in it isn't a fun movie to watch.

antod|4 months ago

Sounds about right, most of us already are. But why would the AI need our shit? Surely it wants electricity?

alfalfasprout|4 months ago

I mean, you can't really disprove either being an issue.