Do you personally know any CEOs? I know a couple, and they generally seem less empathetic than the general population, so I don't think like/dislike even applies.
On the other hand, trying to do something "new" involves lots of headaches, so emotions are not always a plus. I could draw a parallel to doctors: you don't want a doctor to start crying in the middle of an operation because he feels bad for you, but you also can't let doctors do everything they want - there need to be some checks on them.
I hear that. Then I try to use AI for a simple coding task: writing unit tests for a class, very similar to other existing unit tests. It fails miserably. It forgets to add an annotation and enters a death loop of bullshit code generation. It generates test classes that test failed test classes that test failed test classes, and so on. Fascinating to watch. I wonder how much CO2 it generated while frying some Nvidia GPU in an overpriced data center.
AI singularity may happen, but the Mother Brain will be a complete moron anyway.
Regularly trying to use LLMs to debug coding issues has convinced me that we're _nowhere_ close to the kind of AGI some are imagining is right around the corner.
Most reasonable AI alarmists are not concerned with sentient AI, but with an AI attached to the nukes that gets into one of those repeating death loops and fires all the missiles.
To me, the greatest threat is information pollution. Primary sources will be diluted so heavily in an ocean of generated trash that you might as well not even bother to look through any of it.
And it imitates all the unimportant bits perfectly (like spelling, grammar, word choice) while failing at the hard-to-verify important bits (truth, consistency, novelty).
I see that as the death knell for general search engines built to indiscriminately index the entire web. But where that sort of search fails, opportunities open up for focused search and curated search.
Just as human navigators can find the smallest islands out in the open ocean, human curators can find the best information sources without getting overwhelmed by generated trash. Of course, fully manual curation is always going to struggle to deal with the volumes of information out there. However, I think there is a middle ground for assisted or augmented curation which exploits the idea that a high quality site tends to link to other high quality sites.
One thing I'd love is to be able to easily search all the sites in a folder full of bookmarks I've made. I've looked into it and it's a pretty dire situation. I'm not interested in uploading my bookmarks to a service. Why can't my own computer crawl those sites and index them for me? It's not exactly a huge list.
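That doesn't need much machinery, either. Here's a rough stdlib-only Python sketch of the idea: fetch each bookmarked URL, strip it down to visible text, and build a tiny inverted index you can query locally. The function names are mine, and a real version would want robots.txt handling, recrawling, and persistence:

```python
import re
from collections import defaultdict
from html.parser import HTMLParser
from urllib.request import urlopen

class TextExtractor(HTMLParser):
    """Collects a page's visible text, skipping script/style contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def index_page(url, html, index):
    """Add every distinct word of the page's visible text to the inverted index."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.parts)
    for word in set(re.findall(r"[a-z0-9]+", text.lower())):
        index[word].add(url)

def crawl(urls):
    """Fetch each bookmarked URL and build a word -> {urls} index."""
    index = defaultdict(set)
    for url in urls:
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # skip dead bookmarks rather than aborting the crawl
        index_page(url, html, index)
    return index

def search(index, query):
    """Return the URLs whose pages contain every word of the query."""
    sets = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*sets) if sets else set()
```

For a few hundred bookmarks this runs in minutes on a laptop, which is exactly the point: the list is small enough that you never needed a service in the first place.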
It's already been happening, but now it's accelerated beyond belief. I saw a video about how WW1 reenactment photos end up getting reposted away from their original context and confused with genuine period photos, to the point it's impossible to tell unless you can track it back to the source.
Now most of the photos online are just AI generated.
Our best technology currently requires teams of people to operate and entire legions to maintain. This leads to a sort of balance: one single person can never go too far down any path on their own unless they convince others to join or follow them. That doesn't make this a perfect guard, and we've seen it go horribly wrong in the past, but, at least in theory, it provides a dampening factor. It requires a relatively large group to go far along any path, towards good or evil.
AI reduces this. Whether it reduces the required group to only a handful, to a single person, or even to zero people (the AI putting itself in charge) doesn't change the danger of the reduction itself.
Concentrated power is kind of a prerequisite for anything bad happening, so yes, it's more likely, in exactly the same way that given this:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
"Linda is a bank teller" is strictly more likely than "Linda is a bank teller and is active in the feminist movement": all you have is P(a) > P(a&b), not the actual probability of either statement.
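The inequality behind the Linda example is just the conjunction rule, and you can sanity-check it with made-up numbers (nothing below comes from real statistics, they're invented purely for illustration):

```python
# Conjunction rule: P(A and B) = P(A) * P(B|A) <= P(A), since P(B|A) <= 1.
# Both probabilities are assumptions for the sake of the demo.
p_teller = 0.05                 # P(Linda is a bank teller)
p_feminist_given_teller = 0.60  # P(active feminist | bank teller)

p_teller_and_feminist = round(p_teller * p_feminist_given_teller, 4)

# The conjunction can never be more probable than either conjunct alone...
assert p_teller_and_feminist <= p_teller

# ...but the inequality says nothing about absolute values: P(teller)
# can itself be tiny while still exceeding P(teller and feminist).
```

The same shape applies to the AI argument above: "bad outcome" is strictly more likely than "bad outcome via concentrated power", but neither comparison tells you how likely either one actually is.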
Why does an AI need the ability to "dislike" to calculate that its goals are best accomplished without any living humans around to interfere? Superintelligence doesn't need emotions or consciousness to be dangerous.
I'm not so sure it will be that either; more likely it would be multiple AIs essentially at war with each other over access to GPUs, energy, or whatever materials are needed to grow, if/when that happens. We would end up as pawns in this conflict.
Given that even fairly mediocre human intelligences can run countries into the ground and avoid being thrown out in the process, it's certainly possible for an AI to be in the intelligence range where it's smart enough to win against humans but also dumb enough to turn us into pawns rather than just go to space and blot out the sun with a Dyson swarm made from the planet Mercury.
But don't count on it.
I mean, apart from anything else, that's still a bad outcome.
You can say that, and I might even agree, but many smart people disagree. Could you explain why you believe that? Have you read in detail the arguments of people who disagree with you?
IIRC the original idea was that the machines used our brain capacity as a distributed processing array, but then the filmmakers decided batteries were easier to understand, despite being sillier: just burning the carbon they're feeding us would be more efficient.
It's well worth looking at https://progress.openai.com/; here's a snippet:
> human: Are you actually conscious under anesthesia?
> GPT-1 (2018): i did n't . " you 're awake .
> GPT-3 (2021): There is no single answer to this question since anesthesia can be administered [...]
And also where people believe that others believe it resides. Etc.
If we can find new ways to collectively renegotiate where we think power should reside, we can break the cycle.
We only have time to do this while people are still a significant power factor, but that's still quite some time away.