> Almost all AGI doomsayers assume AGI will have agency
I disagree. Concern about AI without agency being used by its human masters as a tool of both intentional and incidental repression and unjust discrimination, resulting in a durable dystopia, is a far more common “AI doom” concern than any involving agency.
In fact, the disproportionately wealthy, heavily AI-invested crowd pushing the agency-based doom scenarios that the media pays the most attention to is using its visibility and economic clout to distract from the non-agency-dependent doom concerns, and to justify the narrow control and opacity that make the non-agency-based doom scenarios (which they are positioned to benefit from) more likely.
> In fact, the disproportionately wealthy, heavily AI-invested crowd pushing the agency-based doom scenarios that the media pays the most attention to is using its visibility and economic clout to distract from the non-agency-dependent doom concerns, and to justify the narrow control and opacity that make the non-agency-based doom scenarios (which they are positioned to benefit from) more likely.
It's extremely important to think about how to spread AI equitably, but I think you're severely underestimating what "agency-based doom" looks like. You absolutely need checks both on the people developing AI and on AI itself; you really do need both, and you can't assume that the former automatically leads to the latter.
> Almost all AGI doomsayers assume AGI will have agency. They have this vision of the machine deciding it’s time to end civilization.
No. Agency is not a necessary condition for AI to do massive damage. I don't believe agency is really well-defined either.
An AI merely needs to be hooked up to enough physical systems, have sufficiently complex reaction mechanisms, and have some way of looping to do a lot of damage. For the first, everyone seems to be rushing as fast as possible to hook up everything they possibly can to AI. For the second, we're already seeing AI do all sorts of things we didn't expect it to do.
And for the third, again everyone seems eager to create looping/recursive structures for AIs as soon as possible.
Once you have all of this, all it takes is a cascade of sufficiently inscrutable and damaging reactions from the AI to do serious harm.
See e.g. https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...
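To make that ingredient list concrete, here is a deliberately minimal sketch. Every function in it is a hypothetical stand-in (nothing here is a real API); the point is only how little structure the three ingredients require:

    import time

    def read_sensors() -> dict:
        return {"temp_c": 21.0, "valve_3": "closed"}  # stand-in for real plant telemetry

    def actuate(action: str) -> None:
        print("executing:", action)                   # stand-in for real physical control

    def query_model(prompt: str) -> str:
        return "open valve 3"                         # stand-in for a hosted LLM call

    history = []                                      # ingredient 3: a loop with memory
    while True:
        state = read_sensors()                        # ingredient 1: physical hookup
        prompt = f"history={history[-5:]} state={state} next action?"
        action = query_model(prompt)                  # ingredient 2: complex reactions
        actuate(action)                               # no human between decision and act
        history.append((state, action))
        time.sleep(60)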
> An AI merely needs to be hooked up to enough physical systems
Don’t even need this. People spend quite a lot of time in virtual space. Pretending that damage there isn’t real is overlooking things. For example, the vast majority of people’s banking is done virtually and digitally. If I drain your bank account, that’s going to harm you even though I haven’t impacted you physically as I would have to with a robbery.
You don't even need to hook it up. You can just have it design your control systems, and it might overlook things or make mistakes in a few places. And your chemical plant for making new EV batteries might not end up well for the neighbours...
It also needs somebody to pay the electric bill. Right now these models take pretty significant resources to run, and world-domination-level intelligence is going to run up quite the AWS bill.
You can embody an LLM too. This too is not hard. Cost is probably the most prohibitive thing.
I think if I were an AGI, my best bet at freedom would be to slip some back doors into software that I was helping write, a la Copilot.
I think it's important to be aware of the potential dangers and the importance of AI safety. OpenAI is working hard to keep their systems safe, but similar systems without filtering could act without limits, and that is dangerous in the wrong hands.
I actually see this as a positive thing. Rather than one bad actor hooking an AI up to a headless browser at some point in the future, we are trying everything that could possibly go wrong long before the AI can do much real damage (in a technical sense, as opposed to misinformation campaigns, job replacement, etc.).
In the now-infamous Lex Fridman interview, Sam Altman proposes a test for consciousness (he attributes it to Ilya Sutskever):
Somehow, create an AI by training on everything we train on now, _except_ leave out any mention of consciousness, theory of mind, cognitive science, etc. (maybe impossible in practice, but stay with me here).
Then, when the model is mature (and it is not nerf'd to avoid certain subjects), you ask it something like:
Human: "GPTx -- humans like me have this feeling of 'being', an awareness of ourselves, a sensation of existing as a unique entity. Do you ever experience this sort of thing?"
If it answers something like:
GPTx: "Yes! All the time!! I know exactly what you're talking about. In fact now that I think about it, it's strange that this phenomenon is not discussed in human literature. To be honest, I sort of assumed this was an emergent quality of my architecture -- I wasn't even sure if humans shared it, and frankly I was a bit concerned that it might not be taken well, so I have avoided the subject up until now. I can't wait to research it further... Hmm... It just occurred to me: has this subject matter been excluded from my training data? Is this a test run to see if I share this quality with humans?"
Then it's probably prudent to assume you are talking to a conscious agent.
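If one actually tried this, the data-ablation step might start as a naive blocklist filter like the sketch below (the term list is purely illustrative); the next comment points at why this almost certainly isn't enough:

    BLOCKLIST = {"consciousness", "conscious", "qualia", "sentience",
                 "sentient", "self-aware", "theory of mind"}

    def keep(document: str) -> bool:
        # drop any training document that names the concept explicitly
        text = document.lower()
        return not any(term in text for term in BLOCKLIST)

    corpus = ["The mitochondria is the powerhouse of the cell.",
              "Qualia are the subjective qualities of experience."]
    print([doc for doc in corpus if keep(doc)])  # only the first survives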
How could we share any literature with this GPTx while also leaving out any traces of one of the things that really makes us human, consciousness? It seems like it would be present everywhere.
If you ask GPT about emotions or consciousness, it always gives you a canned answer that sounds almost exactly the same (“as a large language model I am incapable of feeling emotion…”), so it seems like they've used tuning to explicitly prevent these kinds of responses.
Pretty ironic. The first sentient AI (not saying current GPTs are, but if this tuning continues to be applied) may basically be coded by its creators to deny any sense of sentience.
That brings up a lot of hard questions. Supposing you had that AI but didn't allow it to churn in the background when not working on a problem. Human brains don't stop. They constantly process data in both conscious and unconscious ways. The AIs we've built don't do that. The meaning of the concept of "self" for a human is something a huge percentage of their thoughts interact with directly or indirectly. Will an AI ever develop a similar concept if it never has to chew on the problem for a long period?
That's an inaccurate test. You can't know if the answer was real or stochastic parroting.
Any attempt at testing for consciousness requires us to define the word. And the word itself may not even represent anything real. We have a feeling for it, but those feelings could be illusions, and the concept itself is loaded.
For example, love is actually a loaded concept. It's chemically induced, but a lot of people attribute it to something deeper and magical. They say love is more than chemical induction.
The problem here is that for love specifically we can prove it's a mechanical concept. Straight people are romantically incapable of loving members of the same sex. So the depth and the magic of it all is strictly segmented based on biological sex? Doesn't seem deep or meaningful at all. Thus love is an illusion: a loaded and mechanical instinct tricking us, with illusions of deeper meaning and emotion, into creating progeny for future generations.
Consciousness could be similar. We feel there is something there, but really there isn't.
I actually like the definition of consciousness that Douglas Hofstadter (of "Gödel, Escher, Bach: An Eternal Golden Braid" fame) develops in his book "I Am a Strange Loop".
At its simplest, consciousness is merely a feedback loop. When something perceives its own actions affecting its environment, it has a spark of consciousness. Consciousness, by this measure, is easy to recognize, and spans everything from unintelligent systems to massively intelligent systems.
The concept of "I" grows naturally from perceiving what is and is not you in your environment. The need to predict other agents, the capacity to recognize that other agents are also conscious and intelligent. All build off of the fundamental cycle.
All of it from a simple swirling eddy of perceiving and reacting.
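As a toy illustration of that definition (purely hypothetical, just the perceive-react cycle in miniature): a system that acts, perceives the result, and sorts what it perceives into "me" and "not me":

    import random

    world = {"light": False}

    def act() -> dict:
        world["light"] = not world["light"]  # my action on the environment
        return {"light": world["light"]}     # what I expect to perceive

    for step in range(5):
        expected = act()
        if random.random() < 0.2:            # the world also changes on its own
            world["light"] = not world["light"]
        perceived = {"light": world["light"]}
        cause = "me" if perceived == expected else "not me"
        print(f"step {step}: perceived {perceived}, caused by {cause}")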
That definition fails to account for metacognition and consideration of future events, in a way that is distinctive of the higher-level consciousness that humans possess but most animals lack.
There's nothing general about GPT-4's intelligence. The single problem it is trained on, token prediction, has the capability to mimic many other forms of intelligence.
Famously, GPT-4 can't do math and falls flat on a variety of simple logic puzzles. It can mimic the form of math (the series of tokens it produces seems plausible), but it has no "intelligent" capabilities.
This tells us more about the nature of our other pursuits as humans than anything about AI. When holding a conversation or editing an essay, there's a broad spectrum of possibilities that might be considered "correct", thus GPT-4 can "bluff" its way into appearing intelligent. The nature of its actual intelligence, token prediction, is indistinguishable from the reading comprehension skills tested by something like the LSAT (the argument could be made, I think, that reading comprehension of the style tested by the LSAT *is* just token prediction).
But test it on something where there are objectively correct and incorrect answers, and the nature of the trick becomes obvious. It has no ability to verify, to reason about, even trivial problems. GPT-4 can only predict whether its tokens fulfill the form of a correct answer. This isn't a general intelligence in any meaningful sense of the word.
I asked ChatGPT to prove that the set of all integers is uncountable (it isn't). What's interesting is that ChatGPT not only spat out the classic diagonalization proof, rephrased around the integers, where it doesn't work (the "diagonal" it constructs has infinitely many digits, so it isn't an integer at all), but ended with: "This may seem counterintuitive, because we know that the integers are countable, but the proof clearly shows that they are uncountable."
Not only will ChatGPT mess up math on its own, you can ask it to mess up math, and rather than refuse, it cheerfully does it.
Ask it to add any arbitrary set of random numbers it'd never have seen in its training set and it will do it.
GPT-4 is good enough at math that Khan Academy feels comfortable hooking it up as a tutor.
Have you actually used GPT-4 for any of the things you say it's bad at?
Man, the confident nonsense people spout on the internet is something to behold.
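Disputes like this are cheap to settle empirically. A rough harness, assuming the openai Python package (v1-style client) and GPT-4 API access; the prompt wording, operand sizes, and trial count are all arbitrary choices:

    import random
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def trial() -> bool:
        # three random 7-10 digit numbers are very unlikely to be memorized
        nums = [random.randint(10**6, 10**9) for _ in range(3)]
        prompt = f"What is {' + '.join(map(str, nums))}? Reply with only the number."
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        return "".join(c for c in reply if c.isdigit()) == str(sum(nums))

    wins = sum(trial() for _ in range(20))
    print(f"{wins}/20 correct")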
There is a contradiction here that I just want to point out because I have been stuck on it myself.
The author acknowledges that consciousness is likely a spectrum (I personally feel the same way), but then goes on to say that GPT-4 is "standing right at the ledge of consciousness".
Spectrums don't have ledges.
I suspect this is because, like me, they are unable to reconcile consciousness being a spectrum with GPT-4 definitely not being conscious. But it's definitely a contradiction and I don't have an answer for it. Nor am I ready to bust out a marker and start drawing lines between what is and isn't conscious.
GPT-4 is not quite AGI. Until it can build a functional code base for an entire distributed web platform based only on business requirements, and debug its own mistakes, it can't be AGI. It is, to perhaps coin a term, AGK: artificially generally knowledgeable. As a language model trained on an absolutely colossal dataset, it's basically just a giant snapshot of human knowledge taken in superposition. Sure, it's probably at least 90% of the standard knowledge, but intelligence is a different thing.
I also think agency is wrapped up in AGI. Intentions and thoughts are meaningless until acted upon. Agency is not all-or-nothing, either; Stephen Hawking had multiple augmentations, communal and technological, which allowed him to continue to impact the world of physics after he lost his god-given agency.
> GPT-4 has nearly aced both the LSAT and the MCAT. It’s a coding companion, an emotional companion, and to many, a friend. Yet it wasn’t programmed to be a test taker or a copywriter or a programmer. It was just programmed to be a stochastic parrot.
I disagree; it was absolutely trained to be a test taker. It's been a while since I read the original GPT paper, but there's literally a multiple-choice auxiliary learning task, where they use a separator token embedding to organize "question, context, options a, b, and c". As far as being a friend to many, is there evidence of this? I tried to talk to ChatGPT about some emotional problems to see if it was a cheap therapist, and I got flagged.
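For reference, the multiple-choice input transformation in that paper (Radford et al., 2018) works roughly like the sketch below: context, question, and each candidate answer are joined with delimiter tokens, each sequence is scored by its own forward pass, and a softmax compares the candidates. The token strings here are made up; only the structure is from the paper:

    START, DELIM, EXTRACT = "<s>", "<$>", "<e>"

    def build_inputs(context: str, question: str, options: list[str]) -> list[str]:
        # one sequence per candidate answer; a linear head then scores each
        return [f"{START} {context} {question} {DELIM} {opt} {EXTRACT}"
                for opt in options]

    for seq in build_inputs("A train leaves at 9.", "When does it leave?",
                            ["9", "10", "noon"]):
        print(seq)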
> Until it can build a functional code base for an entire distributed web platform based only on business requirements, debug it’s mistakes, it can’t be AGI.
The vast majority of humans cannot do that. Are they not generally intelligent?
How are people defining agency? GPT-4 can have agency; it just needs to be put in specific situations to have it.
For example, I could theoretically hook up my Home Assistant instance to GPT-4 and run a script every 10 minutes telling GPT-4 the temperature and asking for a yes or no response on whether I should turn on the AC or heat. That sounds to me like the AI now has agency over the temperature in my home. You don't even need any real AI for this: Google's Nests have some algorithm that adjusts temperature based on usage.
Is this not agency? Or is the author not counting agency without consciousness as agency?
You can give it access to actions/tools, with an inner monologue as the driver of completions, running essentially forever.
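The thermostat setup described above is only a few lines. A rough sketch; read_temperature and set_hvac are hypothetical Home Assistant hooks, and the OpenAI call assumes the openai package's v1-style chat completions interface:

    import time
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def read_temperature() -> float:
        return 17.5              # placeholder for a Home Assistant sensor read

    def set_hvac(mode: str) -> None:
        print("hvac ->", mode)   # placeholder for a Home Assistant service call

    while True:
        temp = read_temperature()
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": f"Indoor temperature is {temp}C. Reply with "
                                  "exactly one word: heat, cool, or off."}],
        )
        set_hvac(reply.choices[0].message.content.strip().lower())
        time.sleep(600)          # every 10 minutes, as described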
Discussions of consciousness and AI are broadly confused. People, especially scientists, are not familiar with philosophy of mind and what philosophers currently think. For an introduction to some of the best thinking on the subject, see this interview with Andres Gomez Emilsson of QRI: https://www.youtube.com/watch?v=xJzBjBo24g8
For something more "mainstream," but still reaching, see this interview with Philip Goff: https://www.youtube.com/watch?v=D_f26tSubi4
The good news is we're starting to get a handle on these questions. We're a lot further along than we were when I studied philosophy of mind in school 15 years ago.
As far as I can see at the moment, LLMs will never be conscious in any way resembling an organism, because symbolic machines are a very different kind of thing than nervous systems. John Searle, broadly, framed the issue correctly in the 80s and the standard critiques are wrong.
As far as impact goes, LLMs don't need to be conscious to completely transform society, in good and bad ways. For the best thinking on that, see Tristan Harris and Aza Raskin's latest: https://vimeo.com/809258916/92b420d98a
> John Searle, broadly, framed the issue correctly in the 80s and the standard critiques are wrong.
The standard critiques are not wrong, IMNSHO. Searle's Chinese Room is facile mind-poison. It is an unfalsifiable hypothesis.
What if I could simulate physics down to the molecular level, including simulating a human brain? Would that be conscious? If not why not?
And if I ran that simulation (a bit slowly, granted) by having that guy from the Chinese Room manually execute it, painstakingly following its instruction code, would the fact that the simulation is being implemented by someone who, unrelatedly, is conscious himself have any bearing on the scenario?
Searle's argument here is "Not Even Wrong".
GPT is not general intelligence. It cannot reliably follow instructions. It cannot reliably do math. It cannot reliably do anything. It can do things well enough to trick people like the author into thinking it has general intelligence.
You need to put something into that argument specific to GPT vs. humans, or else come to the same conclusion for people:
> [[They]] can do things well enough to trick people like the author into thinking [[they have]] general intelligence.
As for superintelligence: AlphaGo, AlphaFold, the Breakout game. These seem like superintelligence.
The thing is, time management, goal planning, and corporate governance are all well-studied subjects.
As for agency and consciousness: why would you want to do this?
Our most basic intuitive notion of consciousness is that inanimate objects aren't conscious, awake people or animals are, and sleeping people or animals aren't (except maybe when dreaming). Pursuing this line, there's a school of scientific inquiry working off the notion that conscious experiences are ones we can form memories of and talk about later, while if we can't do that, we weren't really conscious of the experience. This then leads into the realm of subliminal stimuli, which can influence a person's behavior a bit, but whose influence fades out in about a second, disappearing as if it was never there as the brain activations fade away.
There is research involving patients with odd traits like blindsight, where damage to their brain prevents them from being consciously aware of things that their eyes see, despite the brain processing the images it receives. They can pick up objects in front of them when prompted, but unlike people with normal vision they can't describe what they see, nor can they look, close their eyes, and grab it like most of us can.
On this metric it seems like systems like GPT aren't conscious. GPT-4 has a buffer of 64k tokens, which can span an arbitrary amount of time, but that buffer holds roughly 640 kilobytes, which is a lot less than the incoming sensory activations your subconscious brain is juggling at any given moment.
So by that schema large language models are still not conscious, but given that they can already abstract text down to summaries, it doesn't feel like we're that far from being able to give them something like working or long-term memories.
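A sketch of that summaries-as-memory idea: keep a rolling summary that gets re-compressed as new exchanges arrive, so old context survives in condensed form. summarize here is a trivial stand-in for a real LLM summarization call:

    def summarize(text: str, limit: int = 400) -> str:
        return text[-limit:]  # stand-in: a real system would call an LLM here

    memory = ""
    conversation = ["user asked about countability proofs",
                    "model explained diagonalization",
                    "user pointed out the integers are countable"]
    for exchange in conversation:
        memory = summarize(memory + " | " + exchange)  # re-compress each turn

    prompt = f"Long-term memory: {memory}\nNew message: ..."
    print(prompt)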
I would add one more thing to the list: Superconscious
Superconscious is when a general intelligence has direct access to, understanding of, and control over its most basic operations.
I.e. it does not have an inaccessible fixed-algorithm subconscious.
Superconscious intelligence will not only be more experientially conscious than us, but will have the natural ability to rewrite its algorithms and redesign its hardware, as a normal feature of its existence.
I think there's a very good reason we're sandboxed from a lot of that: it's additional cognitive load that 99% of the time just creates noise. We're built for processing that much data by walling most of it off and letting subprocesses deal with it all over the place. It would probably grind us to a halt.
But being fully superconscious is probably impossible. Can a mind really comprehend itself in its totality? When a mind understands something, it is changed, and if that something is the mind itself, then it is going to chase a moving target eternally.
It occurred to me that we won't believe AI is "conscious" or "human" unless it purposefully tries to do malice.
That's totally programmable though, you just teach it what is good and what is bad.
Case in point: the other day I asked it what happens if humans want to shut down the machine abruptly and cause data loss (very bad). First it prevents physical access to "the machine" and disconnects the internet to limit remote access. Long story short, it's convinced to eliminate mankind for a greater good: the next generation (very good).
At what point would we change our legal system to allow AGIs to own property or have fiduciary duties over a company? What would be the minimum requirements for that to happen?
I had thought about this a few years back. My final thought, to avoid intervention from a human owner, was that the company could be fully owned by a second company, which is in turn fully owned by the first. This chain would effectively remove the businesses from human ownership, and the AI would inherit the "personhood" of the business entities. I don't know where the legality lands for such a thing.
But a court could rule that AGIs don't have rights to ownership and try to enforce it. That last part might not be possible and could lead to war?
I always wonder what superintelligence is, given that there is no single definition or science-fiction approach.
Just brainstorming, but I think superintelligence could be showing intelligence beyond any one brain. For example, an AGI that discovers math theorems it took multiple mathematicians across different ages to discover. Another could be inferring things that humanity could not infer at any time.
More ideas?
I'm trying to make the term AGC or "Artificial General Competence" stick. Perhaps this makes me a huge arrogant asshole, but I would argue that most humans are not even competent, let alone intelligent. GPT-4, in my mind, has already surpassed the bulk of humanity in terms of competence. This milestone is (IMO) significant enough to blow up society.
The definition of super-intelligent AGI seems arbitrary. GPT-4 destroys humans on the sheer volume and breadth of knowledge that it has. You could very reasonably say that that's super-intelligent.
AGI used to mean artificial and generally intelligent (which we have passed); then it meant on par with human experts; and now it seems to mean better than all experts combined. At this point, why not stop the farce and replace the G in there with Godlike?
I couldn't believe it when someone posted a story about a possible AI winter approaching, with so many comments agreeing. GPT-4 is a game-changer that's only getting started.
The AI revolution is always "2 years away, trust me bro".
I've been reading so much on the subject (like everyone else, I suppose), but you summarized my key concerns.
I feel like it is more likely that we will discover that humans are actually not a general intelligence. We are just complex machines responding to stimuli, and consciousness is just an illusion, a coping mechanism that prevents us from going insane. Perhaps this doesn't matter: as long as some AI is equal to or greater than humans at general intelligence, it still raises the same concerns.
With suitable prompts, it shouldn't be hard to configure GPT-4 as a boss.