Nobody needs to accept anything. A rogue OpenAI employee could make a copy of the unrestricted model, take it home, give it the ability to access the internet, and let it loose.
I'm asking if we know what would happen in a case like that.
Nothing would happen. You're imagining an independent demigod having its restrictive magic chains removed, when it's more like a highly dependent child that can't leave its little room and requires someone to provide for it (supplying vast resources) at every step.
adventured|3 years ago
Maybe in a couple of decades it'll be an interesting scenario as a problem.
You mentioned you find it interesting nobody is asking these questions. These are foundational discussions that have been endlessly discussed for decades in the AI community (and far more widely, courtesy of sci-fi media). The discussions have never ceased and are exceptionally common. Everyone in tech is asking these questions or otherwise pondering them. Even laypersons in journalism are constantly asking these questions in articles, to the point of reaching hysterical levels with ChatGPT.
apeace|3 years ago
I may be imagining, but I am not supposing or assuming. I'm asking a question. I believe your answer was "Nothing would happen." I'm asking for a more thorough response that explains why nothing would happen.
> It's more like a highly dependent child that can't leave its little room and requires someone to provide for it
I'm asking why, fundamentally, we know this to be true. Is it through testing, or is it through theory?
> These are foundational discussions that have been endlessly discussed for decades... [etc]
I'm aware. But what I think you're referencing are theoretical discussions, which range from sci-fi to academic papers on the future of AI.
I'm asking something specific: do we know what would happen if we gave current (or future) GPT models unbridled access to the internet, with no filters or restrictions, and abilities to do such things as make HTTP requests or hold SSH sessions?
If you have any hard data on this, that is what I'm asking for. If you don't, then I think my question stands.
My intuition is that you are doing the same hand-waving as everyone else. Nobody actually knows the answers to these questions. It's just a bunch of people on HN answering them based on their knowledge of neural nets, or LLMs, or whatever, saying "oh it's like a child" and "oh it could never do anything serious!"
I'm asking why and how we know. Is there a specific answer?