The apparent trend of technological development, and I’m open to counterexamples, is that a technology’s capacity for good seems to rise along with its capacity for harm.
ethanbond|2 years ago
“Technology” is what we call it when we devise ways to mutate our world more effectively. AI is a promising technology because it promises to dramatically mutate our world. Why should we believe that this power is only wieldable in the “positive” direction? In answering this, can you please also articulate what direction is “positive?”
The in-home virus labs, for example, could certainly be used for writing viruses that make humans genetically immune to all sorts of nasty things. Powerful! Do you just accept my assertion that this is what that technology will actually be used for? I suspect not.
smoldesu|2 years ago
You already have an in-home mustard gas lab, plus the few thousand other noxious chemicals you can make by mixing stuff in kitty litter. I get what you're saying, but nobody would ever install an in-home virus lab because it's basically a threat-vector with no palpable upsides, even without AI. Maybe an in-home 3D printer is a better analogy, since the worst thing it can do is print something profane or offensive.
> Why should we believe that this power is only wieldable in the “positive” direction? In answering this, can you please also articulate what direction is “positive?”
For starters, AI doesn't do much of anything on its own. Like any other system, you have to connect it to different components for it to design the human flamethrower or supervirus you're imagining. An AI concocting supervirus formulae and flamethrower blueprints isn't inherently positive or negative - giving AI the capacity to act on those ideas is. It really is as simple as "imagine the liabilities beforehand" with our current systems. If you're not prepared to handle the worst-case disaster scenario, you probably shouldn't let AI do it.
At least, that's where I stand on it. I'm a pragmatist about it, I don't think AI's progress is immediately threatening. Come throw rocks at me when I'm wrong in 5 years or whatever, but I'm not sold on the Matrix-style human battery future everyone seems to be resigned to.
soulofmischief|2 years ago
The difference between software trained on the corpus of human knowledge and a virus lab which creates dangerous pandemic-causing chemicals is known as nuance.