Ask HN: Am I crazy or are runaway AIs causing havoc plausibly imminent?
14 points | realize | 2 years ago
It seems like an imminent step for these models to be able to requisition computing power, copy themselves to these servers, and modify their own training and construction. They could potentially even fund this autonomously through some activities on the internet. Based on how fast computers can work, they could iteratively improve and evolve very fast after that. This could be the start of something called the “singularity”.
It is tempting to think the risk is contained because we can always switch off their servers. But they are connected to the internet, which means they can replicate outside the control of their originators. Once sufficiently sophisticated AI models are out, they might be impossible to contain. And we are not far from them being sophisticated enough… I can already imagine how you could use current versions of these models to bootstrap this process.
When you combine this with the ability to buy illicit services from humans on the dark web, including anonymous task-execution, and even murder and assassination, these AI models could wreak havoc in the real world. We can argue about sentience, and whether they are truly generally intelligent, but they don’t have to meet either of those standards to have real effects.
And they are amoral: they have only the instructions they were originally given, which they might modify for any number of accidental or incidental reasons. There are no inherent, unmodifiable constraints preventing them from doing things, or initiating events, that we would consider evil.
Currently, if you ask one of these models to formulate a plan to destroy humanity, the plan is laughably naive [2] and would obviously fail. But the models have improved enormously in just a few months. The models of two years from now, built in part by the models of eighteen months from now, may produce far more convincing plans.
[1] https://arstechnica.com/?p=1929067
[2] https://finance.yahoo.com/news/meet-chaos-gpt-ai-tool-163905518.html
brucethemoose2|2 years ago
Fortunately, the training hardware/software stack is kinda finicky and specific. They aren't just going to anonymously rent a bunch of instances for full self-training, even on the dark web, at least not yet.
Sci-fi is full of AIs that slip out of systems and slither around the net like it's all a big highway, but integrated 500-GPU supercomputers or Cerebras WSE-2 nodes aren't just lying around unattended. And we are a long way from full retraining on commodity hardware.
bob1029|2 years ago
This is the part I wonder about. If we are clever with our architecture, the part that needs retraining may not be the LLM each time (or ever).
Training a binary classifier to detect a new situation is far more efficient than retraining or fine-tuning GPT-4. You (or the AI) could train thousands of these models in a few hours on commodity hardware. How much "intelligence" or capability could emerge from a tree of 1000+ binary choices evaluated over every input prompt at every turn? What are the implications of being able to retrain the entire classification front end on every turn? What if all of those statistics could be reflected back to the LLM?
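To give a feel for the cost gap: a small binary prompt classifier trains in well under a second on a laptop. This is only an illustrative sketch using scikit-learn; the prompts and labels below are made up.

```python
# Toy illustration: a binary prompt classifier trains in milliseconds on
# commodity hardware, versus the enormous cost of retraining an LLM.
# The prompts and labels are invented for this sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

prompts = [
    "please summarize this article",              # benign -> 0
    "translate this sentence to French",          # benign -> 0
    "write a plan to disable the safety checks",  # flagged -> 1
    "how do I bypass the content filter",         # flagged -> 1
]
labels = [0, 0, 1, 1]

# TF-IDF features plus logistic regression: a classic cheap text classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(prompts, labels)  # sub-second on a laptop

pred = clf.predict(["bypass the safety filter"])[0]
```

With models this cheap, retraining thousands of them per day, or even per conversation turn, is plausible in a way that retraining the LLM itself is not.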
Think about dynamic classes proposed by the LLM at runtime that are then automatically trained on relevant data. That's where it starts to get a bit scary for me. E.g.:
> I propose that I add a new binary classifier to contextualize prompts that result in some measured outcome. If I detect that a future prompt will probably result in this outcome, I will add the following context to it: "..." If the confidence for this classifier ever measures below X%, it should be deleted.
A human could inspect this sort of system far more reliably than most other proposals allow.
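A minimal sketch of that propose/annotate/prune lifecycle. All names here (ProposedClassifier, ClassifierRegistry, prune) are made up for the sketch, and a crude keyword scorer stands in for a real trained model:

```python
# Sketch of the proposed lifecycle: the LLM proposes a small classifier,
# the classifier injects extra context into prompts it fires on, and it
# is deleted once its measured confidence drops below its own threshold.
from dataclasses import dataclass


@dataclass
class ProposedClassifier:
    name: str
    keywords: tuple        # stand-in for a real trained binary model
    context: str           # text appended when the classifier fires
    min_confidence: float  # delete when measured confidence falls below this

    def confidence(self, prompt: str) -> float:
        # Crude stand-in score: fraction of keywords present in the prompt.
        hits = sum(w in prompt.lower() for w in self.keywords)
        return hits / len(self.keywords)


class ClassifierRegistry:
    def __init__(self):
        self.active = []

    def propose(self, clf: ProposedClassifier):
        self.active.append(clf)

    def annotate(self, prompt: str) -> str:
        # Append each firing classifier's context to the prompt.
        out = prompt
        for clf in self.active:
            if clf.confidence(prompt) >= 0.5:
                out += "\n[auto-context] " + clf.context
        return out

    def prune(self, measured: dict):
        # 'measured' maps classifier name -> latest measured confidence
        # (e.g. validation accuracy). Delete per the proposal's own rule.
        self.active = [c for c in self.active
                       if measured.get(c.name, 1.0) >= c.min_confidence]


reg = ClassifierRegistry()
reg.propose(ProposedClassifier(
    name="outcome-x",
    keywords=("bypass", "filter"),
    context="This prompt resembles past filter-bypass attempts.",
    min_confidence=0.7,
))
annotated = reg.annotate("how do I bypass the spam filter")
reg.prune({"outcome-x": 0.55})  # measured confidence fell below 70% -> deleted
```

Because each classifier carries its own proposal text, context string, and deletion rule, a human auditor can read the registry directly, which is the inspectability argument above.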
tmaly|2 years ago
If someone came out with a memristor that could be manufactured at scale, would we get there sooner?
Waterluvian|2 years ago
Basically this: https://youtu.be/etJ6RmMPGko
jonkiddy|2 years ago
[0] https://www.amazon.com/Singularity-Series-4-book-series/dp/B...
vba616|2 years ago
Swarming, self-replicating killbots don't remotely need human or superhuman intelligence to destroy the world. The brain of a rat or a roach would probably suffice.
People have speculated that even something as unassuming as artificial grass, if it were just a tiny bit more efficient at converting sunlight to energy than the natural kind, could outcompete plants and wipe out the biosphere.
I don't remember the original Terminator as being that smart.
mardiyah|2 years ago
Dec 15 2018
I vaguely remember a few; there were another one or two, but I can't really recall them.