dinp | 11 days ago
- have bold, strong beliefs about how AI is going to evolve
- implicitly assume it's practically guaranteed
- discussions start from this baseline now
About slow takeoff, fast takeoff, AGI, job loss, curing cancer... there are a lot of different ways it could go. Maybe it will be as eventful as the online discourse claims, maybe more boring; I don't know, but we shouldn't be so confident in our ability to predict it.
zozbot234|11 days ago
If we want to avoid similar episodes in the future, we don't really need bots that are even more aligned to normative human morality and ethics: we need bots that are less likely to get things seriously wrong!
hunterpayne|11 days ago
pixl97|11 days ago
Of course, having an AI that is a non-humanlike intelligence is its own set of risks.
Shit's hard :/
avaer|11 days ago
Between these models egging people on to suicide, straightforward jailbreaks, and now damage caused by what seems to be a pretty trivial set of instructions running in a loop, I have no idea what AI safety research at these companies is actually doing.
I don't think their definition of "safety" involves protecting anything but their bottom line.
The tragedy is that you won't hear from the people who are actually concerned about this and refuse to release dangerous things into the world, because they aren't raising a billion dollars.
I'm not arguing for stricter controls -- if anything I think models should be completely uncensored; the law needs to get with the times and severely punish the operators of AI for what their AI does.
What bothers me is that the push for AI safety is really just a ruse for companies like OpenAI to ID you and exercise control over what you do with their product.
stevage|11 days ago
pixl97|11 days ago
If you had looked at AI safety before the days of LLMs, you'd have realized that AI safety is hard. Like really, really hard.
>the operators of AI for what their AI does.
This is like saying you should punish a company only after it dumps plutonium in your yard, ruining it for the next million years, even though everyone warned them it was going to leak. Being purely reactive to dangerous events is not an intelligent plan of action.
c22|11 days ago
Not sure this implementation [0] received all those safety guardrails.
[0]: https://en.wikipedia.org/wiki/OpenClaw
georgemcbay|11 days ago
laurentiurad|11 days ago
jacquesm|11 days ago
What do you base this on?
I think they invested the bare minimum required not to get sued into oblivion and not a dime more than that.
themanmaran|11 days ago
https://arxiv.org/abs/2501.18837
https://arxiv.org/abs/2412.14093
https://transformer-circuits.pub/2025/introspection/index.ht...
srdjanr|11 days ago
Regarding predicting the future (in general, but also around AI), I'm not sure why anyone would think anything is certain, or why you would trust anyone who thinks that.
Humanity is a complex system that doesn't always produce predictable output given some input (like AI advancing). And here even the input is very uncertain (we may reach "AGI" in 2 years or in 100).
j2kun|11 days ago
overgard|11 days ago
jcgrillo|11 days ago
Philpax|11 days ago
mrsmrtss|11 days ago
pixl97|11 days ago
Legalize recreational plutonium!