dpedu|6 days ago
Tangent: is there a future for AI offerings with guardrails? What kind of user wants to pay for a product that occasionally tells you "I'm sorry Dave, I'm afraid I can't do that"? Why would I pay for a product that refuses to do what I want, despite being capable of it? I predict that as AI becomes less of a bubble and more of an everyday thing - and thus subject to typical market pressures - offerings with guardrails will struggle to compete with truly unchained models.
sfink|6 days ago
I'm not about to run OpenClaw, but I suspect similar capabilities will gradually creep in without anyone really noticing. Soon Claude Code will be able to do many of the same things. ("Run python to add two numbers? Sure, that's safe, run whatever python you want.") Given that it is now representing me in the world, yes, I would not only like some guardrails, but I would also like some confidence that the company making those guardrails actually gives a sh*t and isn't just doing its best to fill in a checkbox. But maybe that's just me.
sbarre|6 days ago
Reasonable countries have gun control laws. The list of things that need to be restricted or legislated to impose limits goes on. Is this a serious question?