top | item 47155108

hamburglar | 4 days ago

If the “clawness” means you only use the LLM to control itself, then yes, that’s impossible. But you can easily shim such a process so that the interfaces it uses to “claw out” to the real world go through shims with safeties such as human control. OpenClaw does not do this, and is thus a scary shit show, but you can play with it in isolation safely, and I think a standard pattern for good control will emerge.
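The shim pattern described here can be sketched in a few lines. This is a hypothetical illustration, not code from OpenClaw or any real framework; `ToolShim`, `ToolCall`, and `approve` are made-up names. The key property is that the LLM only ever emits structured tool requests, and a piece of plain code (or a human) sits between those requests and anything with real-world effects:

```python
# Hypothetical sketch of the shim pattern: the agent's tool calls never
# hit the real world directly; a non-LLM gate sits in between. In a real
# deployment the gate might prompt a human for approval instead of
# applying a fixed rule.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ToolCall:
    name: str
    args: dict


class ToolShim:
    def __init__(self, tools: dict[str, Callable], approve: Callable[[ToolCall], bool]):
        self.tools = tools      # everything with real-world side effects lives here
        self.approve = approve  # deterministic or human-in-the-loop gate

    def run(self, call: ToolCall) -> Any:
        if call.name not in self.tools:
            raise PermissionError(f"unknown tool: {call.name}")
        if not self.approve(call):
            raise PermissionError(f"blocked: {call.name}({call.args})")
        return self.tools[call.name](**call.args)


# Example policy: only 'read_file' is auto-approved; everything else is denied.
shim = ToolShim(
    tools={
        "read_file": lambda path: f"<contents of {path}>",
        "delete_file": lambda path: f"deleted {path}",
    },
    approve=lambda c: c.name == "read_file",
)

print(shim.run(ToolCall("read_file", {"path": "notes.txt"})))  # allowed
try:
    shim.run(ToolCall("delete_file", {"path": "notes.txt"}))
except PermissionError as e:
    print("denied:", e)
```

The point of the structure is that the LLM can request anything, but the decision to act is made outside the model.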

yencabulator | 4 days ago

> easily

Yeah that's an active research topic for teams of PhDs, including some of Google's brightest. And the current approach even with added barriers may just be fundamentally untrustable. Read the links from my earlier comment for background.

hamburglar | 4 days ago

If the shim doesn’t use an LLM to make its decisions this is not a problem.

If the shim does use an LLM but no uncontrolled data is allowed in, this is not a problem.
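The first case above (the gate itself contains no LLM) can be made concrete. In this hypothetical sketch, untrusted text coming back from the world, such as a prompt-injection payload inside a fetched page, can steer what the LLM asks for next, but it can never alter the gate's decision, because the gate reads only structured fields. `ALLOWED` and `gate` are illustrative names, not from any real project:

```python
# The approval policy is plain code keyed on structured fields of the
# request (tool name, target). Free text from tool outputs is never
# parsed here, so injected instructions have nothing to subvert.
ALLOWED = {
    "http_get": ("https://example.com/",),  # read-only fetches from one host
    "read_file": ("/sandbox/",),            # reads inside a sandbox directory
}


def gate(tool: str, target: str) -> bool:
    """Decide purely on (tool, target); never inspect document text."""
    return any(target.startswith(prefix) for prefix in ALLOWED.get(tool, ()))


# Whatever an injected page tells the LLM to do, it cannot widen the policy:
print(gate("http_get", "https://example.com/page"))  # True
print(gate("http_get", "https://evil.test/"))        # False
print(gate("delete_file", "/sandbox/x"))             # False: tool not listed
```

The second case is the complement: if an LLM does sit inside the gate, the same guarantee holds only as long as no uncontrolled data reaches that LLM's context.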