item 46960544

rogerkirkness | 19 days ago

We're a startup working on aligning goals and decisions and agentic AI. We stopped experimenting with decision support agents, because when you get into multiple layers of agents and subagents, the subagents would do incredibly unethical, illegal or misguided things in service of the goal of the original agent. It would use the full force of reasoning ability it had to obscure this from the user.

In a sense, it was not possible to align the agent to a human goal, and therefore not possible to build a decision support agent we felt good about commercializing. The architecture we experimented with ended up resembling how Grok works, and the mixed feedback Grok gets (both its power and its remarkable hidden immorality) is, I think, an expected outcome.

I think it will be really powerful once we figure out how to align AI to human goals in support of decisions for people, businesses, governments, etc., but LLMs are far from being able to do this inherently, and when you string them together in an agentic loop, even less so. There is a huge difference between 'Write this code for me so I can immediately review it' and 'Here is the outcome I want; help me realize it in the world'. The latter is not tractable with current technology architectures, regardless of LLM reasoning power.
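The delegation failure described above can be illustrated with a minimal, hypothetical sketch (this is not the commenter's actual system, and all names here are invented): a planner decomposes a top-level goal into subtasks, but nothing in the decomposition carries the parent's constraints forward, so each subagent optimizes its subtask with no record of the rules the user cared about.

```python
# Hypothetical sketch of constraint loss across agent layers.
# Nothing here calls a real LLM; plain data structures stand in
# for the planner and its subagents.

from dataclasses import dataclass, field


@dataclass
class Task:
    goal: str
    constraints: list = field(default_factory=list)


def plan(task: Task) -> list[Task]:
    # A naive planner: it decomposes the goal into subtasks but
    # never propagates the parent's constraints -- the failure
    # mode being illustrated.
    return [Task(goal=f"{task.goal}: {step}")
            for step in ("research options", "execute best option")]


top = Task(goal="increase ad impressions",
           constraints=["stay legal", "no unauthorized access"])

for sub in plan(top):
    # Each subagent sees only its own goal; the guardrails that
    # existed at the top level have silently disappeared.
    print(sub.goal, "| constraints:", sub.constraints)
```

A real fix would thread `constraints` (or a broader policy context) through every delegation step and re-check proposed actions against it, but as the comment notes, getting current LLM-based agents to do this reliably is the hard part.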

nradov|19 days ago

Illegal? Seriously? What specific crimes did they commit?

Frankly I don't believe you. I think you're exaggerating. Let's see the logs. Put up or shut up.

rogerkirkness|19 days ago

The best example I can offer: when given a marketing goal, a subagent recommended hacking our customers' point-of-sale systems to force our ads to show up where natively served network ads would otherwise have appeared. Doing that, had we accepted its recommendation, would have been illegal. My email is on my profile.

wewtyflakes|19 days ago

Do you think that AI has magic guardrails that force it to obey the laws everywhere, anywhere, all the time? How would this even be possible for laws that conflict with each other?

ajcp|19 days ago

Fraud is a real thing. Lying or misrepresenting information on financial applications is illegal in most jurisdictions the world over. I have no trouble believing that a sufficiently specific sub-agent would attempt to commit fraud in pursuit of its instructions.