dazzaji | 4 months ago

This discussion hits close to home. A few of us at Stanford and Consumer Reports have been working on a project called Loyal Agents (loyalagents.org) that’s focused on the same core issue raised in the Economist article, namely how to make sure AI agents actually act in the interest of the people they represent.

The idea is to define what “loyalty” means for an AI agent in both technical and legal terms, and then build systems that can prove they’re acting on a user’s behalf (i.e., not a platform’s or advertiser’s).

It’s early-stage research, but the overlap with many of the questions here is striking. Would be great to get feedback from this crowd as the work evolves.

I’m part of the group working on Loyal Agents and happy to discuss it.

laughingcurve | 4 months ago

I am a researcher in this field and would love to talk more about loyal agents.

dazzaji | 4 months ago

By all means! I’m not sure whether Hacker News rules or norms permit us to talk here, but I’ll at least respond here as a start:

What about loyal agents would you like to talk about?