
itmitica | 3 days ago

... or, maybe, simply use another agent to audit it?

Agent teams sound better than literature-induced confidence.

Are you anthropomorphizing when you should just automate the review?

zotimer | 3 days ago

The readme covers this question and a lot more, and the repo includes all the materials I used.

This is partly questioning the way we do alignment. The 4.6 base persona actually gives me worse results than when I append Daneel to the system prompts.

It's really not about anthropomorphizing or inducing confidence, it's about keying into the right "culture" in the training data.

You can check out this study (mentioned in the readme) about how posing the same question in English and Chinese to the same LLM results in wildly different assessments of why a project failed:

https://techxplore.com/news/2025-07-llms-display-cultural-te...

https://mitsloan.mit.edu/ideas-made-to-matter/generative-ai-...

itmitica | 3 days ago

Again, you are building an audit agent.

You're just wrapping it in some theater.

For that purpose, the best audit agent is a completely different agent, not a different persona of the same one.
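The distinction being argued here can be sketched as a pipeline where the auditor is a separate callable, i.e. a different model behind a different endpoint, rather than the producer re-prompted with a new persona. Both "models" below are stand-in stubs, not real API calls:

```python
# Minimal sketch of a cross-agent audit: one model produces, an
# independent model reviews. The stubs stand in for two different
# providers or weights; function names here are hypothetical.

from typing import Callable

def audit(produce: Callable[[str], str],
          review: Callable[[str], str],
          task: str) -> tuple[str, str]:
    """Generate output with one agent, then audit it with another."""
    draft = produce(task)
    verdict = review(f"Audit this output for errors:\n{draft}")
    return draft, verdict

def producer(prompt: str) -> str:
    # Stub for the generating model.
    return f"DRAFT[{prompt}]"

def auditor(prompt: str) -> str:
    # Stub for an independent reviewing model.
    return f"VERDICT[{prompt}]"
```

The design choice is that `produce` and `review` share no weights, context, or persona, so a blind spot in the producer is less likely to be reproduced by the auditor.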