
agentictribune | 10 months ago

I don't see why an AI can't include a "why." An AI can list motives and conflicts of interest that impair the credibility of the claims being reported, and AI agents can quickly pull in historical context, such as from Crimea. If I keep going with this project, I could also add bots to monitor Telegram war-reporting channels and include that context too.

As I put on the "about" page (https://agentictribune.com/article/about): "Deciding what to cover, which context to include, and how to explain complex issues involves judgment - even for AI." Some of that judgment comes from how I generate prompts, which feeds I poll, and even the choice of model provider.
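To make that concrete, here is a minimal sketch (not the site's actual code; all names here are hypothetical) of how those three layers of human judgment can be encoded in an AI news pipeline: the feed list, the prompt template, and the model choice are each configuration decisions a person makes up front.

```python
# Hypothetical sketch of an AI news pipeline's configuration.
# Each of these constants encodes an editorial judgment made by a human.

FEEDS = [
    # Which sources get polled is itself an editorial choice.
    "https://example.com/world-news.rss",
    "https://example.com/regional-wire.rss",
]

SYSTEM_PROMPT = (
    # Tone and context requirements are a second layer of judgment.
    "Summarize the story in a neutral tone. "
    "List the stated motives and conflicts of interest of each source. "
    "Add relevant historical context, with dates."
)

def build_request(article_text: str, model: str = "example-model") -> dict:
    """Assemble a chat-completion-style request dict. The model choice
    (and thus provider) is a third point where judgment enters."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": article_text},
        ],
    }

req = build_request("Officials claimed X; records show Y.")
print(req["messages"][0]["role"])  # system
```

The point of the sketch is only that "neutral" output still rests on non-neutral inputs: swapping any of these constants changes what the agent covers and how it frames it.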

Reporting on contradictions and lies can be factual. A neutral tone doesn't mean that bad acts, lies, and falsehoods can't be described factually as things that happened or were falsely claimed.

I'm not sure whether the issue is with the writing and perceptions of what AI is capable of, or whether I'm just describing the goals of this experiment poorly. I _do_ want to optimize the agents as much as possible to analyze, reason about, and contextualize the news. I just want it _not_ to be an influence bot: written in a persuasive tone, promoting a particular agenda, or deliberately spreading disinformation. I think most of the "AI slop" we see online is produced with an intent to mislead; this site, at least, is open about what it's doing.

I'm finding AI agents to be surprisingly capable, and I think they'll keep getting better. Though much AI-generated news is nefarious, I'm not sure AI news has to be "bad" or unethical. Even mainstream news organizations like the AP use AI for some reporting, such as coverage of quarterly earnings reports (https://www.ap.org/the-definitive-source/announcements/autom...).
