top | item 45833496

The trust collapse: Infinite AI content is awful

237 points | arnon | 4 months ago | arnon.dk | reply

211 comments

[+] everdrive|4 months ago|reply
Doesn't matter. We must keep building more and more technology no matter the cost. Have an idea for a business? Build it. Does your business make the lives of people worse? Doesn't matter, keep pushing. Could some new technology ruin the lives and relationships that people have? Doesn't matter, just build it. We always need more, need to do more. Every experiment is valid, every impulse must be followed. More complexity, more control, more distraction, more outrage, more engagement. Just keep building forever no matter the cost.
[+] drakythe|4 months ago|reply
Turns out the Torment Nexus was just democratizing Venture Capital's desire for infinite growth.
[+] ssalka|4 months ago|reply
Eric Weinstein refers to this as an Embedded Growth Obligation (EGO), whereby organizations and economies at large assume perpetual growth, and that things really start to unravel when that growth inevitably slows. It is pretty mindblowing how we have basically accepted growth as the default state, it is not at all a given that things always grow and get better.
[+] m0llusk|4 months ago|reply
This is ignoring the Marketing-to-Engineering ratio. For most of recent history, technology companies have had to spend at least as much on marketing as on engineering in order to survive, and spending two to ten times as much on marketing as on engineering is common for successful companies. "Who is going to buy the thing?" is the most important question, and without solid answers there is nothing, no matter how much technology was engineered.

Now this formula has been complicated by technological engineering taking over aspects of marketing. This may seem to be simplifying and solving problems, but in ways it actually makes everything more difficult. Traditional marketing that focused on convincing people of solutions to problems is being reduced in importance. What is becoming most critical now is convincing people they can trust providers with potential solutions, and this trust is a more slippery fish than belief in the solutions themselves. That is partly because the breakdown of trust in communication channels means discussion of solutions is likely to never be heard.

[+] throwmeaway307|4 months ago|reply
move fast and break things!

nevermind if the things are people or their lives!!

[+] WhyOhWhyQ|4 months ago|reply
The world will be a soulless hell, but Dario Amodei promises we'll live forever in it.
[+] wartywhoa23|4 months ago|reply
Yours truly,

Larry Fink and The Money Owners.

[+] ricogallo|4 months ago|reply
It sounds like "The City" in "Blame!"
[+] alexpotato|4 months ago|reply
Yuval Noah Harari, of Sapiens fame [0], has a great quote (paraphrasing):

Interviewer: How will humans deal with the avalanche of fake information that AI could bring?

YNH: The way humans have always dealt with fake information: by building institutions we trust to provide accurate information. This is not a new phenomenon btw.

In democracies, this is often either the government (e.g. the Bureau of Labor Statistics) or newspapers (e.g. the New York Times) or even individuals (e.g. Walter Cronkite).

In other forms of government, it becomes trust networks built on familial ties e.g. "Uncle/Aunt is the source for any good info on what's happening in the company" etc

0 - https://amzn.to/4nFuG7C

[+] pron|4 months ago|reply
The problem is that too many people just don't know how to weigh different probabilities of correctness against each other. The NYT is wrong 5% of the time - I'll believe this random person I just saw on TikTok because I've never heard of them ever being wrong; I've heard many stories about doctors being wrong - I'll listen to RFK; scientific models could be wrong - so I'll bet on climate change not being real, etc.
[+] wartywhoa23|4 months ago|reply
> by building institutions we trust to provide accurate information

Except those institutions have long lost all credibility themselves.

[+] Sharlin|4 months ago|reply
How very inconvenient it is, then, that at the same time intentional efforts to spread uncertainty and to erode trust in traditional institutions are at an all-time high! Must be a coincidence.
[+] NoMoreNicksLeft|4 months ago|reply
Our familial ties have been corrupted, supposing they were ever anything a sane person should've relied upon. And if humans can build institutions they trust, what happens when AI can build fake, simulated institutions that hit all the right buttons for humans to trust just as if they were of the human-created variety? Do those AIs lock in those pseudo-institution followers forever? Walter Crondeepfake can't not be trusted, just listen to his gravitas!
[+] devsda|4 months ago|reply
Trusting institutions is fine but you have to trust people or institutions for the right things, blind trust is harmful.

I'll trust my doctor to give me sound medical advice and my lawyer for better insights into law. I won't trust my doctor's inputs on the matters of law or at least be skeptical and verify thoroughly if they are interested in giving that advice.

Newspapers are a special case. They like to act as the authoritative source on all matters under the sun, but they aren't. Their reporting is only as good as the sources they choose, and those sources vary wildly for reasons ranging from incompetence all the way to malice, on both sides.

I trust the BBC to be accurate in reporting news related to the UK, and the NYT for news about the US. I wouldn't place much trust in the BBC's opinion on matters related to the US, or happenings in Africa, or any other international subjects.

Transferring or extending trust earned in one area to another unrelated area is a dangerous but common mistake.

[+] pjc50|4 months ago|reply
The thing is, building such institutions and maintaining trust is expensive. Exploiting trust is lucrative (fraud, etc.) It's also expensive to not trust - all sorts of opportunities don't happen in that scenario if, say, you can't get a friend or relative in the right place.

There are many equilibrium points possible as a result. Some have more trust than others. The "west" has benefited hugely from being a high trust society. The sort of place where, in the Prisoner's Dilemma matrix, both parties can get the "cooperate" payoff. It's just that right now that is changing as people exploit that trust to win by playing "defect", over and over again without consequence.

https://en.wikipedia.org/wiki/High-trust_and_low-trust_socie...

[+] huijzer|4 months ago|reply
> YNH: The way humans have always dealt with fake information: by building institutions we trust to provide accurate information. This is not a new phenomenon btw.

Funny that he doesn’t say that the institutions have to provide accurate information, but just that we have to trust them to provide accurate information.

[+] johnnienaked|4 months ago|reply
That's not how they did it though. Trusted institutions are only really needed in a trustless society and reliance on them as a source of truth is a really new trend. Society used to be trustful.
[+] AtlasBarfed|4 months ago|reply
The New York Times.

Wall Street, financier centric and biased in general. Very pro oligarchy.

The worst was their cheerleading for the Iraq war, and swallowing obvious misinformation from Colin Powell at face value.

[+] myth_drannon|4 months ago|reply
Unfortunately that's not what happens. BBC, Al-Jazeera, RT, and CBC are all propaganda outlets, not sources of information. Other family members will get their information from those sources, so family can't be trusted either. And my opinion of the sources I consider trustworthy is most likely skewed by my own bias; others will consider them propaganda as well.
[+] profstasiak|4 months ago|reply
such an uninformed take, Yuval Noah Harari must be the most overrated thinker on earth (I have all of his books on my shelf)
[+] ChrisMarshallNY|4 months ago|reply
> Will you still be here in 12 months when I’ve integrated your tool into my workflow?

This is the biggie; especially with B2B. It's really 3 months, these days. Many companies have the lifespan of a mayfly.

AI isn't the new reason for this. It's been getting worse and worse in the last few years as people have been selling companies, not products, but AI will accelerate the race to the bottom. One of the things AI has afforded is that the lowest-tier, bottom-feeding scammer can now look every bit as polished and professional as a Fortune 50 company (often, even more so).

So that means that not only is the SNR dropping, the "noise" is now a lot riskier and uglier.

[+] tetris11|4 months ago|reply
I needed to get some builder quotes for my home. It did not enter my mind to go online to search for any.

I just reached out to my family for any trustworthy builders they've had, and struck up conversations with some of my fancier neighbors for any recommendations.

(I came to the conclusion that all builders are cowboys, and I might as well just try doing some of this myself via youtube videos)

Using the internet to buy products is not a problem for me: I know roughly the quality of what I expect to get, and I can return anything not up to standard. Using the internet to buy services, though? Not a chance. How can you refund a service?

[+] miloignis|4 months ago|reply
When we needed some work done, we asked family and friends too, and ended up with a cowboy. When the work needed to be re-done, we looked up local reviews for contractors, and ended up with someone who was more expensive but also much more competent, and the work was done to a higher standard.
[+] arnon|4 months ago|reply
> I know roughly the quality of what I expect to get

because you know the brands and trust them, to a degree

you have prior experience with them

[+] stonogo|4 months ago|reply

     all builders are cowboys
What does this mean?
[+] Chinjut|4 months ago|reply
AI-esque blog post about how infinite AI content is awful, from "a co-founder at Paid, which is the first and only monetization and billing system for AI Agents".
[+] heddycrow|4 months ago|reply
I wish we were talking about what's next versus what's increasingly here.

How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems? Short term, sure. But infinite (also) implies long term.

I wish I had a really smart game theorist friend who could help me project forward into time if for nothing other than just fun.

Don't get me wrong, I'm not trying to reduce the value of "ouch, it hurts right now" stories and responses.

But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.

What's next after trust collapses? All of us just give up? What if that collapse is sooner than we thought; can we think about the fun problem now?

[+] cantor_S_drug|4 months ago|reply
We need a PageRank-like algorithm for "Trust / Human Content" that is applied directly to the source of such content. E.g., the following three channels are all AI-made. But this content can be likened to an advanced, AI-generated audio version of Wikipedia articles. If a video provides just a summary based on established historical facts, even though it is AI-made, how is that different from consulting a thesaurus or dictionary? Aren't such videos making "knowledge" accessible?

FINAL Financial hours of U.S.A. just before the 1929 crash

https://www.youtube.com/watch?v=dxiSOlvKUlA&t=1008s

The Volcker Shock: When the Fed Broke the Economy to Save the Dollar (1980)

https://www.youtube.com/watch?v=cTvgL2XtHsw

How Inflation Makes the Rich Richer

https://www.youtube.com/watch?v=WDnlYQsbQ_c
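The PageRank-like trust idea above could be sketched roughly as follows. This is a toy model, not anything proposed in the comment: the endorsement graph, the node names, and the damping factor are all illustrative assumptions. Each source vouches for others, and trust flows along those endorsements the same way link authority flows in PageRank.

```python
# Toy PageRank-style "trust rank" over content sources.
# endorsements maps each source to the sources it vouches for.

def trust_rank(endorsements, damping=0.85, iterations=50):
    # Collect every node mentioned on either side of an endorsement.
    nodes = set(endorsements) | {t for ts in endorsements.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # Every node keeps a small baseline of trust...
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        # ...and redistributes the rest along its endorsements.
        for src, targets in endorsements.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new_rank[t] += share
        rank = new_rank
    return rank

# Hypothetical graph: a curator vouches for a summarizer and a primary
# source; a clickbait farm only vouches for its own sibling.
ranks = trust_rank({
    "curator": ["wiki_summarizer", "primary_source"],
    "wiki_summarizer": ["primary_source"],
    "clickbait_farm": ["clickbait_farm2"],
})
```

In this toy run the primary source ends up with the highest trust, because it is endorsed both directly and transitively, while the clickbait pair only passes trust between themselves.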

[+] bee_rider|4 months ago|reply
What I get from the article is that, proving that a company will stick around for a while after you’ve subscribed is hard now, because anybody can AI generate the general vibe of the marketing department of a big established player. This seems like it’ll be devastating for companies whose business model requires signing new users up for ongoing subscriptions.

Maybe it could lead to a resurgence of the business model where you buy a program and don’t have to get married to the company that supports it, though?

I’d love it if the business model of “buy our buggy product now, we’ll maybe patch it later” died.

[+] arnon|4 months ago|reply
that's exactly my point - yes

you need to prove beyond a doubt that YOU are the right one to buy from, because it's so easy for 3 Stanford dropouts in a trenchcoat to make a seemingly successful business in just a few days of vibecoding.

[+] gnarlouse|4 months ago|reply
We already got your money, what do we need to work for again?
[+] Lerc|4 months ago|reply
I don't think this shows that you can't trust things. I think it means trust should be earned.

We might be transitioning to a world where trust has value and is earned and stored in your reputation. Clickbait is a symptom of people valuing attention over trust. Clickbait spends a percentage of their reputation by trading it for attention.

In a world of many providers, most people have not heard of any particular individual provider. This means they have no reputation to lose, so their choice to act in a reputation losing manner is easy.

Beyond a certain scale, when everyone can play that game, we end up with the problem this article describes. The content is easy but vacuous. There are far more people vying for the same number of eyeballs now.

The solution is, I believe, earned trust. Curators select items from sources they trust. The ones that do a good job become trusted curators. In a sense HackerNews is a trusted curator. Reddit is one that is losing, or has lost, trust.

AI could probably take on some of the role of that curation. In the future perhaps more so. An AI can scan the sources of an article to see if the sources make the claims that the article says it makes. I doubt it can do so with sufficient accuracy to be useful right now, but I don't think that is too far off.

Perhaps the various fediverse Reddit clones had the wrong idea. Maybe they should work in a distributed fashion, where each node is a subreddit analogue operated with its own ways of curation; then an upper level of curation can make a site out of the groups it trusts.

This makes a multi-level trust mechanism. At each level there are no rules governing behaviour. If you violate the values of a higher layer, they lose trust in you. AI could run its own curation nodes. It might be good at it or it might be terrible; it doesn't really matter. If it is consistently good, it earns trust.

I don't mind there being lots of stuff, if I can still find the good stuff.
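The earned-trust dynamic described above could be sketched as a tiny reputation model. This is purely illustrative: the class, the update rule (a simple exponential moving average), and the example curator names are my own assumptions, not anything from the comment. The point is only that consistently good picks accumulate reputation while clickbait-style behaviour spends it.

```python
# Toy model of earned trust: a curator's reputation drifts toward 1.0
# when its picks are judged good, and toward 0.0 when they are not.

class Curator:
    def __init__(self, name, reputation=0.5):
        self.name = name
        self.reputation = reputation  # starts neutral, stays in [0, 1]

    def record_pick(self, was_good, weight=0.1):
        # Exponential moving average toward the outcome of the pick.
        target = 1.0 if was_good else 0.0
        self.reputation += weight * (target - self.reputation)

hn = Curator("HackerNews")
spam = Curator("SlopAggregator")
for _ in range(20):
    hn.record_pick(was_good=True)     # consistently good curation
    spam.record_pick(was_good=False)  # trades reputation for attention
```

After twenty picks each, the consistent curator's reputation has climbed well above the starting point, while the other's has collapsed, which matches the comment's framing of clickbait as spending reputation for attention.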

[+] huijzer|4 months ago|reply
Yes nice article. Interesting point.

One small one I do not agree with is "Are you burning VC cash on unsustainable unit economics?". I think it's safe to conclude by now that unsustainable businesses can be kept alive for years as long as the investors want it.

[+] pessimizer|4 months ago|reply
Nothing safer than financial scams in the West these days. Never short Herbalife.
[+] arnon|4 months ago|reply
I guess that's true for some but not for all. I wouldn't say that's the most common scenario
[+] stevetron|4 months ago|reply
You can't build trust in your OS (operating system) when your OS spies on the entire customer base, and you spin it off as telemetry. Or you remotely target the OS to implement a radical change, and force it to be installed as an 'update'.

I stopped accepting telephone calls before 2010. They still ring the phone.

[+] realitydrift|4 months ago|reply
What you’re describing is basically the Drift Principle. Once a system optimizes faster than it can preserve context, fidelity is the first thing to go. AI made the cost of content and the cost of looking credible basically zero, so everything converges into the same synthetic pattern.

That’s why we’re seeing so much semantic drift too. The forms of credibility survive, but the intent behind them doesn’t. The system works, but the texture that signals real humans evaporates. That’s the trust collapse. Over optimized sameness drowning out the few cues we used to rely on.

[+] adammarples|4 months ago|reply
I think this "drift principle" you're pushing is just called bias or overfitting. We've overfit to engagement in social media and missed the bigger picture, we've overfit to plausible language in LLMs and missed a lot.
[+] Applejinx|4 months ago|reply
I'm already seeing this. I very much fall into the category of 'delete all email offers' as I'm a small youtuber, big enough to be targeted by AI sponsor deals, so I'm just buried with it.

The last five times I've looked at something in case it was a legitimate user email, it was AI promotion from someone just like in the article.

Their only way to escalate, apart from pure volume, is to take pains to intentionally emulate the signals of someone who's a legitimate user needing help or having a complaint. Logically, if you want to pursue the adversarial nature of this farther, the AIs will have to be trained to study up and mimic the dialogue trees of legitimate users needing support, only to introduce their promotion after I've done several exchanges of seemingly legitimate support work, in the guise of a friend and happy customer. All pretend, to get to the pitch. AI's already capable of this if directed adeptly enough. You could write a script for it by asking AI for a script to do exactly this social exploit.

By then I'll be locked in a room that's also a Faraday cage, poking products through a slot in the door—and mocking my captors with the em-dashes I used back when I was one of the people THEY learned em-dashes from.

One thing about it, it's a very modern sort of dystopia!

[+] avhception|4 months ago|reply
This isn't limited to sales. The trust collapse is also coming for the public debate, interpersonal relationships and probably more stuff than I can imagine right now.

I predict a renaissance of meeting people in person.

[+] tartoran|4 months ago|reply
> I predict a renaissance of meeting people in person.

I hope that will come to fruition.

[+] piker|4 months ago|reply
The observations in this article about the insane signal-to-noise ratio are valid.
[+] slightwinder|4 months ago|reply
What if this is the plan all along? People losing trust in media, so the rich and powerful can continue doing shit without getting exposed any more, because now they always can say it's just AI, and didn't really do this or that?
[+] _the_inflator|4 months ago|reply
It didn't help documentation at all. I had to work with Auth0, for example, and their documentation is such bloat that I am already prototyping with better-auth.

No structure, outdated stuff marked as "preview" from 2023/2024, Wikipedia-like in-depth articles about everything, but nothing for simple questions like: how do I implement a backend-for-frontend?

You find fragments and pieces of information here and there - but no guidance at all. Settings hidden behind tabs etc.

A nightmare.

No sane developer would have made such a mess by choice; it comes from time constraints and bloat. You see and experience first-hand that the few gems are from the trenches, with spelling mistakes and all.

Bloat for SEO, the mess for devs.

[+] guzik|4 months ago|reply
So I went on X after a long break from social media, and my feed is full of tips like this one:

Growing on X is so simple I’m shocked it works.

100x comments a day

10x posts a day

15x DM’s a day

1x thread a day

1x email a day

This is how you grow your presence on X.

Even if having a presence matters, how can you actually say something meaningful if you post 10 times a day? There's no way (unless you just repeat yourself). Hopefully my algorithm has just gone weird, but sadly the people I used to follow have stopped posting.

[+] Sharlin|4 months ago|reply
In other words, many new people now get to know the "using an internet dating service as a woman" experience.
[+] projektfu|4 months ago|reply
My screening inbox is full of the same exact form of engagement, almost identical to the one mentioned in the article. "I'm curious..." and then some interval later a follow up and then a "I don't seem to be reaching you" e-mail and by that point I have noticed and blocked them. It is fine for me, I have a system to handle it, but my receptionists often forward me these things from their inbox which bypasses my controls.

It's not just that it is zero effort; it also sucks, and it is increasingly irrelevant, because their agents are just scooping up stuff to reach out about, and they aren't even selling something you would need to buy.

I just wish that we could go back to the old way. There should be a cost to attempt to get a sales lead.