This is why my friends and I are setting up a mesh network in our town.
The open internet has been going downhill for a while, but LLMs are absolutely accelerating its demise. I was in denial for the last few years, but at this point I've accepted that the internet I grew up on as a kid in the late 90s to mid 2000s is dead. I am grateful for having experienced it, but the time has come to move on.
The future for people who valued what the early internet provided is local, trusted networks, in my opinion. It's sad that we need to retreat into exclusionary circles, but there are too many people interested in making a buck on the race to the bottom.
This seems like solving the problem at the wrong layer? The issue isn’t the actual network connection between people, it is the content. You could easily create your own forum or something and only include people you trust. You don’t need an entirely separate internet.
I've been looking into building some sort of Wireguard mesh service since many of my friends are distributed all across the world. I wish you the very best in your endeavours!
I think the same thing, came to the same conclusion, and started working on a solution a few months back. It's getting there, I'm just trying to polish up an mp3 player at the moment based on the network, and then I have quite a few plans. Still early days, still very buggy, and I am yet to really announce it, but I'm optimistic that something like this could help a lot. https://github.com/mjdave/katipo
Reminds me of a project idea I had. You'd get a little Raspberry Pi style board with BTLE and battery power (ideally lasting for weeks at a time) and covertly stick it in some communal location, e.g. a cafe or library. Then you'd have it run some local-only forum software and disseminate instructions for connecting to it. The point would be to have a digital community accessible only by direct connection and bound to a physical location by design, kind of in the vein of Community Memory.
It's probably too impractical to work as described, but I think that having a digital space constrained by physical access would be meaningful in a way that internet communities are not. The people you chat with would necessarily be the people in your physical environment, which would make it feel more like a local hangout than the typically vapid social media exchange.
(On further reflection, it would probably be easier to make a mesh network app version of this. Hmm...)
This is a cool idea and sounds like a fun project. That said, I imagine you could accomplish roughly the same thing with an invite only Wireguard network, with the benefit of not being geo-locked.
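For the curious, here is a minimal wg-quick-style sketch of what one member's config in such an invite-only network might look like. Every key, address, and hostname below is a placeholder, not a recommendation:

```ini
# /etc/wireguard/wg0.conf for one member (sketch only; keys/IPs/hosts are made up)
[Interface]
# Each member generates their own keypair and claims a unique address
# in the private overlay subnet.
PrivateKey = <this-member's-private-key>
Address = 10.66.0.2/24
ListenPort = 51820

# One [Peer] block per invited member. "Inviting" someone is just a key
# exchange: their public key goes here, and yours goes into their config.
[Peer]
PublicKey = <friend's-public-key>
AllowedIPs = 10.66.0.3/32
Endpoint = friend.example.org:51820
PersistentKeepalive = 25
```

Since membership is enforced by key exchange rather than by location, the network travels with its members, which is the non-geo-locked benefit.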
It is good to see there are some internet rebels left.
Perhaps AI-Skynet will not win - but they have a lot of money. I think we need to defund those big corporations that push AI onto everyone and worsen our lives.
Tangentially related, I have a hunch, but cannot prove, that prediction markets are the driving force behind a lot of the bad information online, since they essentially monetarily incentivize making people misjudge the state of the world.
There's been a huge uptick in this sort of brigade-like behavior around current events. First noted it around LK-99, that failed room-temperature superconductor in 2023, but it just keeps happening.
Used to be we only saw it around elections and crypto pump and dumps, now it's cropping up in the weirdest places.
That seems really high effort. I assume most events are things which are hard to influence, so at best you are hoping to tilt the wager odds into your favor. Which could backfire if you are betting on the wrong outcome.
How do you misjudge the world based on web articles? If you don't have a proper foundation for where to source your information, you are already doomed.
Interesting theory. I'm inclined to disagree, however. Prediction markets essentially allow people to trade information for money, even the types historically more difficult to trade. There aren't enough people betting on things for deliberate misinformation to become worthwhile, IMO, and most people would stop betting after being wrong too often, unlike casinos, which always let you win sometimes.
I believe the misinformation is largely by self-interested parties. Politicians as well as influencers trying to push agendas, and the engagement/attention farming for advertising revenue, which are largely indifferent to truth.
I've been miserable over the last few weeks after coming to that same conclusion. It's so bad that I doubt the people that were pulling the strings can even tell what's going on anymore.
I thought a lot last night about how we could protect HN, and I didn't come up with a good answer, except that maybe you'll need to have someone with a higher reputation vouch for you, aka invites. My internet community journey has mostly just been irc -> dA -> twitter -> HN. Too frequently these days I feel I might be putting emotional energy into something that isn't human on this site. It's hard to express how that makes me feel, but it's not pleasant at all. Sigh.
We can't. This forum is run by the company that used to be run by Sam Altman and it's already full of people who work in the industry that's driving AI adoption and who use and aggressively believe in AI to the point of religion. There are already bot accounts posting, and humans posting comments filtered by AI. Most Show HNs are vibe coded.
There's nothing anyone can do about it. No matter how many guidelines dang deploys, no matter how much negative social pressure we apply (and we could apply much more but doing so would just run afoul of the tone policing of the guidelines) people will use AI because they want to, and because it's a part of their identity politics, specifically to spite people who don't want to see it. They currently bother to mention when they use ChatGPT for a comment. It's just a matter of time until people don't even bother, because it's so normalized.
The Fediverse is currently good, the culture there is rabidly anti-capitalist and anti-AI. I like Mastodon. But that will eventually, inevitably get ruined as well, and we'll just have to move on to the next thing.
The future of the internet is going to be invite-only enclaves. I sometimes wonder whether anyone is working on the next generation of discussion forums, or if it'll be a return to phpBB.
This will create an interesting problem for newcomers to an area or hobby, since they will have few introduction points. It will work for some people, but exclude others.
They'll instantly become infiltrated with bots and include people based on arbitrary politics. Either the content is such that it makes zero sense to game or spam or it is lost already.
Signal-to-noise ratio is getting lower than ever. I don't see a way out of this other than "human certified" digitally signed authorship (e.g. by using eIDAS in the EU). There could be a proxy to at least retain pseudo-anonymity, but trackable to a human. Tragedy of the commons strikes again.
"Tragedy of the commons" is a false concept that obscures greed, selfishness, and often lawlessness. Even its originator (Hardin) accepted that it does not describe actual history.
It could be interesting to have a search engine that only shows results that have human attestation via digital passports. Of course I'd prefer that to work without necessarily revealing the identity of the poster, similarly to how anonymous "sign-up tokens" for accounts would work, to prevent sybil attacks.
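A toy sketch of how such pseudonymous "sign-up tokens" might work, assuming a trusted attestation authority that verifies each human once, out of band. The class and method names here are invented for illustration; this is not a real protocol:

```python
import hashlib
import hmac
import secrets

class AttestationAuthority:
    """Hypothetical identity provider: verifies a human once (out of band),
    then issues a stable pseudonym so posts can be attested as human-authored
    without exposing the underlying identity. A sketch of the idea only."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # authority's secret MAC key
        self._banned = set()

    def pseudonym_for(self, real_identity: str) -> str:
        # Deterministic per person: one human -> one pseudonym (sybil resistance),
        # but the mapping is only invertible by the authority.
        return hmac.new(self._key, b"pseudonym:" + real_identity.encode(),
                        hashlib.sha256).hexdigest()[:16]

    def attest(self, pseudonym: str, content: str) -> str:
        # Tag a specific post so the tag can't be replayed onto other content.
        digest = hashlib.sha256(content.encode()).digest()
        return hmac.new(self._key, pseudonym.encode() + digest,
                        hashlib.sha256).hexdigest()

    def verify(self, pseudonym: str, content: str, tag: str) -> bool:
        if pseudonym in self._banned:
            return False
        return hmac.compare_digest(self.attest(pseudonym, content), tag)

    def revoke(self, pseudonym: str) -> None:
        self._banned.add(pseudonym)
```

A search engine could then surface only content whose (pseudonym, tag) pair verifies, without ever learning who is behind the pseudonym, while sybils are limited because one identity maps to one pseudonym.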
I recently ran into this while playing the latest Hollow Knight game. Several sloppified sites were obviously trying to tailor mechanics/items of the original game onto the new one. The new release is only ~six months old, so there is just not that much hard content available to reference.
My question is: why? Is it really worth the ad revenue to trick a few people looking into a few niche topics? Say you pick the top 5000 trending movies/music/games and generate fake content covering the gamut. What is the payback period?
>Is it really worth the ad revenue to trick a few people looking into a few niche topics?
Maybe it's problem space exploration via pollution? Said creators of pollution (bullshit asymmetry theory in practice) have very little cost in creating said pollution and there is the possibility of a payback larger than that cost.
If you live in a VLCOL country, and have access to free tooling (via various means) you only need a very small return to make it entirely worth your while.
I had a similar experience when I was looking for YouTube videos on the Intel i7-4790T, a relatively obscure CPU that was only found in small-form-factor pre-built systems during the Haswell era. The only recent videos I found were slop videos [1] narrating a script clearly generated by an LLM, with a link to their Amazon affiliate page in the description. The CPU has never been put on retail sale! These channels upload a dozen times a day on random products just to get an affiliate commission.
It is true that as the cost to construct fake content has gone to zero, we need some kind of scalable trust mechanism to access all this information. I don't yet know what this is but a Web of Trust structure always seems appealing. A lot of people are going to be excluded, but such is life, I suppose.
If I were to be honest, going to where the fish aren't is also going to help. Almost certainly there are very few LLM generated websites on the Gemini protocol.
I'm setting up a secondary archiver myself that will record simply the parts of the web that consent to it via robots.txt. Let's see how far I get.
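A consent check like that can lean entirely on the standard library. Here's a small sketch; the archiver's user-agent string (`ConsentingArchiver`) is of course made up:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical user-agent for the archiver; sites opt in or out of it in robots.txt.
ARCHIVER_AGENT = "ConsentingArchiver"

def may_archive(robots_txt: str, path: str) -> bool:
    """True if this site's robots.txt lets our archiver fetch the given path.
    (A real crawler would fetch https://example.org/robots.txt first;
    here we parse the file contents directly.)"""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(ARCHIVER_AGENT, path)

# A site that consents to archiving of its blog, and nothing else:
example = """\
User-agent: ConsentingArchiver
Allow: /blog/
Disallow: /

User-agent: *
Disallow: /private/
"""
```

With this policy, `may_archive(example, "/blog/post.html")` succeeds while `may_archive(example, "/about")` does not, so the archiver records only what a site has explicitly opted into.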
I think if a Web of Trust becomes common, it will create a culture shift and most people won’t be excluded (compared to invite-only spaces today). If you have a public presence, are patient enough, or a friend or colleague of someone trusted, you can become trusted. With solid provenance, trust doesn’t have to be carefully guarded, because it can be revoked and the offender’s reputation can be damaged such that it’s hard to regain. Also, small sites could form webs of trust with each other, trusting and revoking other sites within the larger network in the same manner that people are vouched or revoked within each site (similar to the town -> state -> government -> world hierarchy); then you only need to gain the trust of an easy group (e.g. physically local or of a niche hobby you’re an expert in) to gain trust in far away groups who trust that entire group.
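As a sketch of those vouch/revoke mechanics (structure and names invented for illustration): treat trust as the existence of a chain of vouches leading back to a root, so revoking one member automatically cuts off everyone whose only chain ran through them:

```python
class WebOfTrust:
    """Toy model: a person is trusted iff a chain of vouches connects
    them back to the founding root member."""

    def __init__(self, root: str):
        self.root = root
        self.vouches: dict[str, set[str]] = {}  # newcomer -> members who vouched

    def vouch(self, member: str, newcomer: str) -> None:
        # Only trusted members can bring someone in.
        if self.is_trusted(member):
            self.vouches.setdefault(newcomer, set()).add(member)

    def revoke(self, member: str) -> None:
        # Remove the member's own standing and every vouch they ever made.
        self.vouches.pop(member, None)
        for backers in self.vouches.values():
            backers.discard(member)

    def is_trusted(self, person: str) -> bool:
        # Breadth-first search from the person up through their backers.
        seen: set[str] = set()
        frontier = {person}
        while frontier:
            if self.root in frontier:
                return True
            seen |= frontier
            frontier = {backer for p in frontier
                        for backer in self.vouches.get(p, ())} - seen
        return False
```

The same shape nests one level up: a whole site could be a node in a larger web, vouched for or revoked by its peer sites just as individuals are within each site.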
You've always needed skepticism, of course. But it used to be if you came across an article about a super obscure video game from the early 90s (referencing the blog post here) you could be reasonably sure that it wasn't completely made up. There just wasn't the incentive to publish nonsense about super niche things because it took time and effort to do so.
Now you can collate a list of thousands of titles and simply instruct an LLM to produce garbage for each one and publish it on the internet. This is a real change, IMO.
Yeah when I was 10 someone told me not to believe everything I read too. But guess what, that's kinda useless advice because consulting reference material is a necessity and there are wide variations in the quality of reference material. This sort of 'don't trust anyone' heuristic can just as easily lead to conclusions that the earth is flat, the moon landing never happened, vaccinations are the leading cause of disease etc.
It comes down to Google's failure. Rather than outright defeating the SEO eldritch abomination by adopting a zero-tolerance policy toward those tactics, Google made a mutually advantageous bargain with them, of course leaving out a third party: us. They could do this because they had no competition. Now, the culture of enabling bad actors is, unfortunately, set.
Google did all the innovation it needed to and ever is going to. It needed to be broken up a decade ago. We can still do it now. Though I don't know how much it will save, especially if we don't also go after Apple, and Meta, and Microsoft.
It would be in Google's ultimate interest to label AI-generated websites and potentially rank them lower in search results.
AI needs to be kept up to date with training data. But that same training data is now poisoned with AI hallucination. Labelling AI generated media helps reduce the amount of AI poison in the training set, and keeps the AI more useful.
It also simply undermines the quality of search, both for human users and for AI tool use.
SEO is a slippery slope on both sides, because a little bit is good for everyone. Google wanted pages it could easily extract meaning from, publishers wanted traffic, and users wanted relevant search results. Now there's a prisoner's dilemma: once someone starts abusing SEO, it's a race to the bottom.
It might be more accurate to say that a lot of low-trust societies have become connected to the Internet which weren't nearly as online a couple of decades ago.
For example, a huge fraction of the world's spam originates from Russia, India and Bangladesh. And we know that a lot of the romance scams are perpetrated by Chinese gangs operating out of quasi-lawless parts of Myanmar. Not so much from, say, Switzerland.
The WWW has never been a high-trust place. Some smaller communities, sure, but anyone has always been able to write basically what they want on the internet, true or false, as long as it is not illegal in the country hosting it, which is close to nothing in the US.
The difference is that there historically wasn't much to be gained by annoying or misleading people on the internet, so trolling was mainly motivated by personal satisfaction. Two things have changed since then: (1) most people now use the internet as their primary information source, and (2) the cost of creating bullshit has fallen precipitously.
To be boring: the term "enshittification" was coined recently by one individual (Cory Doctorow) and has a specific meaning. It does not refer to "things just get worse" but describes a specific strategy adopted by corporations using the internet for commercial purposes.
AI is kind of like Skynet in the first Terminator movie. It now destroys our digital life. New autogenerated websites appear, handled by AI. Real websites become increasingly less likely to show up on people's daily info-feed. It is very strange compared to the 1990s; I feel we lost something here.
> The commons of the internet are probably already lost
That depends. If people don't push back against AI then yes. Skynet would have won without the rebel forces. And the rebels are there - just lurking. It needs a critical threshold of anger before they will push back against the AI-Skynet 3.0 slop.
And why not? We humans do things like this all the time. We act on powerful false beliefs. We misunderstand a situation, or simply the meaning of a word, and then build our world-view and lives around those false beliefs. Train your model on this, and it will replicate those false beliefs.
You never could trust the internet. The difference is that now the problem is so widespread that it's finally spurring us into action, and hopefully a good "web of trust" or similar solution will emerge.
There's a lot of people unhappy about this here. Presuming that the sentiment extends beyond HN, then it might be a problem that you could make some money by solving. (In the same way that Google figured out how to let the net tell it which pages were best, and made an insane amount of money from doing so.)
People want something real, not AI slop or shills or astroturf or corpo-speak or any of a thousand other flavors of fake. People want it rather desperately. In fact, the current situation is bad for people's mental health. Can someone figure out how to give people a much higher percentage of real?
I've been hitting this a lot lately in Kagi. I'll search for instructions on how to do a thing and some random website will have nothing but _hard_ AI slop going off about the thing I was looking up.
It must be easier than ever to build content mills these days.
A big part of my annoyance is that in the past, something like Phantasy Star Fukkokuban would not really be worth lying about; people need a reason to lie.
ethbr1|15 days ago
Email in profile (deref a few times)
amelius|15 days ago
You could also, for instance, develop your own DNS alternative.
eterm|15 days ago
Previously you might get burned with some bad information or incorrect data or get taken in by a clever hoax once in a while.
Now you get overwhelmed by regurgitation, which itself gets fed back into the machine.
The ratio of people to bots reading has crashed to near zero.
We have burned the web.
nicbou|13 days ago
I wonder if there is another way.
PaulDavisThe1st|15 days ago
https://news.ycombinator.com/item?id=46623359
[1] https://www.youtube.com/watch?v=YpHUBC681iU https://www.youtube.com/watch?v=0w5a33Jeen0
gustavus|15 days ago
1. Don't believe everything or anything you read or see on the Internet.
2. Never share personal information about yourself online.
3. Every man is a man, every woman is a man, and every teenager is an FBI agent.
I have yet to find a problem with the Internet that isn't because of breaking one of the above rules.
My point being you couldn't ever trust the Internet before anyways.
PaulDavisThe1st|15 days ago
3a. ... and nobody knows if you're a dog.
rvz|15 days ago
On the internet no one knows if you're a dog, human or a moltbot.
ninjagoo|15 days ago
Enshittification strikes again.
And it doesn't appear to have any means to rid itself of the bad apples. A sad situation all around.
fatherwavelet|15 days ago
We must live on different planets.
expedition32|15 days ago
And at that point does it even matter? Zuckerberg wins.
LorenPechtel|15 days ago
But it's the date at which it is no longer possible to discern reality you can't actually observe.