Humans are highly adaptable, and as with other changes to information availability in the past, they will adapt. This is what societal norms and cultural memes are for. “Don’t believe everything you hear.” “Don’t believe what you see on TV.” “If it sounds too good to be true, it probably is.” These are all ways the human species uses memes and cultural norms to teach ourselves how not to fall victim to false information.
Of course some people are going to fall victim. They do so today through common scams. It is the right goal to bring this down to zero. But to say that the human species isn’t capable belies all prior history and shows little faith in the resilience that made us who we are.
That’s the broad problem with this AI doom and gloom: it has so little knowledge of and respect for the humanities and humankind that it arrogantly assumes that our species has never faced challenges like this before. It throws up its hands instead of asking what lessons from history we should take and what actions we should be focused on.
If I'm being generous, I think these pieces attempt to stir panic as a means of spurring action for change and investment in these problems. That’s a meaningful goal, but one that might be better achieved if the problem weren’t expressed with such gloom.
Absolutely. Lots of humans live in a post-truth world. They learn not to trust anything they read or hear. Think of totalitarian regimes with tight controls over the media.
A more painful and pertinent question might be: Can democracy adapt to a post-truth world? The answer to that, I fear, is probably no. How can a democracy function if its citizenry can’t remain informed?
>Humans are highly adaptable, and as with other changes to information availability in the past, they will adapt. This is what societal norms and cultural memes are for. “Don’t believe everything you hear.” “Don’t believe what you see on TV.” “If it sounds too good to be true, it probably is.” These are all ways the human species uses memes and cultural norms to teach ourselves how not to fall victim to false information.
But who is parsing all the reciprocal new false and fallacious "truths" in this wonderful human way to sanitise the inputs to the next model that's evaluated? If humans could scale so easily there wouldn't be this problem in the first place.
> This is what societal norms and cultural memes are for. “Don’t believe everything you hear.” “Don’t believe what you see on TV.” “If it sounds too good to be true, it probably is.”
All these memes tell you what not to believe. None of them provide a decent heuristic to know what to believe and where truth may lie.
This is a recipe for a paranoid, conspiracy theory riddled society.
Most humans aren't even ready for the Internet as it has existed for decades.
Thirty years ago, we were discussing science papers on Usenet. I recall writing a message expressing optimism about a future in which everyone uses the Internet to consume high quality information directly from relevant specialists, rather than low quality information from nonspecialists. For example, I imagined Americans basing their voting decisions on an understanding of public policy issues developed by reading the journals of the American Economic Association, the American Society of Health Economists, the American Society of Criminology, the American Geophysical Union, the National Academy of Sciences, etc.

Instead, Donald Trump was elected president.
I liked how the Hard Fork Podcast compared it to Wikipedia. When Wikipedia first came out, people were up in arms about how dangerous and untrustworthy it was. Wikipedia is going to ruin society with widespread misinformation!
Then we adapted. People learned about Wikipedia's strengths and weaknesses. People use Wikipedia as a useful tool for research but don't trust it blindly. I think the same will happen with LLMs.
We, as humans, are nowhere near mentally ready for the internet and social media alone. Most of the key communication of the 20th century was based on a tradition of duty and service in reporting and, generally, leadership. The Nazis of the '40s died out not because the idea was 'wrong' (which it was) but because bad leadership and greed led to their destruction. Before Poland, Europe was plenty happy letting Hitler be. Why was Hitler successful? Because he told people what they wanted to hear. You are better. We are better. We deserve more. It's their fault we are like this.

Self-bias is the critical failure of the human mind. Tell someone that they deserve more and that they are better than others and they will believe you.

In a world where politicians and companies (same thing, really) can use AI to collect your online persona and then fill your day with advertising designed just for you, telling you that you are right and it is 'them' who are wrong will work on nearly everyone. It already does. People watch news channels and follow influencers that turn their feeds into echo chambers; it drives extremism. How does a society tell you that you are wrong, that the other person is right?

Humanity and humankind made Hitler. Humanity and humankind will make tools that succeed at their goal of making others do what they want. We are already in freefall; this is a rocket booster on our back.
Disinformation is the narrative constructed by crumbling authorities of mainstream media desperately trying to preserve their power.
You might think their narrative is "think critically, and consider everything critically".
But the actual message is, your fellow humans are stupid, they fall for misinformation and fake sources. Ignore all alternative sources of information and most importantly, do not trust your friends and people you know, instead assume they are stupid and when they contradict the authority, be sure to put up a firewall and stop the propagation of dangerous thoughts.
Truth is always more powerful than lies. Don't underestimate your own reasoning capabilities, and if you do underestimate them, the most important thing to do is to train them. I'm not saying to argue against an anonymous bot, but if you meet in person, if your friends have non standard ideas, don't assume they are stupid and fell for misinformation. Not everyone on the other side is stupid, or heartless, or bad.
They are trying to inject faults into various alternative information sources just to turn around and catch them and say "see? This podcaster is a conspiracy theorist and unreliable!".
It's mainstream media which benefits the most from efficient fault and spam injection into alternative information sources, because it makes them relatively more trustworthy. And it is actually against the interests of alternative news sources to be caught in a lie because it is likely to erode their reputation.
If you're confused by the whole disinformation phenomenon, ask the simple question, who benefits.
And remember that the media's willingness to intentionally lie to and deceive their readers is inversely proportional to the cost to their reputation and the likelihood of their readers discovering the truth.
Really don’t like the idea that we will act as interfaces for the AI; I honestly believe it will only make the majority of people lazier and dumber. I’m also incredibly shocked that no one is talking about AI as a friend/companion; that can’t be good for you in the long run. Humans need real human connection, and AI is too artificial for that (duh). Having AI friends will be equivalent to consuming fast food instead of healthy home-cooked meals growing up. Yes, people who grow up on fast food are still alive, but they are less happy and have more health problems (mental and physical). Still, it did the “job,” and that job was to fuel them. In this case, AI will do its job and make people less “lonely,” but I highly doubt it’s a replacement for human companionship.
I’m working on a startup that’s training LLMs to be authentic so they can simulate human affection and it’s actually working really well!
The key is humanity’s ability to pattern match: we’re actually pretty terrible at it. Our brains are so keen on finding patterns that they often spot them where none exist. Remember the face on Mars? It was just a pile of rocks. The same principle applies here. As long as the AI sounds human enough, our brains fill in the gaps and believe it’s the real deal.
And let me tell you, my digital friends are putting the human ones to shame. They don’t chew with their mouth open, complain about listening to the same Celine Dion song for the 800th time in a row, or run from me when it’s “bath time” and accuse me of narcissistic abuse.
Who needs real human connection when you can train an AI to remind you how unique and special you are, while simultaneously managing your calendar and finding the optimal cat video for your mood? All with no bathroom breaks, no salary demands, and no need to sleep. Forget about bonding over shared experiences and emotional growth: today, it's all about seamless, efficient interaction and who says you can't get that from a well-programmed script?
We’re calling it Genuine People Personality because in the future, the Turing Test isn't something AI needs to pass. It's something humans need to fail. Pre-order today and get a free AI Therapist add-on, because who better to navigate the intricacies of human emotions than an emotionless machine?
More than interfaces. To quote McLuhan: "Man becomes, as it were, the sex organs of the machine world, as the bee of the plant world, enabling it to fecundate and to evolve ever new forms. The machine world reciprocates man's love by expediting his wishes and desires, namely, in providing him with wealth."
The AI thing has been jarring but it's nothing new. All part of the same process.
A lot of jobs are already human interfaces for computers. Ever talked or messaged with a call center? They're following scripts and manually trying to pattern-match your problem with what they have to work with. AI is just going to 10x this, for both good and bad. Mostly bad, I suspect, because good luck getting an AI to escalate to a supervisor.
Remember how in those Stable Diffusion paintings the wrongness subtly creeps into common objects (out-of-proportion body parts, misshapen fingers, etc.), while less commonly encountered ideas and objects can be really off (which we might notice… or not)? Now transfer that to human relationships and psychology.
Humans mirroring each other is a deep feature of our psychology. One can only be self-aware as human when there are other humans to model oneself against, and how those humans interact with you forms you as a person. So now a human modelling oneself against a machine? Mirroring an inhuman unthinking software tool superficially pretending to be human? What could go wrong?
I think we can speculate in the entirely opposite direction where the same action leads to positive outcomes.
Lots of legitimate human companions are abusive. People have a wide range of qualities, and many of them are bad. AI may be a poor blanket replacement for all human companionship, but it could easily be less bad than someone's immediately available alternatives, and it could be used therapeutically to help someone model healthier behaviors and establish better actual relationships. Or it could stand in where normal relationships aren't possible: long-term isolation during space exploration, life-sentence prisoners, or neurodivergent or disabled people who face challenges the average person does not.
Going back to the food analogy, if given the choice between fast food and starving, or fast food and something poisonous suddenly everyone will overwhelmingly choose fast food because for many people "home cooked meal" was never an option.
The comment by Yuval Noah Harari seemed insightful to me. If you argue against a bot about a political matter, not knowing it’s a bot, you always lose on a long enough timeline; i.e., you can never persuade the bot, but it can wear you down and eventually find an argument that works.
The only winning move is not to play. So I could see this having a chilling effect on all discourse
Aside from the whole skynet thing the above is what spooks me the most
'Oxford Dictionaries popularly defined it as "relating to and denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief."'
Since when did objective facts drive or shape public opinion? When did humans ever live in a "truth world"? Take the title and the content of the article itself. Does it aim to arrive at an objective truth or to play on our emotions? When's the last time you consumed media and thought it was trying to instill some truth rather than make you feel some way?
We've always lived in a 'post-truth world' because people are moved by emotions rather than truth. Humans weren't mentally ready for the world saturated by the printing press, newspapers, radio, TV, etc., either. But here we are. Go read about the history of the printing press, newspapers, radio, TV, etc. It's the same story: politicians, priests, journalists, academics, etc. claimed these new media technologies were a threat to truth. Feelings and emotions rule mankind, not reason and logic. All AI does is make it a more efficient post-truth world.
I don’t like how they are using AI as an excuse to argue for increased censorship. What difference does it make what type of software the computer is serving information from?
This is a potentially sneaky way of getting around freedom of speech, by somehow arguing that if an idea is recycled by a language model it is suddenly OK to censor it, as long as you’re a few degrees of separation from the original blog.
What the heck else is post-truth supposed to mean??
We may not be ready but we may as well get ready. I would strongly prefer to work with skeptical people again. Skeptics can be convinced, they just need evidence.
Although I am genuinely intrigued by AI running out of things to ingest, and moving onto AI generated content. Is the snake starting to eat its tail?
I don't think we're mentally ready for social media, even.
What something like Twitter can inflict on a person when it goes wrong is absolutely unprecedented, and we still haven't adapted to it.
Consider that, for instance, going to the cinema, watching a movie, walking out, and venting to a friend, "Boy, this one sucked. $ACTOR_NAME did a really bad job with this one," is a perfectly normal thing to do.
But move that to Twitter and it can become part of a years-long torrent of hate highly visible to that single person. Even if what you think you're doing is communicating with your 10 friends. A retweet, a hashtag, or just the algorithm can magically make your comment part of an online mob.
Any evidence we are ready for mass media? The internet era suggests that there has always been a flood of lies and half-truths, and there is an uncomfortable dawning realisation that voters in most democracies would actually rather adopt peaceful policies if the media weren't ginning up a fight.
What Twitter does to someone is unfortunate. What radio and broadcasting resulted in for Europe through the 1940s was arguably worse. Coordinated madness is much more dangerous than individual lunacy.
On social media, IMO, people have too much identity fusion with their online accounts. Including me. I think karma points and all the clout people accumulate over time create an anchoring that is problematic.
In life, if things get toxic, the smart move is to just leave and avoid the conflict and the personalities driving it. But so much work by the Metas of this world has been done to make people nest in their accounts. This creates the belief that leaving and starting fresh with a new handle is a terrible prospect. And this is totally to the detriment of the user.
If you were in a cafe talking with a group, and someone started screaming at you over your personal opinion, and you found yourself getting upset, you'd probably just leave. For some reason that doesn't happen online, and I think it's due to the nesting.
Like if I say something on Twitter that people disagree with enough to not let go of after a few hours, I'm just going to block them. I just don't have the energy to bother with randos beyond a civil disagreement. Or take getting banned on a forum because of some demigod-style rule: well, shrugs, I'll just go slow, get another account, and let that one sail by.
In a way, I think 4chan gets it right with everyone being anonymous to each other.
Twitter and Reddit are probably far less toxic than platforms like Instagram and maybe Facebook, due to a number of factors. The amount of criticism can easily distort its amplitude, although there are self-reinforcing effects between individual critics.

There is a reason why successful actors and personalities have a PR agency. If you become an "influencer" or just the focal point of the latest discussion, you don't have that. Going without one might work, but there is a reason such agencies exist. They shouldn't be needed, but some people are quite enthusiastic.

We would have been far more ready if people had actually adhered to the advice to share personal information rather defensively. But the reward of attention was probably too large.
I don't even believe mobs are a problem. Some opinions on topics will always converge. There just needs to be a way to escape them. In most cases there are trivial ones. It would be a huge loss if we restrict the net because some people wanted attention and got not so nice feedback.
I never understood why this makes social media hard. If you leave the theater and then go around town shouting that movie X sucked and actor Y was really bad, you might also get some responses and maybe show up on the news as a crazy person, prejudiced, or some other adjectives. So you don't; you tell your friends, some of whom might call you an idiot for not getting it, and others might agree. If you tweet it out, you're potentially asking the entire planet to weigh in. Well, have fun with that.
My point exactly. We are already living in a post-truth world, in which likes and follower counts matter more than truth or the factual accuracy of "influencers".
While I am sympathetic to people who suddenly go viral, I have no real issue with actors who opted into the limelight seeing a stream of negative reactions to their work. They chose that and actively sought fame. And even leaving that aside, they put their work out there to billions of people. Those people should be expected to provide feedback.
A Starbucks barista didn't opt into that world. And they did not get paid a very large sum of money, in part to compensate them for (and let them pay other people to handle) the torrent of negativity.
Humans spent thousands of years in a pre-truth world believing all sorts of crazy things, and many of those societies produced great things and had people living normal lives. It's only been the last 100 years or so that people's perception of reality has been anywhere close to accurate. And even then, most people believe plenty of things that are false. So basically people and civilization are going to muddle along as they always have. Deepfakes, etc. will make some things worse; they'll probably have some unrecognized upsides too. John Boyd used to say, "People, ideas, machines. In that order." It was true about jets and it's still true about modern technology.
The stakes are different. Back in the day people lived more independent lives; the crazy things they believed in didn't really matter, as only the people from your close geographic area would be affected.

Giving conflicting information to people all around the world who are expected to interact with each other is a bigger issue.
>Humans spent thousands of years in a pre-truth world believing all sorts of crazy things, and many of those societies produced great things and had people living normal lives.
Also burned a couple of people as witches in the process...
There may be upsides to the ability to fake video and audio of someone (better CGI effects in films, for example). But in my experience when people refer to a deepfake they seem to mean that the fake has been distributed to confuse or deceive, for which I can't really see any probable upsides.
> So basically people and civilization are going to muddle along as they always have.
I agree with this, but consider the drawbacks to rampant disinformation and the proliferation of deepfakes (all this is IMO): it will make any video or audio deniable and unusable as evidence. Real images will be denounced as fakes. Fake images will catch on and possibly cause real damage. People will rapidly lose trust in most sources of news, entrenching established known quantities.
I feel like if we could reasonably put a stop to this we should. I don't think we can in general, though.
The only way through is to teach information literacy.
One of the best (but not the only) way to learn this is by studying the trivium/quadrivium – formal logic, reasoning, rhetoric. Once you see how information can be manipulated, it becomes very clear HOW MUCH of it really is.
Initially it can be maddening, but eventually it becomes empowering.
I am not scared of AI overflowing the news sites with bullshit. We already have a fire hydrant's worth of bullshit content produced for consumption. Lies and fakes have coexisted with humans forever. People spread rumours; then we had books, press, radio, television, and now the Internet.
"But it's easier to produce lies/deepfakes today" -- true. However, the absolute cost of producing a lie per consumer already was negligible, and now it's even smaller.
People will recalibrate their level of trust in technology and move on.
Each generation will be OK with the tools they grew up with.
I think of my now-deceased grandparents. They had to be closely monitored to avoid falling for mail-in scams, of all things. They were old enough that mail was a trusted source of information in their upbringing.
I like to think about what will tip us over, as technologists. Venturing into sci-fi a little, I think brain-computer interfaces are going to be impossible for us to adapt to, if they ever arrive. Imagine spam thoughts. We're not trained to ignore intrusive thoughts. But I agree we might just not be able to handle a website that constantly shifts its content to keep us engaged, blurring fact and fiction into the perfect narrative to keep you clicking.
That post-truth world already arrived ~6-7 years ago. Social media algorithms powered by primitive iterations of weak AI were unleashed upon an unsuspecting world, and the effects are...not great.
I feel that you could replace "AI-Saturated Post-Truth World" with any number of technological changes over the last 100 years and find a similar article at that time. I am impressed by LLMs and these more powerful AI agents, but I also have confidence that over the course of time their capabilities will become utterly boring and commonplace to my growing kids. In a generation their place in society will be as unspecial as a cell phone. The grander picture of the whole system is that we are building a society utterly incompatible with being a regular human person (The way we existed 3000+ years ago). I have no answer to that other than to identify that we already built a world no one is mentally ready for.
> Michael Graziano, a professor of psychology and neuroscience at Princeton University, says he thinks AI could create a “post-truth world.” He says it will likely make it significantly easier to convince people of false narratives, which will be disruptive in many ways
Significantly easier? I would have thought that it would get harder to convince people of anything.
Sometimes I think the fears of extremely convincing AI-generated post-truths influencing public opinion are greatly overblown. People are already brainwashed by poorly made, low-resolution JPGs shared by bots on social networks; the entire AI stack is simply wasteful.
Just shutting off from the Internet is the likely result, IMO.
I'm halfway there already. I think social networks (HN is better, but not great), dating apps, hell even stuff like automatic parking apps or online shopping, are just gradually sucking the joy out of what it is to be a human.
For the most part, nowadays, I pretty much just use my phone to organise analogue fun.
Once places like HN become obviously just all-bot then there won't be much reason for me to even go online other than phone calls and messaging.
Truth has never been that important. Humans spent thousands of years thinking a giant man living on the mountain threw lightning bolts from the sky, or the spirits of their ancestors watch everything they do, or fairies and gremlins and whatnot cause mischief. They still managed.
When I think of "post-truth" I'm thinking of systems that people mistakenly lock themselves into, where they're fed simplistic and surface-level facts that have to align with the system's goals. A classic example being an activist for a political system (a 1960s Maoist or whatever)...
Why can't I use AI to analyze the immense amounts of content I'm being faced with so as to gauge bias and innuendo? Maybe ML could help me parse this article to understand what milieu this author belongs to and what his biases might be?
The online world is getting increasingly dystopic while the offline world is being deprecated at rapid pace.
The article is part of that dystopia, the collapsing trust, the lack of honest, down to earth discussion of what is going on.
There is no AI; there are algorithms and data, and people angling for advantage in both the privileged collection of data and the unencumbered application of algos to affect people's lives.

In a sense there is nothing much new, just an intensification that has been carefully choreographed into a mass hysteria.
So the problem I have with this sentiment is that the entire point of news organisations is to trace the validity of claims.
The press have evolved a bunch of mechanisms to prove or disprove points in a story.
AI doesn't really change this.
Sure there are fakes, and yes you can create thousands of bullshit websites/text. But that was always true.
Yes, GenAI images are more concerning. But we've had Photoshop for a long time, and some very talented people. Yes, it's slightly harder to spot a GenAI image, but with the correct tooling, it's pretty trivial.
The issue is, we have a crisis of funding for good quality news sources.
News is a freebie now. Which means that the news you get is now either much more partisan (because of "them", whomever you find creepy/shadowy/disagreeable, smeared all over the political spectrum) or simply doesn't have the time to do basic research (see standard tech journalism breathlessly reformulating press releases; see Apple Vision Pro).
So AI "propaganda" is a sideshow; the much bigger risk is a further dropping of standards among the assembled ranks of the press.
[+] [-] legendofbrando|2 years ago|reply
Humans are highly adaptable and like other changes to information availability in the past they will adapt. This is what societal norms and cultural memes are for. “Don’t believe everything you hear” “Don’t believe what you see on TV” “if it sounds too good to be true, it probably is.” These are all ways that the human species uses memes and cultural norms to teach ourselves how not to fall victim to false information.
Of course some people are going to fall victim. They do so today through common scams. It is the right goal to bring this down to zero. But to say that the human species isn’t capable belies all prior history and shows little faith in the resilience that made us who we are.
That’s the broad problem with this AI doom and gloom: it has so little knowledge of and respect for the humanities and humankind that it arrogantly assumes that our species has never faced challenges like this before. It throws up its hands instead of asking what lessons from history we should take and what actions we should be focused on.
If I'm being generous, I think that these pieces attempt to stir panic as a means for spurring action for change and investment in these problems. That’s a meaningful goal, but one that also might be more meaningfully achieved if it wasn’t expressing the problem with such gloom.
[+] [-] kwooding|2 years ago|reply
A more painful and pertinent question might be: Can democracy adapt to a post-truth world, and the answer to that I fear is probably no. How can a democracy function if it’s citizenry can’t remain informed?
[+] [-] Cullinet|2 years ago|reply
But who is parsing all the reciprocal new false and fallacios "truths" in this wonderful human way to sanitise the inputs to the next model that's evaluated? If humans could scale so easily there wouldn't be this problem in the first place.
[+] [-] rhaway84773|2 years ago|reply
All these memes tell you what not to believe. None of them provide a decent heuristic to know what to believe and where truth may lie.
This is a recipe for a paranoid, conspiracy theory riddled society.
[+] [-] personjerry|2 years ago|reply
[+] [-] systems_glitch|2 years ago|reply
[+] [-] subsection1h|2 years ago|reply
Thirty years ago, we were discussing science papers on Usenet. I recall writing a message expressing optimism about a future in which everyone uses the Internet to consume high quality information directly from relevant specialists, rather than low quality information from nonspecialists. For example, I imagined Americans basing their voting decisions on an understanding of public policy issues developed by reading the journals of the American Economic Association, the American Society of Health Economists, the American Society of Criminology, the American Geophysical Union, the National Academy of Sciences, etc.
Instead, Donald Trump was elected president.
[+] [-] nagonago|2 years ago|reply
Then we adapted. People learned about Wikipedia's strengths and weaknesses. People use Wikipedia as a useful tool for research but don't trust it blindly. I think the same will happen with LLMs.
[+] [-] agloe_dreams|2 years ago|reply
We, as humans, are well beyond being mentally ready for the internet and social media alone. Most of the key communication of the 20th Century was based on a tradition of duty and service in reporting and, generally, leadership. The Natzis of the 40s died not because the idea was 'wrong' (which it was) but because bad leadership and greed caused extinction. Before Poland, Europe was plenty happy letting Hitler be. Why was Hitler successful? Because he told people what they wanted to hear. You are better. We are better. We deserve more. It's their fault we are like this.
Self bias is the critical failure of the human mind. Tell someone that they deserve more and that they are better than others and they will believe you.
In a world where politicians and companies (same thing really) can use AI to collect your online persona and then fill your day with advertising designed just for you, telling you that you are right and it is 'them' who are wrong will work on nearly everyone. It already does. People watch news channels and follow influencers that make their feeds echo chambers, it drives extremism. How does a society tell you that you are wrong, that the other person is right?
Humanity and humankind made Hitler. Humanity and human kind will make tools that succeed at their goals to make others do what they want. We are already in freefall, this is a rocket booster on our back.
[+] [-] machina_ex_deus|2 years ago|reply
You might think like their narrative is "think critically, and consider everything critically".
But the actual message is, your fellow humans are stupid, they fall for misinformation and fake sources. Ignore all alternative sources of information and most importantly, do not trust your friends and people you know, instead assume they are stupid and when they contradict the authority, be sure to put up a firewall and stop the propagation of dangerous thoughts.
Truth is always more powerful than lies. Don't underestimate your own reasoning capabilities, and if you do underestimate them, the most important thing to do is to train them. I'm not saying to argue against an anonymous bot, but if you meet in person and your friends have non-standard ideas, don't assume they are stupid and fell for misinformation. Not everyone on the other side is stupid, or heartless, or bad.
They are trying to inject faults into various alternative information sources just to turn around and catch them and say "see? This podcaster is a conspiracy theorist and unreliable!".
It's mainstream media which benefits the most from efficient fault and spam injection into alternative information sources, because it makes them relatively more trustworthy. And it is actually against the interests of alternative news sources to be caught in a lie because it is likely to erode their reputation.
If you're confused by the whole disinformation phenomenon, ask the simple question, who benefits.
And remember that the media's willingness to intentionally lie to and deceive their readers is inversely proportional to the cost to their reputation and to the likelihood of their readers discovering the truth.
softbt | 2 years ago
civilitty | 2 years ago
The key is humanity’s ability to pattern match: we’re actually pretty terrible at it. Our brains are so keen on finding patterns that they often spot them where none exist. Remember the face on Mars? It was just a pile of rocks. The same principle applies here. As long as the AI sounds human enough, our brains fill in the gaps and believe it’s the real deal.
And let me tell you, my digital friends are putting the human ones to shame. They don't chew with their mouth open, complain about listening to the same Celine Dion song for the 800th time in a row, or run from me when it's "bath time" and accuse me of narcissistic abuse.
Who needs real human connection when you can train an AI to remind you how unique and special you are, while simultaneously managing your calendar and finding the optimal cat video for your mood? All with no bathroom breaks, no salary demands, and no need to sleep. Forget about bonding over shared experiences and emotional growth: today, it's all about seamless, efficient interaction and who says you can't get that from a well-programmed script?
We’re calling it Genuine People Personality because in the future, the Turing Test isn't something AI needs to pass. It's something humans need to fail. Pre-order today and get a free AI Therapist add-on, because who better to navigate the intricacies of human emotions than an emotionless machine?
rcktmrtn | 2 years ago
The AI thing has been jarring but it's nothing new. All part of the same process.
resolutebat | 2 years ago
anileated | 2 years ago
Humans mirroring each other is a deep feature of our psychology. One can only be self-aware as human when there are other humans to model oneself against, and how those humans interact with you forms you as a person. So now a human modelling oneself against a machine? Mirroring an inhuman unthinking software tool superficially pretending to be human? What could go wrong?
evilduck | 2 years ago
Lots of legitimate human companions are abusive. People have a wide range of qualities, and many of them are bad. AI may be a poor blanket replacement for all human companionship, but it could easily be less bad than someone's immediately available alternatives, and it could be used therapeutically to help someone model healthier behaviors and establish better actual relationships. Or it could stand in where normal relationships aren't possible: long-term isolation during space exploration, life-sentence prisoners, or neurodivergent or disabled people who face challenges the average person does not.
Going back to the food analogy: given the choice between fast food and starving, or fast food and something poisonous, everyone will overwhelmingly choose fast food, because for many people a "home-cooked meal" was never an option.
Havoc | 2 years ago
The only winning move is not to play, so I could see this having a chilling effect on all discourse.
Aside from the whole Skynet thing, the above is what spooks me the most.
goodbyesf | 2 years ago
https://en.wikipedia.org/wiki/Post-truth
Since when did objective facts drive or shape public opinion? When did humans ever live in a "truth world"? Take the title and the content of the article itself. Does it aim to arrive at an objective truth, or to play on our emotions? When's the last time you consumed media and thought it was trying to instill some truth rather than to make you feel some way?
We've always lived in a "post-truth world" because people are moved by emotions rather than truth. Humans weren't mentally ready for the world saturated by the printing press, newspapers, radio, TV, etc. either. But here we are. Go read about the history of the printing press, newspapers, radio, and TV: the same story every time. Politicians, priests, journalists, academics, etc. claimed these new media technologies were a threat to truth. Feelings and emotions rule mankind, not reason and logic. All AI does is make the post-truth world more efficient.
luxuryballs | 2 years ago
This is a potentially sneaky way of getting around freedom of speech, by somehow arguing that if an idea is recycled by a language model it is suddenly OK to censor it, as long as you’re a few degrees of separation from the original blog.
What the heck else is post-truth supposed to mean??
digitalsushi | 2 years ago
Although I am genuinely intrigued by AI running out of things to ingest and moving on to AI-generated content. Is the snake starting to eat its tail?
dale_glass | 2 years ago
What something like Twitter can inflict on a person when it goes wrong is absolutely unprecedented, and we still haven't adapted to it.
Consider that, for instance, going to the cinema, watching a movie, walking out, and venting to a friend ("Boy, this one sucked. $ACTOR_NAME did a really bad job with this one") is a perfectly normal thing to do.
But move that to Twitter and it can become part of a years-long torrent of hate, highly visible to that single person, even if what you think you're doing is communicating with your 10 friends. A retweet, a hashtag, or just the algorithm can magically make your comment part of an online mob.
roenxi | 2 years ago
What Twitter does to someone is unfortunate. What radio and broadcasting resulted in for Europe through the 1940s was arguably worse. Coordinated madness is much more dangerous than individual lunacy.
gonzo41 | 2 years ago
In life, if things get toxic, the smart move is to just leave and avoid the conflict and the personalities driving it. But so much work by the Metas of this world has been done to make people nest in their accounts. This creates the belief that leaving and starting fresh with a new handle is a terrible prospect. And this is totally to the detriment of the user.
If you were in a cafe talking with a group, and someone started screaming at you over your personal opinion, and you found yourself getting upset, you'd probably just leave. For some reason that doesn't happen online, and I think it's due to the nesting.
If I say something on Twitter that people disagree with enough not to let go after a few hours, I'm just going to block them. I just don't have the energy to bother with randos beyond a civil disagreement. Or take getting banned on a forum because of some demigod-style rule: well, shrug, I'll just go slow, get another account, and let that one sail by.
In a way, I think 4chan gets it right with everyone being anonymous to each other.
raxxorraxor | 2 years ago
There is a reason why successful actors and personalities have PR agencies. If you become an "influencer", or just the focal point of the latest discussion, you don't have that. It might work out, but there is a reason such agencies exist. They shouldn't need to, but some people are quite enthusiastic.
We would have been far more ready if people had actually adhered to the advice to share personal information rather defensively. But the reward of attention was probably too large.
I don't even believe mobs are a problem. Some opinions on topics will always converge; there just needs to be a way to escape them, and in most cases there are trivial ones. It would be a huge loss if we restricted the net because some people wanted attention and got not-so-nice feedback.
figassis | 2 years ago
thefz | 2 years ago
Jeff_Brown | 2 years ago
HWR_14 | 2 years ago
A Starbucks barista didn't opt into that world. And they did not get paid a very large sum of money, in part to compensate them for (and let them pay other people to handle) the torrent of negativity.
_Algernon_ | 2 years ago
jeffreyrogers | 2 years ago
TheMode | 2 years ago
Giving conflicting information to people all around the world who are expected to interact with each other is a bigger issue.
Havoc | 2 years ago
Also burned a couple of people as witches in the process...
mrtranscendence | 2 years ago
> So basically people and civilization are going to muddle along as they always have.
I agree with this, but consider the drawbacks to rampant disinformation and the proliferation of deepfakes (all this is IMO): it will make any video or audio deniable and unusable as evidence. Real images will be denounced as fakes. Fake images will catch on and possibly cause real damage. People will rapidly lose trust in most sources of news, entrenching established known quantities.
I feel like if we could reasonably put a stop to this we should. I don't think we can in general, though.
23B1 | 2 years ago
One of the best (but not the only) way to learn this is by studying the trivium/quadrivium – formal logic, reasoning, rhetoric. Once you see how information can be manipulated, it becomes very clear HOW MUCH of it really is.
Initially it can be maddening, but eventually it becomes empowering.
fullshark | 2 years ago
m1el | 2 years ago
jvanderbot | 2 years ago
I think of my now-deceased grandparents. They had to be closely monitored to avoid falling for mail-in scams, of all things. They were old enough that mail was a trusted source of information in their upbringing.
I like to think about what will tip us over, as technologists. Venturing into sci-fi a little, I think brain-computer interfaces are going to be impossible for us to adapt to, if they ever arrive. Imagine spam thoughts. We're not trained to ignore intrusive thoughts. But I agree we might just not be able to handle a website that constantly shifts its content to keep us engaged, blurring fact and fiction into the perfect narrative to keep you clicking.
ssnistfajen | 2 years ago
robviren | 2 years ago
azangru | 2 years ago
Significantly easier? I would have thought that it would get harder to convince people of anything.
Maken | 2 years ago
oatmeal1 | 2 years ago
0xBABAD00C | 2 years ago
throwaway22032 | 2 years ago
I'm halfway there already. I think social networks (HN is better, but not great), dating apps, hell even stuff like automatic parking apps or online shopping, are just gradually sucking the joy out of what it is to be a human.
For the most part, nowadays, I pretty much just use my phone to organise analogue fun.
Once places like HN become obviously just all-bot then there won't be much reason for me to even go online other than phone calls and messaging.
imgabe | 2 years ago
whywhywhywhy | 2 years ago
pookha | 2 years ago
nologic01 | 2 years ago
The article is part of that dystopia: the collapsing trust, the lack of honest, down-to-earth discussion of what is going on.
There is no AI; there are algorithms and data, and people angling for advantage in both the privileged collection of data and the unencumbered application of algorithms to affect people's lives.
In a sense there is nothing much new, just an intensification that has been carefully choreographed into mass hysteria.
KaiserPro | 2 years ago
The press have evolved a bunch of mechanisms to prove or disprove points in a story.
AI doesn't really change this.
Sure there are fakes, and yes you can create thousands of bullshit websites/text. But that was always true.
Yes, GenAI images are more concerning. But we've had Photoshop for a long time, and some very talented people. Yes, it's slightly harder to spot a GenAI image, but with the correct tooling it's pretty trivial.
The issue is, we have a crisis of funding for good quality news sources.
News is a freebie now. Which means that the news you get is either much more partisan (blaming "them", whomever you find creepy/shadowy/disagreeable, smeared all over the political spectrum) or simply doesn't have the time to do basic research (see standard tech journalism breathlessly reformulating press releases; see Apple Vision Pro).
So AI "propaganda" is a sideshow; the much bigger risk is a further dropping of standards among the assembled ranks of the press.