If we know that humans have all sorts of cognitive biases, how come it's ok to use that fact while at the same time we insist there's some kind of free market?
Say you discover that putting good-looking women next to cars causes the sale of cars to increase. Why does nobody question whether it is legitimate to do so? It's as if there's a line between actively lying ("Studies show that men who buy this car will find many many women attracted to them") and just putting it there suggestively, for some as yet undescribed but working cognitive bias to do its magic.
Some advertisers even make a joke out of it, eg the Lynx ads where the dude is thronged by a huge horde of women. It's a cliché, for a good reason.
I suppose most people will just say you have free will and it's your own fault for thinking what was suggested, but I sense this is more of a grey zone than most people are willing to admit. How can the free market work if everyone is so easily affected by suggestion?
---
Of course this also applies to the free market in ideas. In what sense are people free to make up their minds if it's decided for them what they should see, whether or not the government is doing it or FB? Isn't this the same as the authoritarian nightmares that we've been pointing fingers at?
There is absolutely no limit to this. Putting on a suit for a financial job interview exploits the cognitive biases of interviewers. Tattoos exploit the cognitive biases of hipsters. Putting hockey-stick growth projections in pitch decks exploits the cognitive biases of VCs. Equity grants (pretty much always) exploit the cognitive biases of startup employees. Driving a fancy car exploits the cognitive biases of (a very large portion of) the dating market.
Human beings are not computers. No aspect of human behavior is perfectly logical or rational. You simply cannot ban emotional appeals in principle, because all appeals have an emotional component. This way lies a dystopia far more terrible than "grandpa shared some nonsense on facebook".
Some economists have tried arguing that advertising adds value to products, but I think it only keeps people from making choices based on a product's merits. It's a distortion of the market.
We wouldn't lose much if advertising were simply forbidden. A traditional definition of it would probably be enough to get rid of most of it. The main problem is that huge sums of money are involved, and almost all media profit from it to some extent, so there's a huge incentive to shut down this conversation.
As an absurd example, if someone were to invent a ray-gun that reprograms people's brains with arbitrary beliefs it would be 100% unethical and illegal to use.
However, ads or fake news articles that take many exposures to reprogram you (using the cognitive-bias back channels you mention) are perfectly fine. ¯\_(ツ)_/¯
There's this problem in the sciences and higher academe where we give things names that sound like one thing but mean another. Einstein's Special Theory of Relativity is a common example. Ask the average person on the street what it means and they'd say, "Well of course I know what it means, after all, it means that everything is relative."
Well obviously that's not what it means. What it means is that the speed of light is rather impossibly constant. Likewise, economics has this same issue. "Free market" does not mean everyone has complete free will and total immunity to persuasion. "Free market" describes a philosophy of trade where individuals can own and sell property. Whether or not people are 'easily affected by suggestion', as you put it, has no bearing on the concept of free markets.
I see the rest of society buying and liking the typical brands: Tide, Kraft, Nabisco, PepsiCo, what have you, and I shudder. It's really weird, but I started to have an adverse reaction to all the companies that actually advertise, and I'm always on the lookout for companies that use few ingredients and in general have no commercials.
And this has changed me so much. Even seeing the stereotypical hot-rod-style cars that you described as attracting women - I actively dislike people who show off, even. I have never had an FB account.
I don't know where I'm going with this, but yeah - cognitive biases and such, and controlling people. I guess I'm just trying to say it doesn't work on everyone - not identically and as expected, at least.
I suppose there are both philosophical and pragmatic reasons for the current situation:
1. Philosophical - you don't want to get the threat of physical force involved unless there is a very, very good reason for doing so. That is, you don't want to make things illegal. So we forbid outright lying and fraud, openly misleading customers, but allow the things you describe. If you have a private online platform, why should you not be allowed, or be compelled, to moderate its content? Yes, tech companies will try to maximize user engagement, but what else should they be trying to maximize? It is up to the rest of society to develop a proper culture and information hygiene to shape demand, and tech companies will adapt. That culture can develop organically and be passed from generation to generation, or you can try to expedite the process through schools, media and other mechanisms that exist in a society.
2. Practical - people tend to get interested in forbidden things, and they seem to like their biases. In the Soviet Union, for example, perhaps most people were fascinated by the Hollywood movies of the 70s and 80s, and even by the way Western brands were advertised. They seemed 'cooler' than the Soviet products, which were mostly devoid of marketing. Try to eliminate most biases, and people will vote with their feet. Arguably a better approach is to allow things, but in good faith educate people on how best to deal with them and why that is the right way of dealing with them. The same way a lot of parents today would explain to their children why they should stay away from cigarettes, for example, or wash their hands before eating.
That point is sort-of pointless. It would be fair to boil a lot of the argument there down to "People don't act randomly. They have reasons for what they are doing. I think their reasons are bad".
I can't argue with that, but the alternatives are worse. If you centralise power, sooner or later the advertising exec gets control of the powerful body, and now you can't choose to resist even if you can see that what is happening is bad.
A key part of the free market is precisely that the world is actually quite predictable. The fact that people sometimes make predictably bad choices doesn't especially undermine the free market. The market doesn't require people make good choices, it just redirects resources to people who make better choices than the average.
> It's as if there's a line between actively lying ("Studies show that men who buy this car will find many many women attracted to them") and just putting it there suggestively
I doubt pretty girls in ads are actually supposed to mean the goods they advertise make a man more attractive. Who (except teens) would believe that, consciously or subconsciously? You just get attracted yourself, and that's enough: simply seeing a pretty girl fires the hormones and neurotransmitters that make you feel good about what she advertises, no semantic load necessary.
I studied "strategic communication" in college (a mix of PR, advertising, marketing, whatever) and I distinctly remember a mentor saying, "When you're selling a drill, you're not selling a drill. You're selling the hole."
The point is, people don't buy things for their own sake. They buy them for what they think the thing can do for them. Any car will get you to point B, but some people will go for a cheap, utilitarian car because they want to save money (or maybe the utilitarian aesthetic is their thing), while others will go for the flashy car for that feeling of sex appeal (even if women don't suddenly fall all over a new car owner, the feeling of confidence is a social benefit to the buyer, even if that's not worth the asking price).
More generally, though, all communication has this sort of color to it. We see anti-privacy legislation being touted as protecting children and fighting crime. Small talk is not really about sports. So I don't think it's realistic to legislate persuasion. I would probably be behind making formal logic a part of public school curriculum, though, so people are better equipped to discern for themselves when persuasion they're exposed to is nonsense (among other benefits).
I think there are many uncorroborated assertions about bias.
Associating emotions with a product, or simply grabbing attention, is an old trick that existed before modern marketing. The smoking example can be generalized to fashion, and there are mechanisms like peer pressure involved that certainly incentivize consumption. But they are not overriding your will. Addictions can, but even then I would say there is still free will involved.
> but I sense this is more of a grey zone than most people are willing to admit.
I do believe ads affect me, but the scope is limited, and regulation would be more draconian than my natural inclination to give products an unjust bonus for boobs. However, it may not work on me, because I come to the conclusion that a product must be lacking if you try to sell it with dirty tricks.
> Isn't this the same as the authoritarian nightmares that we've been pointing fingers at?
No, because people can make up their minds; otherwise they would have a lot of cars by now. They lose that, however, if you regulate too excessively, because then the decision is already made for you. There are sensible reasons for regulation, so it is a gray area, but I don't see it as helpful here.
Empirical counter evidence in favor of my free will for any practical purpose: There is no irresistible ad.
The wider culture is supposed to counterbalance this.
Taking things to the extreme: maybe people are by default violent and will kill a few people in their lifetime to get their way. But the culture counterbalances this.
Likewise, everyone old enough to have a sex drive has seen enough ads with barely-clad attractive women to know the ruse. Of course, people are still vulnerable to these things (cf. OnlyFans, etc.), but at some point you have to establish that the rules are clear, and if people still want to indulge the fantasy of a beautiful woman by buying a car, well, whatever.
Sure, we still put backstops on this for drugs and gambling. But banning the use of sex as an attention-grabbing tool would be way too wide a net to cast.
If you go that far, then the entire legitimacy of legal systems, judicial systems, penal systems, the world order, etc. falls apart. Free will does not really exist, but what is the alternative to pretending that it does?
Recent scholarship engaging with the impact of digital technology on contract law has suggested that practitioners and researchers will need to give proper consideration to the ‘role of equitable remedies in the context of contracts drafted in whole or part in executable code’. More generally, a raft of challenges stem from the increasingly active role digital technologies play in contractual relations.
Faced with these challenges, instinct may dictate attempting to tame the technological beast with a variety of regulatory responses spanning the full spectrum of possibilities, from legal requirements to voluntary codes of conduct or standards. While regulatory action may be a priority from a public policy perspective, the seeming trustworthiness of algorithms, and the consequent reliance placed on them by contracting parties carry the inherent risk of lack of autonomy and fully‐informed independent decision‐making that, in Australia at least, is addressed by equity through the doctrine of undue influence.
This article explores whether this traditional doctrine can adapt to operate alongside regulation in dealing with some of the challenges presented by algorithmic contracting. Specifically, it focuses on those contracts where algorithms play an active role not only in the execution, but in the formation of the contract, as these are the “algorithmic contracts” that challenge the very fundamentals of contract law.
Cognitive biases are a real problem for free markets, but the question isn't whether free markets are perfect, it's how they compare to the alternatives.
People can make poor choices because of cognitive biases, or they can have choices made for them by other people with cognitive biases. The other people can be unelected, unaccountable leaders, or leaders that are chosen by voters, and politics seems to be where cognitive biases are worst.
In general, I would rather suffer for my own cognitive biases than the biases of elected officials and voters, but that's not to say I advocate for free markets in every scenario, because there is a lot more to consider than individual choice in that discussion.
With freedom, humans can control for this by learning from it. They can see that cars don't necessarily get you women, despite what the ads say. This can also be indirect learning, with someone else pointing it out.
If you start making certain things illegal to say, you can use that against people. For example, given enough money for lawyers, you can sue people for saying that "these cars don't get you women" using the same anti-free speech regulations by finding holes and exceptions in them. History has shown that lawyers can pull this off.
I saw a talk a while ago that argued the sexist marketing of home computers toward boys is likely responsible for the drop in women becoming computer programmers during the 90s.
Something like Lynx (called Axe here) is probably a great example for a different reason: it would not exist were it not for the very advertising that promotes it. The aversion to natural body odour (that is, not a 'sweaty' smell) is instilled by advertising, as is the aversion to 'bad breath'.
Banning advertising would kill these products, because they add no value except to solve the problem they created.
Education is the best mitigation. Knowing that we have biases helps us recognize when it is happening. Most of the pushback to advertising that I've seen uses either asceticism or anti-consumerism language instead of cutting to the heart of what's going on.
> If we know that humans have all sorts of cognitive biases, how come it's ok to use that fact while at the same time we insist there's some kind of free market?
Of course humans aren't perfectly rational but I'd argue it's still a good assumption to make, as a society, because the alternative leads to a very disturbing path. Ultimately, assuming individuals don't really know what's best for themselves can be used to justify all kinds of authoritarian measures from limiting speech to straight up enslavement.
You can discuss persuasion or cognitive bias away until everyone gets an aneurysm, sure. Some examples: You can't not communicate, so even not persuading is persuading. What if a presidential candidate just didn't give a scheduled speech? How do you communicate information objectively? Casual language creates bias; so does scientific language, simple language, passive language, active language. "Cognitive bias" might as well be called "cognition", since it is just how the brain works. You have to think "tree" immediately when you see one, even before you have validated that all the leaves are real and that the whole thing is not a projection on a transparent screen. Otherwise you can't function.
But: Big tech throws us in a situation where a small group of people influences our perception on a massive scale. Facebook changes a sentence on their homepage and a billion people read it. Youtube raises some parameter (yeah I know that's not how AI works) by 0.01 and the political opinion about the Grenfell tower disaster changes ever so slightly - for 30 million people. Google's filter has a tiny hole and some troll broadcasts wrong medical information about gout to 200k people.
Every time one of these things happen, the world shakes. Dozens die or survive. Demonstrations form and elections swing. Opportunities are wasted and ideas surface.
I am not arrogant enough to actually propose an easy solution to this, and I don't think there is one. Just be aware that "I can always go and stab someone" is not a good argument when you are discussing a fully automated drone swarm with kill authority.
I agree scale is very important and it still has too little visibility. Everyone knows the quote "with great power there must also come great responsibility". Well, scale is indeed power. An action applied to one person might not be a big deal. But when the action is applied to or affects thousands, it should require much more consideration and carry much more responsibility. This argument has a lot of applicability in many other areas too.
Twitter's CEO said we can give people the right to speak, but that doesn't mean the right to go viral. I think "viral" is an apt term - these are mind viruses that are 95% harmful.
To your point about scale, once a tweak is made and X or Y "news story" is let loose in the wild, it gets amplified. The scale of the impact isn't linear, it's something more exponential.
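As a toy illustration of that point (all numbers made up), the gap between linear repetition and reshare-driven amplification can be sketched like this:

```python
# Toy model of amplification: if each person who sees a story reshares it
# to r others on average, reach compounds with every "generation" of
# sharing instead of growing linearly. All figures are illustrative only.

def total_reach(seed: int, r: float, generations: int) -> float:
    """Seed audience plus r-fold reshares over several generations."""
    return sum(seed * r**g for g in range(generations + 1))

linear = 1000 * 6                  # showing the story to 1000 people, 6 times
viral = total_reach(1000, 2.0, 5)  # 1000 seed viewers, each reaching 2 more
print(linear, viral)               # 6000 63000.0
```

Same six "rounds" of exposure, an order of magnitude more reach - which is the sense in which the impact is closer to exponential than linear.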
I keep coming back to my default idea here - that PII needs to be seen as legally owned by me and only licensed to others for use. The default legal framework should include medical/epidemiology research as freely licensed and commercial use as ... well, let's just say I think my license conditions will be expensive.
If an advertising channel then shows ads that breach the license, they are liable. A fairly simple licensing process will come into play, and we can find new ways to fund things.
Edit: Yes, I do get a lot of the issues around regulation of tech - it's almost like saying "regulation of everyday life", which is really broad. And the different bodies and approaches will also need to be broad. But I am a believer in markets and individual decision making, and I also believe that personal information has in the past few years become a genuine new ... commodity? And we need to raise that commodity into visibility - to be able to put prices on it openly. Maybe it won't work; maybe privacy is like a human right and can only be dealt with at that level - but I don't think so. Privacy to me seems ephemeral and usually poorly defined. Longer discussion to be had.
If you're a US citizen with a Facebook account, your value to Facebook is about $200 per year. That's the average, including children, seniors, etc. If you're a tech worker in your 30s, it's probably 3x as much.
If you want to use Facebook without targeted advertising, you need to either convince them that they don't need all that money, or pay it yourself. And that's just Facebook.
In other words: the internet economy without targeted ads will be a very different place. Facebook will survive. imgurl / snopes / fivethirtyeight? Unlikely...
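Back-of-envelope, taking the ~$200/year figure and the 3x multiplier above at face value (both are this thread's guesses, not official numbers), the subscription price that would replace that ad revenue works out to roughly:

```python
# What would an ad-free Facebook have to charge to replace targeted-ad
# revenue? Figures are the rough per-user values guessed above.

annual_arpu_us = 200                     # ~average US revenue per user, $/year
monthly_average = annual_arpu_us / 12    # flat fee for the average user
monthly_heavy = 3 * annual_arpu_us / 12  # a "3x" user, e.g. a tech worker

print(round(monthly_average, 2))  # 16.67
print(round(monthly_heavy, 2))    # 50.0
```

That is, somewhere between a Netflix and a cable bill per person, per platform.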
I always wondered what's stopping a rival to Facebook that offers users a share of the money they are generating. Besides the scale barrier to entry, of course. Generally speaking, people like money and will switch to products and services that put money in their pockets.
I mean, you already have that via the Facebook EULA for example. You give them a license to use your data however they want and in return you get access to Facebook. You can not accept their terms and in return they can not allow you to use Facebook.
Would these limits apply to essays written by people who have for good reason cultivated a following of people generally interested in what they write and positively inclined towards agreement with them?
Maybe this essay should be forced to be presented on essays.com without attribution and compete on the ideas within rather than the implied authority of the author.
(I happen to agree with a lot of the content, but couldn’t completely compartmentalize that this was a persuasive essay against persuasion.)
There are two hilarious subtexts that always accompany these sorts of arguments:
-It's okay to use persuasive technology to push political orientations I agree with
-We want to defend democracy but we implicitly agree as commentators above the fray that people can't be trusted to make the right decisions and have to be manipulated towards our preferred orientation
I sometimes ask myself, in 20 years will we begin to see class action lawsuits directed at technology platforms that use notification triggered dopamine releases for growing engagement?
Imagine an advertisement so effective that anyone who saw it would immediately buy the product at whatever price was asked, as long as they could afford it.
Imagining this, we can see that there is obviously some limit beyond which advertising must be curbed; admitting this, the question just becomes: at what point must advertising actually be curbed?
Of course, one could argue that such an advertisement must be for a product so wonderful and useful that everyone would want it - let us say immortality with youth and good looks - but if that were the case, such a product would be recommended enough by word of mouth and by the evidence of all the old people becoming young and good-looking when taking it.
> The technology exists to take your likeness and morph it with a face that is demographically similar to you. The result is a face that looks like you, but that you don’t recognize. If that turns out to be more persuasive than coarse demographic targeting, is that okay?
I have wondered if in the future, movies/TV shows will be personalized with your name. I have found that when I watch a movie where the main character shares my name (and therefore other characters say the name a lot when talking to/about him), it makes the experience more immersive. It would seem fairly trivial to substitute in other names in a pretty smooth way (it would get tougher if you had to also adapt things like a business card that is visible on-screen, however).
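At the subtitle level, at least, the swap really is trivial. A hypothetical sketch (the function and names are made up for illustration; dubbed audio or on-screen props would be much harder):

```python
import re

def personalize(subtitle: str, hero: str, viewer: str) -> str:
    """Swap the protagonist's name for the viewer's, whole words only."""
    # \b word boundaries keep us from rewriting names embedded in
    # longer words (e.g. "John" inside "Johnson").
    return re.sub(rf"\b{re.escape(hero)}\b", viewer, subtitle)

line = "Look out, John! John, behind you!"
print(personalize(line, "John", "Alice"))  # Look out, Alice! Alice, behind you!
```

A streaming service could apply something like this per-profile to the subtitle track alone, with no change to the video at all.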
Of course, this could be taken to the next level if you changed the faces of the actor via deepfake-like technology. I don't know how actors would feel about this sort of thing, but hey maybe it would open up the door for a bunch of new actors, who would essentially be a blank slate for customized faces. Imagine a world where Hollywood actors don't have to be good-looking!
>The New York Times once experimented by predicting the moods of readers based on article content to better target ads, enabling marketers to find audiences when they were sad or fearful
Can we maybe go one step back in the discussion and not only discuss what we should do about it, but simply ask: does it even work?
There's Zuboff's book about surveillance capitalism that echoes much of what the blog post talks about, that recent Netflix documentary that everyone was talking about, and so on, but how much evidence is there that this isn't all just mostly bullshit?
When the Cambridge Analytica scandal broke, they used the buzzword 'psychographic targeting'. Turns out, psychographic targeting doesn't even really work[1]. A relative recently sent me an article about China allegedly using mind-control helmets to control the thoughts of children, attached to a picture of children wearing helmets with blinking lights. I have yet to see a facial emotion detection system that labels Harold[2] of "Hide the Pain" meme fame as anything other than 'happy'.
I'm more afraid of how bogus all these systems are and the unquestioning power people attribute to tech, which itself enables these firms. It's no wonder they keep inviting Yuval Noah Harari for talks, they must feel flattered.
Let me be upfront about my bias: since childhood I felt that TV commercials were a form of assault - assault on the mind. (So it wasn't too surprising to find out that it's literally the domestic use of war propaganda techniques. It feels like assault because it is assault.) So, FWIW, I'm in favor of banning advertising more-or-less comprehensively. (And, yes, I realize that sounds terrible to a lot of people for a lot of reasons. I'm not trying to persuade you that it's a good idea. That would be hypocritical, wouldn't it?)
Anyhow, if you go look at the Wikipedia entry for "Neurolinguistic Programming" you'll see that it's coated in warnings that it's a pseudoscience. So that's where we stand today in re: state-of-the-art persuasion technology. Most folks have never heard of NLP, many who have are openly skeptical (verging on hostile), and some people have entered into the Information Revolution proper. (Meaning they know and use this body of knowledge.)
Given that we have allowed software to fall under patents, and that this knowledge constitutes the software of the mind, I'm reluctant to have it "go mainstream". I'd hate to imagine patent wars over IP that is essentially just structured thought...
On the other hand, ignorance of the "operating system" of the mind causes unspeakable suffering. (I myself was cured of severe depression, just to add a personal, anecdotal, note.) And the differential between folks who know and folks who don't is also problematical. That would seem to argue in favor of rapid and widespread dissemination.
Then there's the problem of the self-referential nature of persuasion and the limits thereof: can you limit persuasion if I can persuade you not to? Either persuasion tech doesn't work and so the laws are unnecessary, or it does work and can be used to affect its own regulation.
The tech and the society around it are the problems, not the solutions. The solutions are cultural. If you want to re-establish a private sphere after its complete erosion in the last 25 years, you will have to take on the entire leviathan. IMO, the next wave of disruptive technologies are going to be about providing just that.
A not-insignificant number of people don't even believe in psychology as a concept, so I'm thinking it's gonna be an uphill battle just to define the problem in a legislatively-useful fashion.
One of the comments has an amazing line: “Persuasion is at the heart of information security. Not 'information technology security', but the security a person has about their ability to make information-based decisions about which actions they can take in their best interest.”
That being said, it seems the event that has brought the concern over persuasive technologies to the fore is the election of Donald Trump. New sorts of persuasive technologies may have put Trump over the finish line, but I'm pretty convinced that he ran a competitive campaign without them. He identified policies and cultural grievances (immigration, anti-elite sentiment, trade, and white identity politics, for example) that no one else was talking about, and ran with them. I dislike that these subjects resonated with enough of my fellow Americans to win an election, but they did. To retreat into a mindset of "those rubes must have been tricked" is to deny them agency, and is problematic in and of itself.
We live in the attention economy. We have media monopolizing that attention.
It's the Supermarket of Ideas™, right? Maybe the Freedom Speeches™ and Freedom Markets™ camp followers could advocate for some competition.
I dunno, maybe something crazy, like a doctrine of giving interventions their fair share of oxygen. For example: After Alex Jones spins up the peanut gallery, trained psychiatrists can talk them all off the ledge. Make sure they all get a cookie and some nap time.
Highly interesting topic. I am currently planning to write my thesis about persuasive technology. If anyone has articles, ideas, stories, books or papers to share, I would highly appreciate it!
I thought about researching how humans react to persuasive technology and start self-regulating by installing ad blockers/deleting apps...
I am all ears if there are other interesting questions you might think of :)
> there are limits to how much alcohol you can drink
Pedantic, but there are only limits on how much you can buy while clearly drunk. Although, I suppose passing out is sort of a limit on how much you can drink.
The article is not honest in the way it presents some of its supporting arguments. It tries to make it seem as if society is affected by these technologies in a vacuum, or as if polarization in America just happened magically, but this is far from the truth. When we look at technology like that, we can't do a proper analysis; we become parrots of simple platitudes.
[+] [-] lordnacho|5 years ago|reply
---
If we know that humans have all sorts of cognitive biases, how come it's ok to use that fact while at the same time we insist there's some kind of free market?
Say you discover that putting good-looking women next to cars causes the sale of cars to increase. Why does nobody question whether it is legitimate to do so? It's as if there's a line between actively lying ("Studies show that men who buy this car will find many many women attracted to them") and just putting it there suggestively, for some as yet undescribed but working cognitive bias to do its magic.
Some advertisers even make a joke out of it, eg the Lynx ads where the dude is thronged by a huge horde of women. It's a cliché, for a good reason.
I suppose most people will just say you have free will and it's your own fault for thinking what was suggested, but I sense this is more of a grey zone than most people are willing to admit. How can the free market work if everyone is so easily affected by suggestion?
---
Of course this also applies to the free market in ideas. In what sense are people free to make up their minds if it's decided for them what they should see, whether or not the government is doing it or FB? Isn't this the same as the authoritarian nightmares that we've been pointing fingers at?
[+] [-] stickfigure|5 years ago|reply
Human beings are not computers. No aspect of human behavior is perfectly logical or rational. You simply cannot ban emotional appeals in principle, because all appeals have an emotional component. This way lies a dystopia far more terrible than "grandpa shared some nonsense on facebook".
[+] [-] zupatol|5 years ago|reply
We wouldn't lose much if advertising would just be forbidden. A traditional definition of it would probably be enough to get rid of most of it. The main problem is that huge sums of money are involved, and almost all media profit from it to some extent, so there's a huge incentive to shut down this conversation.
[+] [-] deegles|5 years ago|reply
However, ads or fake news article that takes many exposures to reprogram you (by using the cognitive bias back channels as you mention) are perfectly fine. ¯\_(ツ)_/¯
[+] [-] missedthecue|5 years ago|reply
Well obviously that's not what it means. What it means is that the speed of light is rather impossibly constant. Likewise, economics has this same issue. "Free Market" does not mean everyone has complete free will and has total immunity to persuasion. "Free Markets" defines a philosophy of trade where individuals can own and sell property. Whether or not people are 'easily affected by suggestion' as you put it, has got no impact on the concept of free markets.
[+] [-] coding123|5 years ago|reply
And this has changed me so much. Even seeing that stereo typical hot-rod style cars that you are clearly described as attracting women - I actively dislike people that show off even. I have never had an FB account.
I don't know where I'm going with this but yeah, cognitive biases and such and controlling people. I guess just trying to say, it doesn't work on everyone - not identically and as expected at least.
[+] [-] agent008t|5 years ago|reply
1. Philosophical - you don't want to get the threat of physical force involved unless there is a very very good reason for doing so. That is, you don't want to make things illegal. So we forbid outright lying and fraud, openly misleading customers, but allow things that you describe. If you have a private online platform, why should you not be allowed to or be compelled to moderate its content? Yes, tech companies will try to maximize user engagement, but what else should they be trying to maximize? It is up to the rest of society to develop a proper culture and information hygiene to shape their demand, and tech companies will adapt. It can develop organically and passed from generation to generation, or you can try to expedite the process through schools, media and other mechanisms that exist in a society.
2. Practical - people tend to get interested in forbidden things, and they seem to like their biases. In the Soviet Union, for example, perhaps most people were fascinated by the Hollywood movies of the 70s and 80s, and even by the way Western brands were advertised. They seemed 'cooler' than the Soviet products, which were mostly devoid of marketing. Try to eliminate most biases, and people will vote with their feet. Arguably a better approach is to allow things, but in good faith educate people on how best to deal with them and why that is the right way of dealing with them. Same way a lot of parents today would explain to their children why they should stay away from cigarettes, for example, or wash their hands before eating.
[+] [-] roenxi|5 years ago|reply
I can't argue with that, but the alternatives are worse. If you centralise power, sooner or later the advertising exec gets control of the powerful body, and now you can't choose to resist even if you can see that what is happening is bad.
A key part of the free market is precisely that the world is actually quite predictable. The fact that people sometimes make predictably bad choices doesn't especially undermine the free market. The market doesn't require people make good choices, it just redirects resources to people who make better choices than the average.
[+] [-] qwerty456127|5 years ago|reply
I doubt pretty girls in ads are actually supposed to mean the goods they advertise make a man more attractive. Who (except teens) would believe that, consciously or subconsciously? You just get attracted yourself and that's enough, simply seeing a pretty girl fires the hormones and neurotransmitters making you feel good about what she advertises, no semantic load necessary.
[+] [-] indigochill|5 years ago|reply
The point is, people don't buy things for their own sake. They buy them for what they think the thing can do for them. Any car will get you to point B, but some people will go for a cheap, utilitarian car because they want to save money (or maybe the utilitarian aesthetic is their thing), while others will go for the flashy car for that feeling of sex appeal (even if women don't suddenly fall all over a new car owner, the feeling of confidence is a social benefit to the buyer, even if that's not worth the asking price).
More generally, though, all communication has this sort of color to it. We see anti-privacy legislation being touted as protecting children and fighting crime. Small talk is not really about sports. So I don't think it's realistic to legislate persuasion. I would probably be behind making formal logic a part of public school curriculum, though, so people are better equipped to discern for themselves when persuasion they're exposed to is nonsense (among other benefits).
[+] [-] raxxorrax|5 years ago|reply
Associating emotions with a product, or simply grabbing attention, is an old trick that existed before modern marketing. The smoking example can be generalized to fashion, and there are mechanisms like peer pressure involved that certainly incentivize consumption. But they do not override your will. Addictions can, but even then I would say free will is still involved.
> but I sense this is more of a grey zone than most people are willing to admit.
I do believe ads affect me, but the scope is limited, and regulation would be more draconian than my natural inclination to give products an unjustified bonus for boobs. It may not even work on me, because I tend to conclude that the product must be lacking if you have to sell it with dirty tricks.
> Isn't this the same as the authoritarian nightmares that we've been pointing fingers at?
No, because people can make up their own minds; otherwise they would own a lot of cars by now. They lose that, however, if you regulate too excessively, because then the decision is already made for them. There are sensible reasons for regulation, so it is a gray area, but I don't see it as helpful here.
Empirical counter-evidence in favor of my free will for any practical purpose: there is no irresistible ad.
[+] [-] prionassembly|5 years ago|reply
Taking things to the extreme: maybe people are by default violent and will kill a few people in their lifetime to get their way. But the culture counterbalances this.
Likewise, everyone old enough to have a sex drive has seen enough advertising featuring barely-clad attractive women to know the ruse. Of course, people are still vulnerable to these things (cf. OnlyFans, etc.), but at some point you have to establish that the rules are clear, and if people still want to indulge the fantasy of a beautiful woman by buying a car, well, whatever.
Sure, we still put backstops on this with drugs and gambling. But banning the use of sex as an attention-grabbing tool would cast far too wide a net.
[+] [-] jacques_chester|5 years ago|reply
Just today, in fact, I read this article: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3697726
> Recent scholarship engaging with the impact of digital technology on contract law has suggested that practitioners and researchers will need to give proper consideration to the 'role of equitable remedies in the context of contracts drafted in whole or part in executable code'. More generally, a raft of challenges stem from the increasingly active role digital technologies play in contractual relations.
> Faced with these challenges, instinct may dictate attempting to tame the technological beast with a variety of regulatory responses spanning the full spectrum of possibilities, from legal requirements to voluntary codes of conduct or standards. While regulatory action may be a priority from a public policy perspective, the seeming trustworthiness of algorithms, and the consequent reliance placed on them by contracting parties, carry the inherent risk of a lack of autonomy and fully informed independent decision-making that, in Australia at least, is addressed by equity through the doctrine of undue influence.
> This article explores whether this traditional doctrine can adapt to operate alongside regulation in dealing with some of the challenges presented by algorithmic contracting. Specifically, it focuses on those contracts where algorithms play an active role not only in the execution, but in the formation of the contract, as these are the "algorithmic contracts" that challenge the very fundamentals of contract law.
[+] [-] mrfredward|5 years ago|reply
People can make poor choices because of cognitive biases, or they can have choices made for them by other people with cognitive biases. The other people can be unelected, unaccountable leaders, or leaders that are chosen by voters, and politics seems to be where cognitive biases are worst.
In general, I would rather suffer for my own cognitive biases than the biases of elected officials and voters, but that's not to say I advocate for free markets in every scenario, because there is a lot more to consider than individual choice in that discussion.
[+] [-] AntiImperialist|5 years ago|reply
If you start making certain things illegal to say, you can use that against people. For example, given enough money for lawyers, you can sue people for saying that "these cars don't get you women" using the same anti-free speech regulations by finding holes and exceptions in them. History has shown that lawyers can pull this off.
[+] [-] raverbashing|5 years ago|reply
Of course I don't think blandifying ads is the way to go. Making ads attractive is a cultural phenomenon.
An implicit suggestion of fame, class, or social status is fine. It's part of the game; it's only a suggestion, in the end.
But everything has a limit. What might be acceptable in an adult ad might not be so acceptable for a teenager or a kids ad.
More worrying is propaganda, which is misleading/false in non-obvious ways and not explicitly an ad.
[+] [-] rimiform|5 years ago|reply
Banning advertising would kill these products, because they add no value except to solve the problem they created.
[+] [-] erichocean|5 years ago|reply
People should affirmatively seek out sales materials, not be inadvertently subjected to them.
[+] [-] olalonde|5 years ago|reply
Of course humans aren't perfectly rational, but I'd argue it's still a good assumption to make, as a society, because the alternative leads down a very disturbing path. Ultimately, the assumption that individuals don't really know what's best for themselves can be used to justify all kinds of authoritarian measures, from limiting speech to straight-up enslavement.
[+] [-] blackbrokkoli|5 years ago|reply
The new thing, and the key point here, is scale.
You can discuss persuasion or cognitive bias away until everyone gets an aneurysm, sure. One example: you can't not communicate, so even not persuading is persuading. What if a presidential candidate simply didn't give a scheduled speech? How do you communicate information objectively? Casual language creates bias, and so do scientific language, simple language, passive language, active language. "Cognitive bias" might as well be called "cognition", since it is just how the brain works. You have to think "tree" immediately when you see one, even before you have validated that all the leaves are real and that the whole thing is not a projection on a transparent screen. Otherwise you can't function.
But: Big tech throws us in a situation where a small group of people influences our perception on a massive scale. Facebook changes a sentence on their homepage and a billion people read it. Youtube raises some parameter (yeah I know that's not how AI works) by 0.01 and the political opinion about the Grenfell tower disaster changes ever so slightly - for 30 million people. Google's filter has a tiny hole and some troll broadcasts wrong medical information about gout to 200k people.
Every time one of these things happens, the world shakes. Dozens die or survive. Demonstrations form and elections swing. Opportunities are wasted and ideas surface.
I am not arrogant enough to propose an easy solution to this, and I don't think there is one. Just be aware that "I can always go and stab someone" is not a good argument when you are discussing a fully automated drone swarm with kill authority.
[+] [-] lifeisstillgood|5 years ago|reply
If an advertising channel then shows ads that breach the license, they are liable. A fairly simple licensing process will come into play, and we can find new ways to fund things.
Edit: Yes, I do get a lot of the issues around regulation of tech - it's almost like saying regulation of everyday life, which is really broad. And the different bodies and approaches will also need to be broad. But I am a believer in markets and individual decision-making, and I also believe that personal information has in the past few years become a genuine new ... commodity? And we need to raise that commodity into visibility - to be able to put prices on it openly. Maybe it won't work; maybe privacy is like a human right and can only be dealt with at that level - but I don't think so. Privacy to me seems ephemeral and usually poorly defined. Longer discussion to be had.
[+] [-] IfOnlyYouKnew|5 years ago|reply
If you want to use Facebook without targeted advertising, you need to either convince them that they don't need all that money, or pay it yourself. And that's just Facebook.
In other words: the internet economy without targeted ads will be a very different place. Facebook will survive. imgurl / snopes / fivethirtyeight? Unlikely...
[+] [-] sokoloff|5 years ago|reply
Maybe this essay should be forced to be presented on essays.com without attribution and compete on the ideas within rather than the implied authority of the author.
(I happen to agree with a lot of the content, but couldn’t completely compartmentalize that this was a persuasive essay against persuasion.)
[+] [-] Bakary|5 years ago|reply
-It's okay to use persuasive technology to push political orientations I agree with
-We want to defend democracy but we implicitly agree as commentators above the fray that people can't be trusted to make the right decisions and have to be manipulated towards our preferred orientation
[+] [-] bryanrasmussen|5 years ago|reply
Imagining this, we can see that there is obviously some limit beyond which advertising must be curbed, and admitting this, the question just becomes: at what point must advertising actually be curbed?
Of course one could argue that such an advertisement must be for a product so wonderful and useful that everyone would want it - let us say immortality with youth and good looks - but if that were the case, such a product would be recommended well enough by word of mouth and by the evidence of all the old people becoming young and good-looking when taking it.
[+] [-] gnicholas|5 years ago|reply
I have wondered if in the future, movies/TV shows will be personalized with your name. I have found that when I watch a movie where the main character shares my name (and therefore other characters say the name a lot when talking to/about him), it makes the experience more immersive. It would seem fairly trivial to substitute in other names in a pretty smooth way (it would get tougher if you had to also adapt things like a business card that is visible on-screen, however).
Of course, this could be taken to the next level if you changed the faces of the actor via deepfake-like technology. I don't know how actors would feel about this sort of thing, but hey maybe it would open up the door for a bunch of new actors, who would essentially be a blank slate for customized faces. Imagine a world where Hollywood actors don't have to be good-looking!
[+] [-] Barrin92|5 years ago|reply
Can we maybe go one step back in the discussion and, rather than only discussing what we should do about it, simply ask: does it even work?
There's Zuboff's book about surveillance capitalism that echoes much of what the blog post talks about, that recent Netflix documentary that everyone was talking about, and so on. But how much evidence is there that this isn't all just mostly bullshit?
When the Cambridge Analytica scandal broke, they used the buzzword 'psychographic targeting'. Turns out psychographic targeting doesn't even really work[1]. A relative recently sent me an article about China allegedly using mind-control helmets to control children's thoughts, along with a picture of children wearing helmets with blinking lights. I have yet to see a facial-emotion-detection system that labels Harold[2] of "Hide the Pain" meme fame as anything other than 'happy'.
I'm more afraid of how bogus all these systems are and the unquestioning power people attribute to tech, which itself enables these firms. It's no wonder they keep inviting Yuval Noah Harari for talks, they must feel flattered.
[1]https://www.nature.com/articles/d41586-018-03880-4
[2]https://static.independent.co.uk/s3fs-public/thumbnails/imag...
[+] [-] carapace|5 years ago|reply
Anyhow, if you go look at the Wikipedia entry for "Neurolinguistic Programming" you'll see that it's coated in warnings that it's a pseudoscience. So that's where we stand today in re: state-of-the-art persuasion technology. Most folks have never heard of NLP, many who have are openly skeptical (verging on hostile), and some people have entered into the Information Revolution proper. (Meaning they know and use this body of knowledge.)
Given that we have allowed software to fall under patents, and that this knowledge constitutes the software of the mind, I'm reluctant to have it "go mainstream". I'd hate to imagine patent wars over IP that is essentially just structured thought...
On the other hand, ignorance of the "operating system" of the mind causes unspeakable suffering. (I myself was cured of severe depression, just to add a personal, anecdotal, note.) And the differential between folks who know and folks who don't is also problematical. That would seem to argue in favor of rapid and widespread dissemination.
Then there's the problem of the self-referential nature of persuasion and the limits thereof: can you limit persuasion if I can persuade you not to? Either persuasion tech doesn't work and so the laws are unnecessary, or it does work and can be used to affect its own regulation.
[+] [-] topkai22|5 years ago|reply
That being said, it seems the event that brought the concern over persuasive technologies to the fore is the election of Donald Trump. New sorts of persuasive technologies may have pushed Trump over the finish line, but I'm pretty convinced that he ran a competitive campaign without them. He identified policies and cultural grievances (immigration, anti-elite sentiment, trade, and white identity politics, for example) that no one else was talking about, and ran with them. I dislike that these subjects resonated with enough of my fellow Americans to win an election, but they did. To retreat into a mindset of "those rubes must have been tricked" is to deny them agency, and that is problematic in and of itself.
[+] [-] specialist|5 years ago|reply
We live in the attention economy. We have medias monopolizing that attention.
It's the Supermarket of Ideas™, right? Maybe the Freedom Speeches™ and Freedom Markets™ camp followers could advocate for some competition.
I dunno, maybe something crazy, like a doctrine of giving interventions their fair share of oxygen. For example: After Alex Jones spins up the peanut gallery, trained psychiatrists can talk them all off the ledge. Make sure they all get a cookie and some nap time.
Call it the Milk & Cookies Fairness Doctrine.
[+] [-] HabitCaptology|5 years ago|reply
I thought about researching how humans react to persuasive technology and start self-regulating by installing ad blockers or deleting apps...
I am all ears if there are other interesting questions you might think of :)
[+] [-] stronglikedan|5 years ago|reply
Pedantic, but there are only limits on how much you can buy while clearly drunk. Although I suppose passing out is sort of a limit on how much you can drink.