This presentation briefly mentions, but then seems to mostly forget about, "Elsagate", which it calls the "Peppa Pig scandal".
James Bridle argues convincingly that the genre of bizarre YouTube videos which appeals to the toddler reptilian brain ( https://medium.com/@jamesbridle/something-is-wrong-on-the-in... ) is not created by hostile or evil actors but instead has evolved organically based on what stuff toddlers want to click on. Kids' click patterns reward video themes like "Elsa tied up on train tracks kissing Spiderman", so the content industry crams more of that stuff into its new content.
The result, after a few iterations, would not have passed editorial controls at 1990s Nickelodeon (!), which would normally have halted the feedback loop, but with no one at the helm -- to "censor" or otherwise exert editorial control -- YouTube's kid-targeted videos are just a whole forest of weird.
Does YouTube want to allow their platform to become a laboratory for rapidly discovering local maxima in very young children's fantasy worlds? Do they have any choice? Should they step in and publish rules for what children's content is allowed? Should they hire some kind of human curator or editor to enforce those rules for child-focused videos? Should Web platforms act in loco parentis?
In this case, the "Peppa Pig scandal"-style situation, the producers are machine-generating content that gets clicks, and the consumers are children.
When the issue is the viral proliferation of "fake news" and hate speech, the content producers are people or state propaganda apparatuses, and the consumers & re-sharers are grown adults.
It seems like it's a different topic with maybe different guiding principles to decide how & whether to censor these different groups of consumers & producers.
Kids' videos on YouTube Kids and Censoring the internet are very different things.
It's like saying we don't want to sell alcohol, because if we did, we would have to sell alcohol to kids.
> Kids' click patterns reward more video themes like "Elsa tied up on train tracks kissing Spiderman", so the content industry crams more of that stuff into its new content.
To be fair, many of the Tex Avery and Tom and Jerry cartoons that almost everyone grew up with were a lot wilder than that; thankfully they weren't censored back when we were kids.
The problem is that the whole YouTube UI is machine learned to maximize engagement, so that they can show a lot of ads. The algorithm will do whatever it can to get people to watch more YouTube. We notice the weird results when it comes to videos for toddlers, but the same thing is happening to adults, we just don't see it in quite the same way -- it's always easier to see self-destructive behavior and make attributions from the outside.
Ultimately, interacting with software that has been machine learned for a metric that doesn't serve you or your kids' interests amounts to deliberately swallowing a parasite.
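To make that feedback loop concrete, here's a toy epsilon-greedy sketch — the themes and click probabilities are entirely invented — showing how an optimizer that sees only clicks, never "quality", drifts toward whatever gets clicked:

```python
import random

random.seed(0)

# Hypothetical themes with invented true click probabilities.
# The optimizer never sees these numbers, only the clicks they produce.
themes = {
    "alphabet song": 0.30,
    "nursery rhyme": 0.35,
    "elsa/spiderman mashup": 0.70,
}

shows = {t: 1 for t in themes}
clicks = {t: 0 for t in themes}

def recommend() -> str:
    # Epsilon-greedy engagement maximizer: occasionally explore,
    # otherwise exploit the theme with the best observed click rate.
    if random.random() < 0.1:
        return random.choice(list(themes))
    return max(themes, key=lambda t: clicks[t] / shows[t])

for _ in range(10_000):
    t = recommend()
    shows[t] += 1
    if random.random() < themes[t]:
        clicks[t] += 1

# The most-shown theme is simply whichever one toddlers click most.
print(max(shows, key=shows.get))
```

Nothing in the loop encodes what's good for the viewer; "Elsa/Spiderman" wins purely because it gets clicked, which is the whole problem.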
I'm actually glad to see the conclusion. Everybody wants to censor what they don't like. There are people who believe free speech is not the ultimate value, but rather that protecting people from 'harm' is. In this case, 'harm' being ideas they don't agree with.
The best stance is not to take a side, but to make sure both sides are civil in the expression of their beliefs.
> "Everybody wants to censor what they don't like."
This should be the top comment on all HN discussions involving content moderation, so people can read it before they respond and think about whether what they are advocating makes sense. People these days are so effortlessly able to make the huge leap from "I don't personally like this thing" to "This thing must be suppressed/illegal for everyone."
You can say that, but from the beginning governments have put limits on speech based on "harm" from speech. Slander, libel and threats are all illegal due to the contention that they cause harm. Taking no side is also similar to this: taking no side in a conflict as long as it is civil requires one to decide what is "civil" in the first place. Are comments about race considered "threats" if one person is of the race targeted? How about coarse comments like telling someone to "kill themselves"? It isn't so simple, and even being neutral is a stance of a kind.
BTW, I don't think being neutral in a conflict like censoring or not is the wrong stance, but it IS a stance with its own value judgements.
I'm speaking for myself here, but I think the Russian bots as well as astroturfing bots need to be censored. And not censored by government regulation but by the terms of service. There are many "active measures" Russian bots active on Twitter. Twitter doesn't seem concerned at all, mostly because those Russian bots are padding their numbers; same goes for FB and reddit.
This, 100000% over. Otherwise they are becoming the new kingmakers, manipulating people often without them even knowing it. Incidentally, there is a good documentary now on Amazon Prime called "The Creepy Line" that is worth watching.
>The best stance is not to take a side, but to make sure both sides are civil in the expression of their beliefs.
I agree with that. It would help if these companies stopped seeking "virality" and "eyeballs" or whatever metaphor you want to use for user engagement and "notoriety/reward".
Otherwise, this is exactly the same argument China's (or Russia's) censors would make. Absolutely not different in any way.
What "both sides", though? The alt-right isn't in opposition to anything mainstream; they are anti-democratic and should be treated the same way antifa, FARC, ISIS and others are.
Every time democracy fails to shut down movements like the alt-right, things end badly for democracy itself. The most used trope is stuff like nazism and fascism, but history actually has a lot of better examples.
Because both Napoleon and Caesar effectively ended democracy with applause, unlike Hitler.
I think it's extremely dangerous to treat anti-democratic forces as equal to democratic ones. I'm a conservative, by the way, so it's not like I'm not concerned about the liberal bias in the tech sector. But the debates we have these days, about forcing platforms to include outright anti-democratic values, are crazy.
Just because a belief is civilly expressed does not make it less harmful. For example: people casually believing that the Sandy Hook shooting was a conspiracy drummed up by the government is actively harmful to the families affected by that tragedy. What do you say to those people? How do you solve that problem? And how do you define 'civil'?
Because different platforms have different ranges of civility as well.
> There are people who believe free speech is not the ultimate value
A right to free speech is a recent, American-centric invention, rather than a natural cornerstone of democracy, a fact that often seems lost on Americans.
Slide 66 is particularly interesting, and articulates an observation that I've made:
> Tech firms are performing a balancing act between two incompatible positions
> 100% commit to the American tradition that prioritises free speech for democracy, not civility
> 100% commit to the European tradition that favors dignity over liberty, and civility over freedom
It is a very American view that unrestricted freedom of speech is a requirement for a well-functioning democracy, and that any restriction of this beyond censoring direct calls to violence is evil.
As a European, I have to admit that the American model is vastly superior. Europe had laws to protect "civility" even before the great wars. American democracy is also more successful in general.
Europe just had the "luck" that censorship didn't become very necessary. In some places it was used extensively and those places are no more today.
And lastly, stripping someone's voice directly implies stripping their dignity.
> It is a very American view that unrestricted freedom of speech is a requirement for a well-functioning democracy
It is similarly a very American view that unrestricted freedom of speech is a requirement for a well-functioning internet (well, not completely "unrestricted", but we're not here to argue nuance). Avoiding the obvious debate on which is better, the problem of choice exists in a global medium. Lest it become balkanized, you will have to choose an approach both as a company and as a set of laws. Wrt laws, restrictions are added much more often than they are removed so we should probably err on the side of fewer/limited-scope restrictions. I think most would prefer the greatest common factor of freedoms vs the lowest common denominator of restrictions.
I don't see any incompatibility in the real world.
In Europe you are free unless you cause other people harm. If you cause other people harm, it must be decided whether that was justified (self-defense) or not (a criminal offense).
I would say that in the US the term "freedom" is more liberally used to make a "criminal offense" look like "self-defense", though I am being a bit cynical now.
Dignity also means that you can speak your mind freely.
It's interesting that this document calls the Arab Spring "the high point in positivity" of the internet. From what I understand, most countries are even worse off now. Libya in particular is still stuck in civil war to this day.
The Arab Spring was about standing up to oppressive regimes. This might be why it's referred to as "the high point in positivity". But as you noted, this in turn created a massive power vacuum followed by seemingly endless chaos and instability in the region. However, most people are ignorant of it because it's seldom discussed in the media. Why it's not discussed in the media then becomes a political debate.
“Under section 230 of the Communications Decency Act, tech firms have legal immunity from the majority of the content posted on their platforms (unlike ‘traditional’ media publications). This protection has empowered YouTube, Facebook, Twitter and Reddit to create spaces for free speech without the fear of legal action or its financial consequence.”
Censoring content is suspiciously like editorializing content, and it moves social media a lot closer than they want to be to being publishers rather than platforms.
Proceeding with censorship seems to carry the risk of losing immunity under Section 230.
> Proceeding with censorship seems to carry the risk of losing immunity under Section 230.
Nope. The whole point of Section 230 is platforms can make and enforce rules, and still have a safe harbor. Otherwise, they all would have lost their safe harbor decades ago.
This is a very lazy way to view things, considering oneself a platform instead of a publisher, I mean. We all agree that the press by itself is free in the Western world, including the US. So there is, as a direct consequence, nothing wrong with editing.
By being a platform you can basically have it both ways: compete with journalism without being journalism. And once that is causing issues, you say "freedom of speech" and you are off the hook.
Maybe in the beginning that was even true. Now, not so much anymore, I guess.
Everything else was rather unsurprising, pretty much what I expected. But that was the part that I had to read again. Hoping they just copied that from a 2013 pamphlet. If they consider Libya's slave markets ( https://www.cnn.com/2017/11/14/africa/libya-migrant-auctions... ) and Syria the high point of positivity, I'd wonder what they consider the low point.
This is a reasonable point to make as a consequentialist with 20/20 hindsight.
But it's fair to value the act of getting freedom from oppressive regimes as a good in itself, independent of the power vacuum it creates. And it's fair to assess a movement based on its priors of a possible positive outcome, rather than purely retrospectively once they haven't come to pass.
I just finished skimming the whole thing—did Google just trick Breitbart into publishing propaganda for them? (I'm reminded of the Valve "employee handbook.")
Not really. It'll have the same result as literally every other revelation in the media.
The side completely, 100% opposed to any restrictions on speech (including "policing tone" as outlined in this presentation) will see this as corporate meddling in people's expression.
The side that supports speech regulation will see this as a structured plan to curb the kind of speech that's considered "harmful" or whatnot.
This presentation doesn't contain anything outside the current ideological dichotomy, no original or unorthodox thoughts or ideas.
If they did, it was a serious own goal. For the last year, everything coming out about Google has been pretty negative, and this seems to fit the pattern.
The underlying premise of this, if real, is that these companies and their centralized platforms ought to exist. "Never expect a man to understand something his salary depends upon him not understanding" and all that.
Well, given that it's done by Google employees of course they'll want to continue to wield their power. For example, they mention "regulation" as a punch-line, IOW, the assumption is that regulation of any form is bad.
dignity/civility should be a self-determined value rather than one instated by authority. it is impossible to create a universal standard of conduct that will be adequate for everyone, and we don't need to. you can call me a faggot or tell me to kill myself as much as you like and i really won't care at all; i want to see everything short of direct and malicious efforts to cause me real life harm (and arguably i want to know about those too). on the flip side you have older users who may be completely averse to coarse language or any interaction that falls beyond tv standards of etiquette; things that are relatively tame by internet standards may be completely offensive to them. how do you create a set of standards across your product that are adequate for both use cases without artificially limiting appeal to a single audience?
empower users with the tools to define their own experiences and, if they so choose, filter out what they don't want without assuming what that is. perhaps you could even let users display their version of a content rating so that others could see what content they will and will not filter rather than the communication breakdown that emerges when some messages are opaquely censored. you could even use an honorific system to better determine what is an adequate level of filtering between two users. you might not want to hear 'i'm gonna fucking kill you' from a total rando, but if it comes from your spouse you probably should. don't tell the customers what they want when you don't know the answer.
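that proposal can be sketched in a few lines (the category names, scores, and thresholds below are all invented for illustration); the key design choice is that filtering is a per-viewer setting rather than a global deletion, with trusted senders bypassing the filter:

```python
from dataclasses import dataclass, field

# Invented category labels; a real system would score content with classifiers.
CATEGORIES = ("profanity", "violence", "insults")

@dataclass
class UserPrefs:
    # Per-category tolerance: 0.0 = hide anything flagged, 1.0 = show everything.
    tolerance: dict = field(default_factory=lambda: {c: 1.0 for c in CATEGORIES})
    trusted: set = field(default_factory=set)  # e.g. a spouse bypasses filters

def visible(viewer: UserPrefs, author: str, scores: dict) -> bool:
    """Decide visibility per viewer instead of deleting content globally."""
    if author in viewer.trusted:
        return True
    return all(scores.get(c, 0.0) <= viewer.tolerance[c] for c in CATEGORIES)

thick_skinned = UserPrefs()  # defaults: show everything
sensitive = UserPrefs(tolerance={"profanity": 0.2, "violence": 0.2, "insults": 0.0})
sensitive.trusted.add("spouse")

msg = {"profanity": 0.9, "violence": 0.1, "insults": 0.8}
print(visible(thick_skinned, "rando", msg))   # True  -- their choice
print(visible(sensitive, "rando", msg))       # False -- same message, filtered
print(visible(sensitive, "spouse", msg))      # True  -- trusted sender bypass
```

The same message renders differently for different viewers, and each viewer's tolerances could be displayed as the "content rating" the comment describes, so others know what will and won't get through.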
The most annoying part of this document is the watermark. It seems like a cogent summary of different ideas in the space - and doesn't advocate for "good censorship" (I think the title of the deck is provocative and unhelpful).
Here in 2018, I'm not that worried about Google's position as world's information arbiter and how it chooses to censor content. But I do worry a bit, as the path to hell is paved with good intentions. How will this evolve over the decades to come?
Where does the line between "hate speech" and legitimate criticism get drawn? Criticism is often crude. Humor is often crude. Sometimes humor, even crude humor, cuts to the core of an issue better than any intellectual discussion or essay could possibly yield. The U.S. itself has a long tradition of pointed, sarcastic political cartoons, as an example.
We need not reach back too far in memory to find an example of the grey area between "hate speech" and "free speech": the Danish cartoonist Kurt Westergaard and his infamous Muhammad bomb cartoon.
I think Google and other social media giants will find themselves in an impossible situation, if they haven't already. To be a good censor is to declare the "rightness" and "wrongness" of content in a consistent manner. However, in order to do so, you have to stake a position.
However, these companies sprawl too far and too wide to stake a position without alienating huge swaths of the population. And without making it all too easy for factions to believe that their side is being discriminated against.
The presentation mentions "global inconsistency" as a problem, but I disagree. Communities have different community standards, so they should have different moderation.
As an example, it is rather unfortunate that American obscenity standard is enforced against the world at large.
Censorship alone is not evil. The problem is, once you have infrastructure in place to make it easy, it will be abused. And if that infrastructure is centralized, the stakes for abuse are much higher. It's the same old problem of "who regulates the regulators". Rather than the issue being about "censorship bad / free speech good", I think the heart of it is: There's simply a lack of confidence that tech giants like Google are part of a larger system with checks and balances, and they appear to be making unilateral decisions behind closed doors.
It would be good for them to be more honest and transparent, and this leak is a nice step in that direction :-)
But to be serious, something like "We take these positions and stand behind these values and we're proud of it" would seem better than claiming they are a neutral platform that welcomes all points of view and lets users create and share whatever content.
This didn't even have to be a leaked document. They should have posted it in their "about" section right on the front page. If anyone doesn't like it they can go and make their own Google and share content there instead.
There was the leaked company meeting video after the 2016 elections with people crying and saying "we lost" and then users are supposed to believe those executives will turn around, wipe their tears, walk back to their desks and be unbiased when it comes to moderating news, search results, Youtube videos, charities they sponsor, etc? That's probably unrealistic... So why not drop the pretense and come out and be proud of what they support. Nobody will be surprised and many will welcome it including most of their employees.
I think there is a massive market opportunity here, and all the competitor has to do is build Google from 5-10 years ago - when you could actually still get decent results for your search, warts and all - as opposed to the wildly unrepresentative kindergarten/pollyanna picture of the internet they currently return.
Has anyone else ever wondered how some of the numbers about the internet compare to the real world? Like:
* "2.6 million tweets contained anti-Semitic speech during the US presidential election" - how many conversations were there, in the real world, about the same topic that contained anti-Semitic speech?
* "26% of American users are victims of internet trolling" - how many are victims of trolling in the real world? Or in schools?
* "40% of internet users have been harassed online" - how many people have been harassed at workplaces? Bars?
* Governments under cyber attack? How many of them are targets of espionage? Or how many companies are targets of espionage?
I'm not saying that these are not bad things, nor that those shouldn't be addressed - I'm just wondering if these things are as bad as they might seem to be, when compared to "the real world".
Editing is not the same as censorship; that's important to understand.
A whole lot of platforms could do with the conclusion.
Be consistent, be responsive and be transparent.
Who wouldn't love Google to be more transparent?
Yes, millions of refugees is obviously not the consequence at all.
Probably not the spin that Breitbart was hoping for.
Most of the policy it advocates is basically being consistent and open about how Google is handling issues, and how censorship is being applied.
The title is click-baity, but that's not so bad.