jon37's comments
jon37 | 2 years ago | on: Sora: Creating video from text
jon37 | 3 years ago | on: Ask HN: How many are switching to Mastodon?
Not all censorship is de jure.
jon37 | 3 years ago | on: The endgames of bad faith communication
I would argue that nearly all advertisements fit this description. The field of advertising has achieved a massive technological leap over the past few decades.
jon37 | 4 years ago | on: Jack Dorsey says ‘hyperinflation’ will happen soon in the U.S. and the world
jon37 | 4 years ago | on: Universities have formed a company that looks a lot like a patent troll
In general I agree with the EFF that this consortium, and software patents in general, are harmful. But this change to the headline seems like an editorial twist; it changes the framing of the story.
jon37 | 4 years ago | on: Once a bastion of free speech, the ACLU faces an identity crisis
jon37 | 5 years ago | on: Removing Holocaust Denial Content
As long as Facebook continues to recommend and target user-generated content based on engagement metrics, it will reward the sort of engagement generated by vicious deceptions such as Holocaust denial. It is engineered to handle any post, including expressions of hate, by finding the most receptive audience possible for that post.
Rather than targeting a narrow swathe of Definitely Terrible content to censor, Facebook should target a wide swathe and, rather than removing it, exclude it from the algorithmic feed, group-recommendation input, etc. If my friend posts something hateful and I see it, I can respond and push back; if they post something hateful and only their friends who agree see it, and they also get a dozen similarly hateful echo chambers in their group recommendations, that's far more destructive.
jon37 | 5 years ago | on: Additional steps we're taking ahead of the 2020 US Election
Automated recommendations of a human-curated set of content - e.g. Netflix recommendations for its suite of programming - are much less objectionable, because they can't amplify anything the organization has not intentionally decided to present. It's the combination of UGC and ML recommendations that presents problems.
jon37 | 5 years ago | on: Additional steps we're taking ahead of the 2020 US Election
I think a more nuanced and useful way to look at things is to think of Twitter as an amplification machine rather than a speech machine. I can say what I want out loud, I can write whatever letters I want, I can make my own website if I want, etc., but putting it on Twitter causes Twitter to amplify it. Many of these announced changes pertain to what Twitter chooses to amplify - and how - rather than what it permits people to say. (As far as I can tell, the only tweets they are actually removing are those that call for violence, a standard for censorship that seems quite reasonable.)
If we think in terms of how and when to amplify speech, rather than trying to figure out what kind of speech to censor, we can hit upon more workable improvements. Twitter's proposals here, under that framing, are a mixed bag.
Twitter provides several ways to amplify posts - some of which are intentional on the part of users, some not. For example, if I follow a person, I'm telling Twitter to show all that person's posts in my feed. If I reply to a tweet, I'm telling Twitter to show my post to that person in their notifications, and also to show it to other people who engage with it. If I quote-retweet a tweet, I'm telling Twitter to show it to everyone who follows me, alongside my commentary. And so on.
On the other hand, if I like a post, or engage with it in any way, I'm not telling Twitter to show it to anyone - but my Like may cause it to recommend the post to others, sort it upward in the algorithmic timeline, etc. This unintentional amplification can have unintended consequences, because the system cannot tell whether engagement is driven by a post's positive or negative characteristics.
Quote-retweets are also rife with unintended consequences. If someone "dunks" on a post by quote-retweeting it with criticism or mockery, they're betting that their comment will lower the status of the person they are quoting or persuade people the post is false. But the folks reading their post may not agree - and the original post might have been a bad-faith attempt at distraction, which a dunk then amplifies. Alternatively, if a popular account dunks on a much less popular account, it can (sometimes intentionally, sometimes not) trigger a wave of hostility and harassment.
So I like parts of Twitter's changes here - they have the right to try to amplify true information more than false information, and removing flagged posts from recommendations will do that. Removing content from non-followed accounts out of the algorithmic timeline is also positive - it reduces unintentional amplification and puts more control in the hands of users. But their encouragement of the quote-retweet is concerning. They don't seem to realize how effective a weapon it can be.
I would argue that any automated recommendation of user-generated content needs to be carefully controlled, if not abolished altogether. Recommendation systems cannot distinguish between content with high engagement due to quality and content with high engagement due to emotionally manipulative dishonesty or other negative factors. And motivated (or bigoted) political actors, who care only about "the most effective way to attack / promote X" rather than about arriving at the most truthful position, can probe and manipulate those recommendation systems far more effectively than people engaging with nuance and in good faith.
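To make the point concrete, here's a toy sketch (my own illustration, not any platform's actual algorithm) of an engagement-weighted ranker. The post texts, counts, and weights are all made up; the point is only that the metric carries no valence, so an angry reply counts the same as an appreciative one:

```python
# Hypothetical posts with raw engagement counts (illustrative data only).
posts = [
    {"text": "careful, nuanced analysis", "likes": 40, "replies": 5, "shares": 3},
    {"text": "outrage-bait falsehood", "likes": 90, "replies": 300, "shares": 120},
]

def engagement_score(post):
    # Replies and shares weigh heavily (assumed weights), and the score
    # has no idea WHY people engaged -- outrage and approval look identical.
    return post["likes"] + 2 * post["replies"] + 3 * post["shares"]

# Rank by engagement, highest first.
ranked = sorted(posts, key=engagement_score, reverse=True)
print(ranked[0]["text"])  # -> outrage-bait falsehood
```

The falsehood wins the ranking precisely because it provoked more replies and shares, which is the blindness the paragraph above describes.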
This "situation where lies so easily go viral" seems to me to have intensified around 2014-2015, when Twitter introduced the quote-retweet and Facebook introduced the algorithmic timeline. I don't think "free speech" is the right framing for thinking about it. The recent phenomenon is not the existence of extremist political movements or medical misinformation, but rather their amplification.
jon37 | 6 years ago | on: CO2 and Climate Task Force (AQ-9) (1980) [pdf]
"At a 3% per annum growth rate of CO2, a 2.5℃ rise brings world economic growth to a halt in about 2025."
I wonder if attempts by the scientific community to persuade world leaders of the severity of this problem would have been more successful if this had been more emphasized, rather than inches of sea level rise, wildlife extinctions, effects on poor populations, etc. If there is one thing political and financial leaders understand, it is their own dependence on continued economic growth - and continued expectations of economic growth.
jon37 | 7 years ago | on: AresDB: Uber’s GPU-Powered Open-Source, Real-Time Analytics Engine