Maybe this was more of an intro/pitch to something I already support, so I wasn't quite the audience here.
But I feel that talking about the open social web without addressing the reasons current ones aren't popular/get blocked doesn't lead to much progress. Ultimately, big problems with an open social web include:
- moderation
- spam, which now includes scrapers bringing your site to a crawl
- good faith verification
- posting transparency
These are all hard problems, and they make me believe the future of a proper community lies more in charging a small premium. Even charging one dollar for life takes out 99% of spam and puts a cost on bad faith actors: if they're banned, they need another dollar to re-enter, which eases moderation needs. But charging money for anything online these days can cause a lot of friction.
In my opinion, both spam and moderation are only really a problem when content is curated (usually algorithmically). I don't need a moderator and don't worry about spam in my RSS reader, for example.
A simple chronological feed of content from feeds I chose to follow is enough. I do have to take on the challenge of finding new content sources, but for me that's a worthwhile tradeoff to not be inundated with spam and to not feel dependent on someone else to moderate what I see.
Having worked on the problem for years, I find decentralized social networking such a tar pit of privacy, security, and social problems that I can't get excited by it anymore. We are now clear on what the problems with mainstream social networking at scale are, and decentralization only seems to make them worse and more intractable.
I've also come to the conclusion that a tightly designed subscription service is the way to go. Cheap really can be better than "free" if done right.
I think moderation only works when individuals have the agency to choose for themselves what content/posts they see. Mastodon/fediverse sets a good example here - there are “general safety and theme” guards at the instance level, but whether you see “uspol” in your timeline or just posts of cat pics is entirely up to you.
Contrast this to the “medias” like Threads, Bluesky, etc. - moderation becomes impossible just because of the sheer scale of it all. Somehow everyone feels compelled to “correct someone who is wrong” or voice an opinion even when the context does not invite one. This is just a recipe for “perpetual engagement”, not an actual platform for social interaction (networking).
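The user-level agency half of this is mechanically simple. Here is a toy sketch of topic muting in that spirit (all names are illustrative, not Mastodon's actual filter API):

```python
# Toy version of user-chosen timeline filtering: the instance enforces
# baseline rules, but which topics reach your timeline is your call.
# Structure and field names are assumptions for illustration only.

def filter_timeline(posts, muted_tags):
    """Drop any post carrying a tag the user has muted (case-insensitive)."""
    muted = {t.lower() for t in muted_tags}
    return [p for p in posts
            if not muted & {t.lower() for t in p.get("tags", [])}]
```

The point is that the filter runs on the reader's side with the reader's own mute list; no central moderator decides whether "uspol" or cat pics appear.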
> Ultimately, big problems with an open social web include:
These two seem like the same problem:
> moderation
> spam
You need some way of distinguishing high quality from low quality posts. But we kind of already have that. Make likes public (what else are they even for?). Then show people posts from the people they follow or that the people they follow liked. Have a dislike button so that if you follow someone but always dislike the things they like, your client learns you don't want to see the things they like.
Now you don't see trash unless you follow people who like trash, and then whose fault is that?
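The ranking scheme described above is simple enough to sketch in a few lines. Everything here (function names, the dislike threshold) is a hypothetical client-side implementation, not any existing platform's API:

```python
# Sketch of the proposed feed: show posts authored by accounts you follow,
# or liked by them, and learn to drop likers whose taste you keep disliking.

def build_feed(posts, follows, likes, dislike_score, threshold=3):
    """posts: dicts with 'id' and 'author'.
    likes: public likes, mapping user -> set of liked post ids.
    dislike_score: mapping liker -> how often we disliked their likes."""
    feed = []
    for post in posts:
        if post["author"] in follows:
            feed.append(post)
            continue
        # Include posts liked by a followed account, unless we've
        # repeatedly disliked that account's likes.
        likers = [u for u in follows if post["id"] in likes.get(u, set())]
        if any(dislike_score.get(u, 0) < threshold for u in likers):
            feed.append(post)
    return feed

def record_dislike(post, follows, likes, dislike_score):
    # Penalize every followed account that liked the disliked post.
    for u in follows:
        if post["id"] in likes.get(u, set()):
            dislike_score[u] = dislike_score.get(u, 0) + 1
```

Posts from strangers that nobody you follow liked never enter the feed at all, which is where the spam filtering comes from.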
> which now includes scrapers bringing your site to a crawl
This is a completely independent problem from spam. It's also something decentralized networks are actually good at. If more devices are requesting some data then there are more sources of it. Let the bots get the data from each other. Track share ratios so high traffic nodes with bad ratios get banned for leeching and it's cheaper for them to get a cloud node somewhere with cheap bandwidth and actually upload than to buy residential proxies to fight bans.
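A share-ratio gate like that could look something like this toy tracker; the thresholds and structure are illustrative assumptions, not any real P2P protocol's accounting:

```python
# Toy share-ratio tracker for a P2P node: peers that download heavily but
# never upload get banned for leeching, while low-traffic peers are left alone.

class RatioTracker:
    def __init__(self, min_ratio=0.25, grace_bytes=10_000_000):
        self.min_ratio = min_ratio      # uploaded/downloaded floor
        self.grace_bytes = grace_bytes  # don't judge small peers
        self.stats = {}                 # peer -> (uploaded, downloaded)

    def record(self, peer, uploaded=0, downloaded=0):
        up, down = self.stats.get(peer, (0, 0))
        self.stats[peer] = (up + uploaded, down + downloaded)

    def is_banned(self, peer):
        up, down = self.stats.get(peer, (0, 0))
        if down <= self.grace_bytes:
            return False                # within the grace allowance
        return (up / down) < self.min_ratio
```

Under this rule a high-volume scraper's cheapest move really is to seed from a bandwidth-rich cloud node rather than burn residential proxies dodging bans.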
> good faith verification
> posting transparency
It's not clear what these are but they sound like kind of the same thing again and in particular they sound like elements in the authoritarian censorship toolbox which you don't actually need or want once you start showing people the posts they actually want to see instead of a bunch of spam from anons that nobody they follow likes.
A lot of tech folks hate government ID schemes, but I think MDL with some sort of pairwise pseudonyms could help with spam and verification.
It would let you identify users uniquely, but without revealing too much sensitive information. It would let you verify things like "This user has a Michigan driver's license, and they have an ID 1234, which is unique to my system and not linkable to any other place they use that ID."
If you ban that user, they wouldn't be able to use that ID again with you.
The alternative is that we continue to let unelected private operators like Cloudflare "solve" this problem.
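One common way to get the "unique to my system, not linkable elsewhere" property is a pairwise pseudonym derived from the issuer's secret and the relying party's identifier. The sketch below illustrates that property only; it is not how actual mDL (ISO 18013-5) credentials are implemented, and the key handling is a deliberate simplification:

```python
import hmac
import hashlib

# Hypothetical sketch: the ID issuer (e.g. a state DMV) holds a secret key
# and derives a stable pseudonym per (user, relying party) pair. Each site
# sees a consistent ID it can ban, but two sites can't link their IDs.

def pairwise_pseudonym(issuer_key: bytes, user_id: str, relying_party: str) -> str:
    msg = f"{user_id}|{relying_party}".encode()
    return hmac.new(issuer_key, msg, hashlib.sha256).hexdigest()[:16]
```

The same user always presents the same ID to the same site (so bans stick), while the IDs shown to two different sites are unlinkable as long as the issuer's key stays secret. A real deployment would use verifiable credentials instead of trusting one HMAC service.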
Charging money, I suspect, feels more like a solution to people who would otherwise prefer something be free than an actual solution.
The one principal benefit it has is providing a revenue stream to fund other things - but that's kind of it, because the rest of the argument assumes there are no monetary thresholds in the system, i.e. that spammers and botnets rely on volume and can thus be priced out of the market.
And to an extent it's true - but there's a threshold problem. How many accounts do you need to pay for in order to seize narrative control of a space, or to effect a takeover by seizing positions of power like moderation positions?
People, I think, like to think "hundreds or thousands" - but even that is well within reach of motivated threat actors (consider bot farms, which are literally racks of stripped-down actual smartphones being puppeted - not a trivial investment). The other side of it is that the real number is probably closer to about 10 relatively active personalities.
Those are important reasons, but there are other reasons as well, such as concentration of market power in a few companies, which allows those companies to erect barriers to entry and shape law in ways that benefit themselves, as well as simply creating network effects that make it hard for new social-web projects to establish a foothold.
> - spam, which now includes scrapers bringing your site to a crawl
What do you mean with "now"? If you've ever been in a competitive industry, you're already used to the random DDoS, and if you've published a moderately successful website, you've dealt with misbehaving scrapers/user-agents too, like the ones that get stuck and keep requesting 100 random URLs per second for weeks.
I'm guessing you're alluding to AI scrapers, but are they really that different from what we've already learnt to deal with on the public internet?
It is interesting how it became a norm to just blindly assume the more decentralized something is the better it is. There isn’t any evidence this is true. Reality isn’t so reducible.
A small cost to enter just means capital still controls the narrative. As long as we can't even stop physical bullying in schools, we will not be able to have civil social media. Start in kindergarten and fix the problems with the next generation, or it will just get worse.
To check out other FediForum keynotes, many demos showing off innovative open social web software, and notes from the FediForum unconference sessions, go to https://fediforum.org (disclaimer: FediForum co-organizer here)
Interesting discussion and article. My work is mostly in the decentralized model of thinking. Many of the arguments assume that some degree of control is necessary at the scale we deal with. The difficulty of true decentralization reminds me of what the founders of our republic wrestled with: avoiding both the over-control of old hierarchies and governing structures (centralized power) and the chaos of anarchy (decentralized power). Their answer was a framework that limited power but preserved individual freedom. Perfection? no. But it is a very interesting experiment to be a part of.
We seem to be facing a similar balance today, only in digital context and on a global scale. Human nature being what it is, no system will ever be perfect. What we can do is build safeguards (structures that empower users and raise the cost of abuse) so that bad actors must find diminishing returns for bad behavior.
While I tend to support there being open social alternatives, I haven’t really seen the people behind them talk about the most important aspect: how will you attract and retain users? There has to be more to the value proposition than “it’s open”. The vast majority of users simply do not care about this. They want to be where their friends, family, and favorite content creators are. They want innovation in both content and format. Until the well intentioned people behind these various open web platforms and non-platforms internalize and act on these realities, the whole enterprise is doomed to be a niche movement that will eventually go out with a whimper.
The open social web's decentralization depends just as much on its protocols and communities as on the hosting services those communities run on.
It's way easier to censor a decentralized social network if the majority of its nodes run on AWS, GCP and Azure, for instance.
What'd be great is if we could run these networks primarily from our personal devices (i.e. true edge computing), but the more the computing's pushed to the edge the harder it becomes to implement technically and socially.
nostr can do this. Relays are lightweight enough to run on Android devices; Citrine even ships one with a nice UI. It's not p2p or anything, but it works well enough to preserve your own note history, and there are plans to extend its functionality beyond that.
I've never really got social media in any of its forms. I use messaging apps to stay in contact with people I like, but that's about it.
I skimmed this article, and I still don't get it. I think group chats cover most of what the author is talking about, public and private ones. But this might be my lack of imagination. I feel the article, and by extension the talk, could have been a lot shorter.
But you're posting here, in social media, no? So you sought out something here that a group chat wouldn't give.
Most of the article here is focused on making sure any social media (be it chats, a public forum, or email) isn't hijacked by vested powers who want to spread propaganda or drown the user in ads. One approach to that, the focus of this article, is decentralization, which gives a user the ability to take their ball and go home.
Of course, it's futile if the user doesn't care about wielding that power.
Group chats are where real people socialise with their actual friends now. Social media is where people consume infinite slop feeds for entertainment. The days of people posting their weekend on Facebook are long gone.
"The 19th reports at the intersection of gender, politics, and policy - a much-needed inclusive newsroom..." This isn't a problem with the distribution technology. This is a problem with the message, and its narrow niche.
The site's marketing is geared towards collecting donations in the US$20,000 and up range [1].
That doesn't scale. They don't have viewer counts big enough to make it on payments in the $10/year range. So that doesn't scale either.
The back-end technology of this thing has zero bearing on those problems.
I believe that the more populist layer of the www became social media apps. Hosted LLMs (Claude, ChatGPT, etc.) are going to become the popular source of information and therefore narrative. What we must remember is that we should retain control of our thoughts, and be aware of how we can share them without financially interested parties claiming rights to their use or abuse. I am trying to solve some of these problems with NoteSub App - https://apps.apple.com/gb/app/notesub/id6742334239 - but have yet to overcome the real issue of how we can stop the middleman keeping the loop closed with him in between.
I believe the Corporation for Public Broadcasting should provide funding for local member stations to run their own nodes on fediverse sites, and then federate those nodes together.
> What specific pain point are you solving that keeps people on WhatsApp despite the surveillance risk, or on X despite the white supremacy?
Why wouldn't a genuinely open social web allow people to communicate content that Ben Werdmuller thinks constitutes white supremacy, just as one can on X? Ideas and opinions that Ben Werdmuller (and people with similar activist politics to him) think constitute white supremacy are very popular among huge segments of the English-speaking public, and if it's even possible for some moderator with politics like Werdmuller to prevent these messages from being promulgated (as was the case at Twitter until Musk bought it in 2022 and fired all the Trust and Safety people with politics similar to Werdmuller's), then it is not meaningfully open. If this is not possible, then would people with Werdmuller's politics still want to use an open social web, rather than a closed social web that lets moderators attempt to suppress content they deem white supremacist?
> As I was writing this talk, an entire apartment building in Chicago was raided. Adults were separated into trucks based on race, regardless of their citizenship status. Children were zip tied to each other.
> And we are at the foothills of this. Every week, it ratchets up. Every week, there’s something new. Every week, there’s a new restrictive social media policy or a news outlet disappears, removing our ability to accurately learn about what’s happening around us.
The reaction to the raid of that apartment building in Chicago on many social media platforms was the specific meme-phrase "this is what I voted for", and indeed Donald Trump openly ran on doing this, and won the US presidential election. What prevents someone from using open social media tech to call for going harder on deportations, or to spread news stories about violent crimes and fraud committed by immigrants? If anything can prevent this, how can the platform be said to be actually open?
Why was this chosen to be a keynote? This talk seems to not care about open social media, but rather that existing social media sites don't follow the author's political agenda. Having a keynote trying to rally people into building sites that support a niche political agenda that the general public doesn't agree with doesn't accomplish the goals of making open social media more viable. This along with equating things with "Nazis" just further alienates people.
I read this comment, went back to the article, and then came back to this comment. I have no idea what niche political agenda you're talking about- the message of the article is basically "solve problems your users are actually facing, not problems you think they have".
You can apply the concepts the author talks about to _literally_ any group that would make use of social media.
---
We all know about Twitter acquirer Elon Musk, who bent the platform to fit his political worldview. But he’s not alone.
Here’s Microsoft CEO Satya Nadella, owner of LinkedIn, who contributed a million dollars to Trump’s inauguration fund.
Here’s Mark Zuckerberg, who owns Threads, Facebook, Instagram, and WhatsApp, who said that he feels optimistic about the new administration’s agenda.
And here’s Larry Ellison, who will control TikTok in the US, who was a major investor in Trump, and who one advisor called, in a WIRED interview, the shadow President of the United States.
Social media is very quickly becoming aligned with a state that in itself is becoming increasingly authoritarian.
---
This was the real why. When control amasses to the few we end up in a place where there is a dissonance between what we perceive to be true and what is actually true. The voice of the dictator will say one thing but the people's lived experience will say something else. I don't think mastodon or Bluesky or even Jack Dorsey's new project Bitchat solves any of this. It goes much deeper. It is ideological. It is values driven. The outcome is ultimately decided by the motives of the people who start it or run it. I just don't think any western driven values can be the basis of a new platform because a large majority of the world are not from the west. For better or worse, you have the platforms of the west. They are US centric and they will dominate. Anything grassroots and fundamentally opposed to that will not come from the west. It must come authentically from those who need it.
80% of the time, social media is just increasingly bad slop created to generate clicks/views/engagement. 10% is people screaming into the void; the last 10% might be valuable content. The Internet was better when we all hung out in big and small web forums and group chats. To this day, the most interesting conversations I see and participate in happen in forums and in group chats.
I notice that on Discord, even in "servers" (which aren't servers) that are allegedly about technical topics, it seems like at least 3 out of every 4 messages are slop - low effort irrelevant memes etc. For example there's a cat gif labeled "repost this cat after a substantial delay" and people just post that for no reason, then other people reply with the same gif. And there's no algorithm in Discord - it's an IRC-style chatroom platform - it's real humans posting and engaging with slop because it triggers dopamine or something, somehow.
The name killed it. If you know what it means, it doesn't bear any relevance to social media. If you don't know what it means, it sounds like a gastric disorder.
> Social media is very quickly becoming aligned with a state that in itself is becoming increasingly authoritarian.
Did this guy complain back when pre-Musk Twitter was fully in bed with the state? Or was he okay with it because that authoritarian relationship was on the right side of history?
1) periodjet guy seems to think "social media" is only twitter.
2) he also has no understanding of what "authoritarian" means. it's not just a word to be dismissive of things one doesn't like.
cue links to the nothing burger that is "Twitter Files" lol
Social media is simply an extension from cybernetics to the principles of cog-sci as a "protocol" network where status and control are the primary forces mediated. This is irrefutable - the web was built as an extension of the cog-sci parameters of information as control.
Social media can't be saved, it can only be revolutionary as a development arena for a new form of language.
"The subject of integration was socialization; the subject of coordination was communication. Both were part of the theme of control...Cybernetics dispensed with the need for biological organisms, it as the parent to cognitive science, where the social is theorized strictly in terms of the exchange of information. Receivers, senses of signs need to be known in terms of channels, capacities, error rates, frequencies and so forth." Haraway Primate Visions.
I don't understand how technologists and coders can be this naive to the ramifications of electronically externalizing signals which start as arbitrary in person, and then clearly spiral out of control once accelerated and cut-off from the initial conditions.
Social media relies on our dead, arbitrary signaling system - language - which, once accelerated, becomes a cybernetic/cog-sci control network no matter how it's operated. Language is about control, status, and bias before it's an attempt to communicate information. It's doomed as an external system of arbitrary symbols.
> - spam, which now includes scrapers bringing your site to a crawl
> - good faith verification
> - posting transparency
And we have to think about how to hit these targets while:
- respecting individual sovereignty
- respecting privacy
- meeting any other obligations or responsibilities within reason
and of course, it must be EASY and dead simple to use.
It's doable; we've done far more impossible-seeming things just in the last 30 years, so it's just a matter of willpower now.
And then imagine if you could verify you'd paid it in a completely P2P, decentralized fashion.
I'm not a crypto fan, but I'd appreciate a message graph where high signal messages "burned" or "donated money" to be flagged for attention.
I'd also like it if my attention were paid for by those wishing to have it, but that's a separate problem.
https://github.com/greenart7c3/Citrine
[1] https://19thnews.org/sponsorship/
Elgg has an ActivityPub plugin now: https://elgg.org/plugins/3330966