top | item 47120149

CodeCompost | 8 days ago

I briefly hosted a Lemmy server on my machine just to see how it works, and my god, never again. The pictures that were automatically synced to my machine not only made me lose faith in humanity, but also made me shut down and wipe the machine immediately, because I was terrified that some of those images would land me serious jail time.

So if you choose to host something like this, be very aware that there are some sick, sick people out there.

rapnie|7 days ago

This has nothing to do with Lemmy specifically; it applies to any social media that is open to the general public. Ask the moderation teams at Facebook what they encounter day to day. Many of these poor folks work in shitty conditions and burn out, leaving with PTSD.

If you spin up a fediverse app like Lemmy, you spin up a platform. It is platform software, and you take on the responsibility, but also the opportunity, to set it up well. Curate the content on your instance. Lemmy and other fediverse apps come with a set of moderation tools for handling this, and there is a strong focus in the developer community on continually improving them.

WD-42|7 days ago

This is a huge ask. Most of us are just nerds who find the technical aspects interesting; it's a hobby for our spare time.

ehnto|7 days ago

It's a good time to mention Safe Harbor laws, because not every country has them, and so not everyone can host something like this without taking on personal liability for what travels through or rests on the "platform".

pousada|7 days ago

> Curate the content in your instance

How do I do that without getting PTSD as well? Or is there some magic method that works without me constantly looking at CSAM and gore?
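There is a partial answer here: automated matching against shared databases of hashes of known material, so that matching uploads are blocked and reported without any human viewing them. A toy sketch of the idea, with everything illustrative (the hash list and threshold are made up; real deployments use perceptual hashes such as PhotoDNA or PDQ with lists maintained by organizations like NCMEC or the IWF, not plain SHA-256, so near-duplicates also match):

```python
import hashlib

# Hypothetical list of hex digests of known-bad files, synced from a
# hash-sharing program. The single entry below is the SHA-256 of the
# empty byte string, standing in for a flagged file in this toy example.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def should_quarantine(upload: bytes) -> bool:
    """Return True if the upload matches a known-bad hash.

    Matching uploads can be rejected (and reported) automatically,
    so no moderator ever has to look at them.
    """
    digest = hashlib.sha256(upload).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

This only catches already-known material; novel content still needs classifiers or human review, which is why the PTSD problem doesn't fully go away.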

balamatom|7 days ago

What's fucked up is that entities like Meta and OpenAI are likely to already have tons of "other people's snuff" in their datastores. Yet they're not the ones at risk of being swatted; individual rebroadcasters are.

Even though you want nothing to do with those images in the first place, while Big Social is intentionally keeping the stuff around "for science", yeah right.

Consider how some Muslim cultures have sidestepped this issue by banning representational imagery altogether; while the Russians just sent telegrams.

mghackerlady|7 days ago

As much as I try to avoid AI hype, this truly seems like one of the best uses of image recognition tech

lmf4lol|7 days ago

how do you pay for that?

damnesian|7 days ago

This reality alone has made me severely curtail my own social media use and reach. I really only care about a handful of forums attended by (at least... seemingly) people who actually care to think, or have some basic intact humanity and want to converse.

So despite the fact that I am very interested in federated social media as a way to keep my intellectual property out of the cash flow of businesses whose actions are much louder than their pretty sounds in court, it's still one-shot-and-out digital graffiti. I don't think it's worth it.

antonyh|7 days ago

This was why I canned a potentially useful image project a long time ago that could resize and manipulate images from any URL to optimise them for mobile use. It's also why I've not dipped my toes into the murky pool of self-hosting any of this, instead using services moderated by someone else. It's just too toxic to handle and too dangerous to my career, and I don't know how I'd contain it beyond never hosting ANY image data and making everything text-only.

scotty79|7 days ago

I think the only safe way to host social services is to ensure that any free-form content touching your servers is encrypted with a key you don't have.

pjc50|8 days ago

...ah, yes, a "completely unmoderated free-speech system that supports images" does mean "may contain CSAM". Heck, even Instagram had a horrific "mirror world" incident where the moderation bit got flipped on a number of images that ordinary users were then exposed to.

I wouldn't run any kind of publishing system for anons myself. It's potentially valuable for an actual social group though.

sharperguy|7 days ago

I've been hearing talk for years about a "web of trust" system that could filter spam simply by having users vouch for each other and filtering out anyone not vouched for. However, I haven't seen a functioning system based on this model yet.

Personally, I'd love to add something like the old Slashdot comment model, where people would mark content as "helpful", "funny", "insightful", "controversial", etc., and based on how much you trust the people labelling it, content could be filtered out or brought forward.
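The combination described above can be sketched as a trust-weighted score: each label on a piece of content is weighted by how much the reader trusts the person who applied it. All names, weights, and the visibility threshold below are made up for illustration:

```python
# Toy sketch of trust-weighted moderation. Unvouched voters get
# trust 0.0, so their labels contribute nothing (the "web of trust"
# filter), while trusted voters' labels move the score.

def score(labels, trust, weights):
    """labels:  list of (voter, label) pairs applied to one comment.
    trust:   per-voter trust from this reader's perspective (0..1).
    weights: per-label value, positive ("insightful") or negative ("spam")."""
    return sum(trust.get(voter, 0.0) * weights.get(label, 0.0)
               for voter, label in labels)

trust = {"alice": 0.9, "bob": 0.2, "mallory": 0.0}   # mallory is unvouched
weights = {"insightful": 1.0, "funny": 0.5, "spam": -2.0}

labels = [("alice", "insightful"), ("bob", "funny"), ("mallory", "spam")]
s = score(labels, trust, weights)   # 0.9*1.0 + 0.2*0.5 + 0.0*(-2.0) = 1.0
visible = s >= 0.5                  # threshold chosen by the reader
```

The key design choice is that the score is computed per reader from their own trust map, not globally by the server, so there is no single moderation queue to capture or overwhelm.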

balamatom|8 days ago

>I wouldn't run any kind of publishing system for anons myself. It's potentially valuable for an actual social group though.

That's pretty much how it works on the federated Internet.

There are large open-access services run by communities with sufficient moderation capacity (enough not to get themselves nuked, anyway). Turns out many "impossibilities" are trivial when you're not trying to abuse a billion active users at the same time through the power of their own (distr)actions, but are instead simply trying to run a board for messages.

And then there are plenty of private servers, where publishing is either by invite or does not have outsized reach in the first place. Those also defederate from each other a lot, and many don't show you stuff from the big public instances at all.

There have always been "bad people out there" (or at least that's what the "good people in there" have been broadcasting for about as long as I can remember). The design/engineering problem here is how to figure out and deploy a relational dynamic that keeps hostiles at a safe distance.

The practical problem stems from a technicality of how federation currently works: to display content from other services to your users, you have to mirror it on your storage.

This mode of federating hazardous data is a real problem, and it's also exactly what some cheap-ass subcontractor of the current social media incumbents would be doing, if said incumbents had the amount of good sense that they've demonstrated having (see e.g. https://erinkissane.com/meta-in-myanmar-full-series). Yeah, cuz... it's war out there.

I don't expect things to get better until everyone's phone is their personal server and cryptographic root of trust, and this is exposed to non-technicals in a way which neither scares them nor screws them over. Once civilization accomplishes that, I reckon things will be fine once again.

EDIT: "Heck, even Instagram had a horrific "mirror world" incident where the moderation bit got flipped on a number of images which ordinary users were exposed to." I don't think I've heard about this before, but I must admit I find it completely hilarious - besides obviously sad and horrifying.

EasyMark|7 days ago

Yep, text is bad enough; screw hosting videos and images from randos on the web. I would 100% host a forum or similar if the honor system worked, but it only takes a couple of gooner CSAM deviants to ruin your entire life with something like that, and you wouldn't know what happened until the government showed up on your doorstep.

ajsnigrutin|7 days ago

I mean... Reddit also defended that.

https://www.bbc.com/news/technology-19975375

> Social news site Reddit will not censor "distasteful" sections of its website, its chief executive has said.

jailbait, upskirt, etc. were all huge subreddits back then.