top | item 40724552

kn0where|1 year ago

Genuine question: is there any technological/scaling problem with Mastodon that actually needs solving, and that the AT protocol would fix? What reason is there to split different parts of their backend tech stack between different “operators” with different policies? Does this separation of concerns improve the performance (or other desirable attribute) of the decentralized network versus Mastodon’s approach?

jrm4|1 year ago

I asked this same question to someone on bluesky who worked for it, and I didn't get a remotely coherent answer. If anything, it feels like the "identity portability" actually sets you up for the exact same kind of centralization we're trying to get away from.

While I get that "lots of servers and easy-to-create-and-destroy accounts" may not be the best for adoption/popularity, it definitely feels like the smartest way to go about it in the long run.

criddell|1 year ago

The one I hear about most often is account portability. Your Mastodon account lives on a specific server, whereas an AT protocol account is owned by the user: you can move it wherever you want. That helps protect you from being banned, or from losing your account if your server shuts down.

Somebody archived a thread that discusses some of the motivation for AT:

https://fedimeister.onyxbits.de/topics/thread-archive-about-...
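Mechanically, the portability comes from identity being a DID (decentralized identifier) rather than a server-scoped name: the DID document points at the current hosting server (PDS), and moving servers rewrites the pointer, not the identity. A minimal Python sketch of that idea, with hypothetical values (the real DID documents are managed externally, e.g. by the PLC directory):

```python
# Sketch: the user's stable identifier is a DID; the DID document merely
# *points at* the current PDS. Migration updates the pointer only.
# Field names follow the DID document shape (alsoKnownAs, service),
# but this is an illustration, not a client implementation.

did_doc = {
    "id": "did:plc:examplexyz",                 # stable, user-owned identifier
    "alsoKnownAs": ["at://alice.example.com"],  # current handle
    "service": [{
        "id": "#atproto_pds",
        "type": "AtprotoPersonalDataServer",
        "serviceEndpoint": "https://old-pds.example",
    }],
}

def migrate(doc, new_pds):
    """Point the account at a new PDS; the DID itself never changes."""
    return {**doc,
            "service": [{**doc["service"][0], "serviceEndpoint": new_pds}]}

moved = migrate(did_doc, "https://new-pds.example")
assert moved["id"] == did_doc["id"]  # followers still resolve the same DID
```

The contrast with Mastodon is that `joebob@bluemastodon` has the server name baked into the identifier itself, so there is no pointer to rewrite.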

jrm4|1 year ago

This feels like a decent idea that will never actually work (or matter) for most people, kind of like self-signed PGP emails and such?

They're solving an unreal problem that can be solved by "human trust" in a Mastodon-like system. Like, hey everyone, I'm no longer joebob@bluemastodon, I'm joebob@purplemastodon.

abdullahkhalids|1 year ago

Isn't Mastodon supposed to not-scale? Basically, every server should be small enough that it can be reasonably moderated manually. If Mastodon scales to millions of users, we will be back to needing automated moderation methods, which have high false-positive and false-negative rates.

Mastodon would likely see much easier adoption if these install [1] instructions were replaced by `apt-get install mastodon`, and configuration [2] were done in a UI rather than in a text file.

Even that is too difficult. Hosting a mastodon instance should be as technically easy as starting a discord server.

[1] https://docs.joinmastodon.org/admin/install/

[2] https://docs.joinmastodon.org/admin/config/

tkellogg|1 year ago

There are two ways Mastodon can scale (or not):

1. Per-server

2. Network-wide

Moderation load is per-server scaling, mostly. I'd argue that if the load gets to be too much, moderators do less moderating and people decide to migrate to other servers. That's kind of a clean scaling strategy, tbh.

However, adding servers isn't a great story. There are n^2 network connections (worst case) that need to be maintained to service all subscriptions. That's definitely a scaling problem, although probably addressable via an architecture inspired by gossip protocols.
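A quick back-of-the-envelope for the n^2 point: in the worst case every instance federates with every other, so directed server-to-server links grow quadratically with instance count. The numbers below are illustrative, not measured:

```python
# Worst-case federation: a full mesh, where each instance pushes
# updates to every other instance. Directed links grow as n*(n-1).

def worst_case_links(n_instances: int) -> int:
    """Directed links in a full mesh of n instances."""
    return n_instances * (n_instances - 1)

for n in (10, 100, 1000):
    print(n, worst_case_links(n))
# 10 instances -> 90 links; 1000 instances -> 999,000 links
```

A gossip-style design would cap each instance's peer count and relay messages in a few hops instead of maintaining a full mesh.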

kstrauser|1 year ago

There's a certain amount of irreducible complexity. There are a lot of things to configure! For instance, what database server do I want to connect to? What mailserver to use for transactional messages, and how do I authenticate to it? Where do I store images? Which Redis do I point at?

Someone who wants to avoid all that can register with a hosted service like masto.host that does it all for them.
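Concretely, each of those questions maps to a setting in Mastodon's `.env.production` file; an abridged sketch (variable names follow Mastodon's documented configuration, values are placeholders):

```shell
# Abridged .env.production sketch -- each question above is a knob here.
LOCAL_DOMAIN=social.example.com

# Which database server do I connect to?
DB_HOST=localhost
DB_NAME=mastodon_production
DB_USER=mastodon
DB_PASS=changeme

# Which Redis do I point at?
REDIS_HOST=localhost
REDIS_PORT=6379

# What mailserver for transactional messages, and how do I authenticate?
SMTP_SERVER=smtp.example.com
SMTP_PORT=587
SMTP_LOGIN=notifications@example.com
SMTP_PASSWORD=changeme

# Where do I store images?
S3_ENABLED=true
S3_BUCKET=mastodon-media
```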

josephg|1 year ago

The feature I’ve heard the Bluesky devs talk a lot about is aggregating likes across multiple servers. And search. As far as I can tell, they want Bluesky to be as fast and scalable as Twitter while being decentralised.

If a celebrity with millions of followers uses Bluesky, all users across all servers should be able to follow them and like their tweets.
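The aggregation piece is what AT protocol calls an "appview": a service that consumes events from many independent hosting servers and merges them into one global view, so a post's like count isn't siloed per server. A toy Python sketch of that merging step (the event shape and names here are hypothetical, not the real firehose format):

```python
# Toy appview: merge like events from several independent PDS streams
# into one global count per post. Event format is invented for
# illustration; the real protocol streams signed repository commits.

from collections import Counter

def aggregate_likes(event_streams):
    """Merge like events from several server streams into global counts."""
    counts = Counter()
    for stream in event_streams:               # one stream per hosting server
        for event in stream:
            if event["type"] == "like":
                counts[event["subject"]] += 1  # keyed by the liked post's URI
    return counts

pds_a = [{"type": "like", "subject": "at://celeb/post/1"}]
pds_b = [{"type": "like", "subject": "at://celeb/post/1"},
         {"type": "like", "subject": "at://celeb/post/2"}]
print(aggregate_likes([pds_a, pds_b]))
# Counter({'at://celeb/post/1': 2, 'at://celeb/post/2': 1})
```

The cost of this design is what the next comment gets at: the aggregator has to index the whole network to give everyone correct counts.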

jauntywundrkind|1 year ago

Bryan Newbold has some posts contrasting the design decisions of AP vs Bluesky, in reply to an EFF article:

> a design tradeoff between atproto appviews and mastodon instances is that atproto appviews are generally "big world" and index the entire network, so the cost to run the indexing component scales with the size of network. (there is also read-volume scaling, separately)

> in activitypub, the scaling needs of an instance roughly go with the number of users, and the volume of interactions they have with entire network (read and write). so instances can start small (fewer resources)

> the point I was getting at is that actually you can have an atproto appview which doesn't index the entire network, just PDS instance of interest.

> this has some "cheap and totally independent" benefits, but trades off against not being able to see and interact with entire network

> aka, "full" appviews are more resource-intensive to get started, but have huge value.

> (also, the protocol is designed to make it way way cheaper than web crawling or other trad indexing, both in terms of compute resources and in terms of admin/ops labor)

https://bsky.app/profile/bnewbold.net/post/3kva4vxi45s2q

Bluesky's design as a distributed distribution mechanism has yet to be tested in any major way, but bulk transmission seems more built-in, whereas AP was crafted more around person-to-person relationships.

I'm less familiar with Mastodon's own protocols. It does seem like keeping Mastodon's Sidekiq egress queues from flooding is a damned hard job for a lot of growing instances; at least it was ~2 years ago when I was active there.