
Linear sent me down a local-first rabbit hole

467 points | jcusch | 6 months ago | bytemash.net

218 comments


aboodman|6 months ago

> Using Zero is another option, it has many similarities to Electric, while also directly supporting mutations.

The core differentiator of Zero is actually query-driven sync. We apparently need to make this more clear.

You build your app out of queries. You don't have to decide or configure what to sync up front. You can sync as much, or as little as you want, just by deciding which queries to run.

If Zero does not have the data it needs on the client, queries automatically fall back to the server. That data is then synced and available for the next query.

This ends up being really useful for:

- Any reasonably sized app. You can't sync all data to the client.

- Fast startup. Most apps have publicly visible views that they want to load fast.

- Permissions. Zero doesn't require you to express your permissions in some separate system, you just use queries.

So the experience of using Zero is actually much closer to a reactive db, something like Convex or RethinkDB.

Except that it uses standard Postgres, and you also get the instant interactions of a sync engine.
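The query-driven fallback described above can be illustrated with a toy sketch. To be clear, this is not Zero's actual API, just the idea: queries hit a local cache first and fall back to the server, after which the data is available locally.

```typescript
// Illustrative sketch of query-driven sync (not Zero's real API):
// queries run against a local store and fall back to the server on a
// miss, after which the result set is available for subsequent queries.
type Row = { id: string; title: string };

class QueryCache {
  private local = new Map<string, Row[]>();

  constructor(private server: (query: string) => Row[]) {}

  run(query: string): { rows: Row[]; source: "local" | "server" } {
    const cached = this.local.get(query);
    if (cached) return { rows: cached, source: "local" };
    // Cache miss: fall back to the server, then keep the data locally.
    const rows = this.server(query);
    this.local.set(query, rows);
    return { rows, source: "server" };
  }
}
```

The first run of a query pays the network cost; repeats of the same query resolve locally, which is what makes "sync as much or as little as you want" fall out of which queries you run.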

Cassandra99|6 months ago

I developed an open-source task management software based on CRDT with a local-first approach. The motivation was that I primarily manage personal tasks without needing collaboration features, and tools like Linear are overly complex for my use case.

This architecture offers several advantages:

1. Data is stored locally, resulting in extremely fast software response times

2. Supports convenient full database export and import

3. Server-side logic is lightweight, requiring minimal performance overhead and development complexity, with all business logic implemented on the client

4. Simplified feature development, requiring only local logic operations

There are also some limitations:

1. Only suitable for text data storage; object storage services are recommended for images and large files

2. Synchronization-related code requires extra caution in development, as bugs could have serious consequences

3. Implementing collaborative features with end-to-end encryption is relatively complex

The technical architecture is designed as follows:

1. Built on the Loro CRDT open-source library, allowing me to focus on business logic development

2. Data processing flow: User operations trigger CRDT model updates, which export JSON state to update the UI. Simultaneously, data is written to the local database and synchronized with the server.

3. The local storage layer is abstracted through three unified interfaces (list, save, read), using platform-appropriate storage solutions: IndexedDB for browsers, file system for Electron desktop, and Capacitor Filesystem for iOS and Android.

4. Implemented end-to-end encryption and incremental synchronization. Before syncing, the system calculates differences based on server and client versions, encrypts data using AES before uploading. The server maintains a base version with its content and incremental patches between versions. When accumulated patches reach a certain size, the system uploads an encrypted full database as the new base version, keeping subsequent patches lightweight.
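The base-plus-patches compaction in point 4 can be sketched roughly like this (encryption and diffing omitted; all names are illustrative, not from the project):

```typescript
// Illustrative sketch of the base-version + incremental-patch scheme:
// patches accumulate on top of a full snapshot, and once they grow past
// a threshold the full state becomes the new base.
interface SyncStore {
  base: string;      // full snapshot at some version
  patches: string[]; // incremental diffs applied on top of base
}

const MAX_PATCH_BYTES = 1024; // compaction threshold (arbitrary here)

function pushPatch(store: SyncStore, patch: string, fullState: string): SyncStore {
  const patchBytes =
    store.patches.reduce((n, p) => n + p.length, 0) + patch.length;
  if (patchBytes >= MAX_PATCH_BYTES) {
    // Accumulated patches are large: upload the full state as the new
    // base and start a fresh, lightweight patch chain.
    return { base: fullState, patches: [] };
  }
  return { base: store.base, patches: [...store.patches, patch] };
}
```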

If you're interested in this project, please visit https://github.com/hamsterbase/tasks

bob1029|6 months ago

I'm all-in on SSR. The client shouldn't have any state other than the session token, current URL and DOM.

Networks and servers will only get faster. The speed of light is constant, but we aren't even using its full capabilities right now. Hollow-core fiber promises upward of a 30% reduction in latency for everyone using the internet. There are RF-based solutions that provide some of this promise today. Even with a wild RTT of 500ms, an SSR page rendered in 16ms would feel relatively instantaneous next to any of the mainstream web properties online today if delivered on that connection.

I propose that there is little justification to take longer than a 60hz frame to render a client's HTML response on the server. A Zen5 core can serialize something like 30-40 megabytes of JSON in this timeframe. From the server's perspective, this is all just a really fancy UTF-8 string. You should be measuring this stuff in microseconds, not milliseconds. The transport delay being "high" is not a good excuse to get lazy with CPU time. Using SQLite is the easiest way I've found to get out of millisecond jail. Any hosted SQL provider is like a ball & chain when you want to get under 1ms.
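The serialization claim is easy to sanity-check yourself. A rough micro-benchmark sketch (numbers depend entirely on runtime and hardware, and JS will be slower than the hand-tuned native case the comment describes):

```typescript
// Rough sanity check of server-side JSON serialization cost.
// The point is the order of magnitude: this should land in the
// microsecond-to-low-millisecond range, far below network RTTs.
const rows = Array.from({ length: 10_000 }, (_, i) => ({
  id: i,
  name: `item-${i}`,
  done: i % 2 === 0,
}));

const start = performance.now();
const body = JSON.stringify(rows); // "just a really fancy UTF-8 string"
const elapsedMs = performance.now() - start;

console.log(`${(body.length / 1024).toFixed(0)} KiB serialized in ${elapsedMs.toFixed(2)}ms`);
```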

There are even browser standards that can mitigate some of the navigation delay concerns:

https://developer.mozilla.org/en-US/docs/Web/API/Speculation...

random3|6 months ago

> networks and servers will only get faster

this isn't an argument for SSR. In fact there's hardly a universal argument for SSR. You're thinking of a specific use-case where there's more compute capacity on the server, where logic can't be easily split, etc. There are plenty of examples that make the client-side rendering faster.

Rendering logic can be disproportionately complex relative to the data size. Moreover, client resources may actually be larger in aggregate than the server's. If SSR were the only reasonable game in town, we wouldn't have the excitement around WebAssembly.

Also take a look at the local-computation post https://news.ycombinator.com/item?id=44833834

The reality is that you can't know which one is better and you should be able to decide at request time.

TimTheTinker|6 months ago

If you could simply drop in a library to any of your existing SSR apps that:

- is 50kb (gzipped)

- requires no further changes from you (either now or in the future)

- enables offline/low bandwidth use of your app with automatic state syncing and zero UX degradation

would you do it?

The problem I see with SSR evangelism is that it assumes that compromising that one use case (offline/low bandwidth use of the app) is necessary to achieve developer happiness and a good UX. And in some cases (like this) it goes on to justify that compromise with promises of future network improvements.

The fact is, a low bandwidth requirement will always be a valuable feature, no matter the context. It's especially valuable to people in third-world countries, in remote locations, or being served by Comcast (note I'm being a little sarcastic with that last one).

whizzter|6 months ago

RightToolForTheRightJob!

Would you try to write/work on a collaborative text document (i.e. Google Docs or Sheets) by editing a paragraph/sentence that's server-side rendered, and hope nobody changes the paragraph mid-work, because the developers insisted on SSR?

These kinds of tools (Docs, Sheets, Figma, Linear, etc) work well because changes have little impact, and conflict resolution is best avoided by users noticing that someone else is working on the same thing and simply getting realtime updates.

Then again, hotel booking or similar has no need for something like that.

Then there's middle ground, like an enterprise logistics app that had some badly YOLO'd syncing. It kinda needed some of it, but there was no upfront planning, and it took time to retrofit a sane design since there were so many domain- and system-specific things lurking with surprises.

b_e_n_t_o_n|6 months ago

This is called happy-path engineering, and it's really frustrating for people who don't live on the happy path.

packetlost|6 months ago

Latency is additive, so all the copper coax and mux/demux between a sizeable chunk of Americans and the rest of the internet means you're looking at a minimum roundtrip latency of 30ms even if the server is in the same city. Most users are also on Wi-Fi, which adds an additional mux/demux + rebroadcast step on top of that. And most people do not have the latest CPU. Not to mention mobile users over LTE.

Sorry, but this is 100% a case of privileged developers thinking their compute infrastructure situation generalizes: it doesn't, and it is a mistake to take shortcuts that assume so.

SJC_Hacker|6 months ago

The use case for SSR now and in the future is on initial page load, especially on mobile.

After that, with competent engineering everything should be faster on the client, since it only needs state updates, not a complete re-render

If you don't have competent engineering, SSR isn't going to save you

petralithic|6 months ago

ElectricSQL and TanStack DB are great, but I wonder why they focus so much on local first for the web over other platforms. As in, I see mobile being the primary local-first use case since you may not always have internet. In contrast, typically if you're using a web browser in any capacity, you'll have internet.

Also, the former technologies are local first in theory, but without conflict resolution they can break down easily. This has been my experience making mobile apps that need to be local first, which led me to using CRDTs for that use case.

jitl|6 months ago

Because building local first with web technologies is like infinity harder than building local first with native app toolkits.

Native app is installed and available offline by default. A website needs a bunch of weird shenanigans with AppManifest or ServiceWorker, which is more like a bunch of parts you can maybe use to build "available offline" yourself.

Native apps can just… make files, read and write from files with whatever 30 year old C code, and the files will be there on your storage. Web you have to fuck around with IndexedDB (total pain in the ass), localStorage (completely insufficient for any serious scale, will drop concurrent writes), or OriginPrivateFileSystem. User needs to visit regularly (at least once a month?) or Apple will erase all the local browser state. You can use JavaScript or hit C code with a wrench until it builds for WASM w/ Emscripten, and even then struggle to make sync C deal with waiting on async web APIs.

Apple has offered CoreData + CloudKit since 2015, a complete first-party solution for local apps that sync, no backend required. I'm not a Google enthusiast; maybe Firebase is their equivalent? Idk.

aboodman|6 months ago

I think this is a fascinating and deep question, that I ponder often.

I don't feel like I know all the answers, but as the creator of Replicache and Zero here is why I feel a pull to the web and not mobile:

- All the normal reasons the web is great – short feedback loop, no gatekeepers, etc. I just prefer to build for the web.

- The web is where desktop/productivity software happens. I want productivity software that is instant. The web has many, many advantages and is the undisputed home of desktop software now, but ever since we went to the web, interaction performance has tanked. The reason is that all software (including desktop) is client/server now, and the latency shows up all over the place. I want to fix that in particular.

- These systems require deep smarts on the client – they are essentially distributed databases, and they need to run that engine client-side. So there is the question of what language to implement in. You would think that C++/Rust -> WASM would be obvious but there are really significant downsides that pull people to doing more and more in the client's native language. So you often feel like you need to choose one of those native languages to start with. JS has the most reach. It's most at home on the desktop web, but it also reaches mobile via RN.

- For the same reason as prev, the complex productivity apps that are often targeted by sync engines are often themselves written using mainly web tech on mobile. Because they are complex client-side systems and they need to pick a single impl language.

bhl|6 months ago

Mobile has really strong offline-primitives compared to the web.

But the web is primarily where a lot of productivity and collaboration happens; it’s also a more adversarial environment. Syncing state between tabs; dealing with storage eviction. That’s why local first is mostly web based.

swsieber|6 months ago

I think the current crop of sync engines greatly benefit from being web-first because they are still young and getting lots of updates. And mobile updates are a huge pain compared to webapp updates.

The PWA capabilities of webapps are pretty OK at this point. You can even drive notifications from the iOS pinned PWA apps, so personally, I get all I need from web apps pretending to be mobile apps.

owebmaster|6 months ago

Because web apps run in a web browser, which is the opposite of a local first platform.

Local-first is actually the default in any native app

946789987649|6 months ago

In this case it's not about being able to use the product at all, but about the joy of using an incredibly fast and responsive product, which is why you want it to be local-first.

jeremy_k|6 months ago

Not a lot of mention of the collaboration aspect that local-first / sync engines enable. I've been building a project using Zero that is meant to replace a Google Sheet a friend of mine uses for his business. He routinely gets on a Google Meet with a client, they both open the Sheet, and then they go through the data.

Before the emergence of tools like Zero I wouldn't have ever considered attempting to recreate the experience of a Google Sheet in a web app. I've previously built many live updating UIs using web sockets but managing that incoming data and applying it to the right area in the UI is not trivial. Take that and multiply it by 1000 cells in a Sheet (which is the wrong approach anyway, but it's what I knew how to build) and I can only imagine the mess of code.

Now with Zero, I write a query to select the data and a mutator to change the data and everything syncs to anyone viewing the page. It is a pleasure to work with and I enjoy building the application rather than sweating dealing with applying incoming hyper specific data changes.

blixt|6 months ago

I've been very impressed by Jazz -- it enables great DX (you're mostly writing sync, imperative code) and great UX (everything feels instant, you can work offline, etc).

Main problems I have are related to distribution and longevity -- as the article mentions, it only grows in data (which is not a big deal if most clients don't have to see that), and another thing I think is more important is that it's lacking good solutions for public indexes that change very often (you can in theory have a public readable list of ids). However, I recently spoke with Anselm, who said these things have solutions in the works.

All in all local-first benefits often come with a lot of costs that are not critical to most use cases (such as the need for much more state). But if Jazz figures out the main weaknesses it has compared to traditional central server solutions, it's basically a very good replacement for something like Firebase's Firestore in just about every regard.

ChadNauseam|6 months ago

Yeah, Jazz is amazing. The DX is unmatched. My issue when I used it was, they mainly supported passkey-based encryption, which was poorly implemented on windows. That made it kind of a non-starter for me, although I'm sure they'll support traditional auth methods soon. But I love that it's end-to-end encrypted and it's super fun to use.

mentalgear|6 months ago

Local-First & Sync-Engines are the future. Here's a great filterable datatable overview of the local-first framework landscape: https://www.localfirst.fm/landscape

My favorite so far is Triplit.dev (which can also be combined with TanStack DB); 2 more I like to explore are PowerSync and NextGraph. Also, the recent LocalFirst Conf has some great videos, currently watching the NextGraph one (https://www.youtube.com/watch?v=gaadDmZWIzE).

CodingJeebus|6 months ago

How is the database migration support for these tools?

Needing to support clients that don't phone home for an extended period and therefore need to be rolled forward from a really old schema state seems like a major hassle, but maybe I'm missing something. Trying to troubleshoot one-off front-end bugs for a single product user can be a real pain; I'd hate to see what it's like when you have to factor in the state of their schema as well.

rogerkirkness|6 months ago

Reminds me of Meteor back in the day.

virgil_disgr4ce|6 months ago

Thank you for this, I'm going to have to check out Triplit. Have you tried InstantDB? It's the one I've been most interested in trying but haven't yet.

tbeseda|6 months ago

They're also the past...

10us|6 months ago

Man, why aren't couchdb / pouchdb listed? Still works like a charm!

jFriedensreich|6 months ago

"Works like a charm" / "not listed" does not really do it justice; it's much worse. All of the mentioned "solutions" will inevitably lose data on conflicts one way or another, and I am not aware of anything from that school of thought that has full control over conflict resolution the way couchdb / pouchdb does. Apparently vibe-coding CRDT folks do not value data safety over some more developer ergonomics. It is a tradeoff to make for your own hobby projects if you are honest about it, but I don't understand how this is just completely ignored in every single one of these posts.

JusticeJuice|6 months ago

I remember being literally 12 when Google Docs launched, featuring real-time sync and a collaborative cursor. I remember thinking that this was how all web experiences would be in the future; at the time 'cloud computing' was the buzzword, and I (incorrectly) thought realtime collaboration was the very definition of cloud computing.

And then it just... never happened. 20 years went by, and most web products are still CRUD experiences, this site included.

The funny thing is it feels like it's been on the verge of becoming mainstream for all this time. When meteor.js got popular I was really excited, and then with react surely it was gonna happen - but even now, it's still not the default choice for new software.

I'm still really excited to see it happen, and I do think it will happen eventually - it's just trickier than it looks, and it's tricky to make the tooling so cheap that it's worth it in all situations.

SJC_Hacker|6 months ago

Real-time collaboration? Discord (not fundamentally different from IRC, which has been around since the 90s), Zoom (or any other teleconferencing software)

This site being a CRUD app is a feature. Sometimes simplicity is best. I wouldn't want realtime updates, too distracting.

levmiseri|6 months ago

I feel the same way. The initial magic of real-timeness felt like a glimpse into a future that... where is it?

I'm still excited about the prospects of it — shameless plug: actually building a tool with one-of-a-kind messaging experience that's truly real-time in the Google docs collaboration way (no compose box, no send button): https://kraa.io/hackernews

abandonliberty|6 months ago

The speed of light is rather unaccommodating.

We run into human-perceptible relativistic limits in latency. Light takes 56ms to travel half the earth's circumference, and our signals are often worse off. They don't travel in an idealized straight path, get converted to electrons and radio waves, and have to hop through more and more hoops like load balancers and DDOS protections.

In many cases latency is worse than it used to be.

mkarliner|6 months ago

Meteor was/is a very similar technology. And I did some fairly major projects with it.

mentalgear|6 months ago

Meteor was amazing, I don't understand why it never got sustainable traction.

vlasky|6 months ago

Meteor is alive and well and actively maintained. It just doesn't get attention for some reason. Version 3.3.1 was released 4 days ago.

sergioisidoro|6 months ago

I really like Electric's approach, and it has been on my radar for a long time, because it just leaves the write complexity to you and your API.

Most of the solutions with 2-way sync I see work great in simple REST and hobby "Todo app" projects. Start adding permissions, evolving business logic, migrations, a growing product and such, and I can't see how they can hold up for very long.

Electric gives you the sync for reads with their "views", but all writes still happen normally through your existing api / rest / rpc. That also makes it a really nice tool to adopt in existing projects.

thruflo|6 months ago

> Electric’s approach is compelling given it works with existing Postgres databases. However, one gap remains to fill, how to handle mutations?

Just to note that, with TanStack DB, Electric now has first class support for local writes / write-path sync using transactional optimistic mutations:

https://electric-sql.com/blog/2025/07/29/local-first-sync-wi...

minikomi|6 months ago

My kingdom for a team organised by org mode files through a git repo

croes|6 months ago

But how is conflicting data handled?

For instance, one user closes something and another aborts the same thing.

terencege|6 months ago

I'm also building a local-first editor and rolling my own CRDTs. There are enormous challenges in making it work. For example, for the storage size issue mentioned in the blog, I ended up using yjs' approach, which only increases the clock for upsertions; for deletions it removes the content and retains only the deleted item ids, which can be efficiently compressed since most ids are contiguous.
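The "contiguous ids compress well" point can be sketched as simple run encoding (illustrative only, not yjs' actual implementation):

```typescript
// Sketch of why retaining only deleted item ids is cheap: contiguous
// runs of ids collapse into (start, length) pairs.
function compressIds(ids: number[]): Array<[start: number, len: number]> {
  const runs: Array<[number, number]> = [];
  for (const id of [...ids].sort((a, b) => a - b)) {
    const last = runs[runs.length - 1];
    if (last && id === last[0] + last[1]) {
      last[1] += 1; // id extends the current run
    } else {
      runs.push([id, 1]); // id starts a new run
    }
  }
  return runs;
}
```

A tombstone set like `[1, 2, 3, 7, 8, 10]` becomes three pairs instead of six ids, and the savings grow as deletions cluster.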

jddj|6 months ago

In case you missed it and it's relevant: there was an Automerge v3 announcement posted here the other day which claimed some nice compression numbers as well.

CafeRacer|6 months ago

We're using dexie+rxjs. A killer combination.

Described here https://blog-doe.pages.dev/p/my-front-end-state-management-a...

I've already made improvements to that approach. Decoupling the backend and front end actually feels like reducing complexity.

floydnoel|6 months ago

Are you using the cloud sync with Dexie? I built an app on it, but it seems to have a hard time switching from local to cloud mode and vice versa. I'm not sure they ever thought people would want to, but then why bother making cloud setup calls for users that didn't want it?

nicoritschel|6 months ago

I've been down this rabbit hole as well. Many of the sync projects seem great at first glance (and are very impressive technically) but perhaps a bit idealistic. Reactive queries are fantastic from a DX perspective, but any of the "real" databases running in the browser like SQLite or PGlite store database pages in IndexedDB, as there are data longevity issues with OPFS (IIRC Safari aggressively purges it after a week of inactivity). Maybe the solution is just storing caches in the user's home directory with the filesystem API, like a native application.

Long story short, if requirements aren't strictly realtime-collaborative and online-enabled, I've found rolling something yourself more in the vein of a "fat client" works pretty well too, for a nice performance boost. I generally prefer using IndexedDB directly (well, via Dexie, which has reactive query support).

b_e_n_t_o_n|6 months ago

Local first is super interesting and absolutely needed - I think most of the bugs I run into with web apps have to do with sync, exacerbated by poor internet connectivity. The local properties don't interest me as much as request ordering and explicit transactions. You aren't guaranteed that requests resolve in order, and thus can result in a lot of inconsistencies. These local-first sync abstractions are a bit like bringing a bazooka to a water gun fight - it would be interesting to see some halfway approaches to this problem.
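One halfway approach hinted at here can be sketched with sequence numbers: tag each request and discard responses that arrive out of order. An illustrative sketch, not any library's API:

```typescript
// A "halfway" fix for the ordering problem: tag each request with a
// monotonically increasing sequence number and drop responses that
// arrive after a newer request's response has already been applied.
class LatestWins {
  private nextSeq = 0;
  private appliedSeq = -1;

  // Call when a request is issued; returns its sequence number.
  begin(): number {
    return this.nextSeq++;
  }

  // Call when a response arrives. Returns true if it was applied,
  // false if a newer response already won.
  apply(seq: number, onFresh: () => void): boolean {
    if (seq <= this.appliedSeq) return false; // stale, out-of-order response
    this.appliedSeq = seq;
    onFresh();
    return true;
  }
}
```

This gives last-write-wins consistency for UI state without bringing in a full sync engine; explicit transactions would need more machinery on top.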

sturza|6 months ago

Local-first buys you instant UX by moving state to the client, and then makes everything else a little harder

CharlieDigital|6 months ago

    > instant UX
I do not get the hype. At all.

"Local first" and "instant UX" are the least of my concerns when it comes to project management. "Easy to find things" and "good visibility" are far more important. Such a weird thing to index on.

I might interact with the project management tool a few times a day. If I'm so frequently accessing it as an IC or an EM that "instant UX" becomes a selling point, then I'm doing something wrong with my day.

JamesSwift|6 months ago

I'd say you are underreporting how much harder everything else becomes, but yes, definitely agreed

captainregex|6 months ago

this is such a clean and articulate way of putting it. The discussion around here the last few days about local and the role it is going to play has been phenomenal and really genuine

antgiant|6 months ago

I’ve been working on a small browser app that is local first and have been trying to figure out how to pair it with static hosting. It feels like this should be possible but so far the tooling all seems stuck in the mindset of having a server somewhere.

My use case is scoring live events that may or may not have Internet connection. So normal usage is a single person but sometimes it would be nice to allow for multi person scoring without relying on centralized infrastructure.

chr15m|6 months ago

I was in the same boat and I found Nostr is a perfect fit. You can write a 100% client side no-server app and persist your data to relays.

Here's the app I built if you want to try it out: https://github.com/chr15m/watch-later

swsieber|6 months ago

Honestly, having used InstantDB (one of the providers listed in their post), I think it'd be a pretty nice fit.

I've been writing a budget app for my wife and me, and I've made it 100% free with 3rd party hosting:

* InstantDB free tier allows 1 dev. That's the remote sync.

* Netlify for the static hosting

* Free private GitLab CI/CD for running some email notification polling, basically a poor man's hosted cron.

nchmy|6 months ago

Local first is fantastic. But something that I can't figure out is why the OG of local first, RxDB, never gets any love.

As far as I can tell, it's VASTLY more capable than all of these new options. It has full-text search, all sorts of query optimizations, different storage backends in both the browser and server, and more.

fredguth|6 months ago

RxDB is the OG? I thought it was PouchDB.

qweiopqweiop|6 months ago

It's starting to feel to me that a lot of tech is just converging on other platforms solutions. This for example sounds incredibly similar to how a mobile app works (on the surface). Of course it goes the other way too, with mobile tech taking declarative UIs from the Web.

_aobj|6 months ago

Check out "Distributed Quantum Computing across an optical network link" by D.Main. Et Al. 2024. As it seems that the goals of Linear, are closely aligned with distributed Quantum Computing. Having decided I need a distributed Website with quantum entanglement. For my Web shop to sell "Federation Crypto currency" I find this post very instructive, towards that goal. " It is possible to use so little javascript and css that it makes more sense to inline it. SSR enables this " Does this imply compile time goals of browser are potentially compatible with a Distributed Quantum Computing Website? a local-first approach

Gravityloss|6 months ago

Some problem on the site. Too much traffic?

    Secure Connection Failed
    An error occurred during a connection to bytemash.net. PR_END_OF_FILE_ERROR
    Error code: PR_END_OF_FILE_ERROR

jcusch|6 months ago

It looks like I was missing a www subdomain CNAME for the underlying github pages site. I think it's fixed now.

incorrecthorse|6 months ago

> For the uninitiated, Linear is a project management tool that feels impossibly fast. Click an issue, it opens instantly. Update a status and watch in a second browser, it updates almost as fast as the source. No loading states, no page refreshes - just instant interactions.

How garbage the web has become for a low-latency click action being qualified as "impossibly fast". This is ridiculous.

mossTechnician|6 months ago

Hacker News comment sections are the only part of the internet that still feel "impossibly fast" to me. Even on Android, thousands of comments can scroll as fast as the OS permits, and the DOM is so simple that I've reopened day-old tabs to discover the page is still loaded. Even projects like Mastodon and Lemmy, which aren't beholden to modern web standards, have shifted to significant client-side scripting that lacks the finesse to appear performant.

o_m|6 months ago

Back in 2018 I worked for a client that required we used Jira. It was so slow that the project manager set everything up in Excel during our planning meetings. After the meeting she would manually transfer it to Jira. She spent most of her time doing this. Each click in the interface took multiple seconds to respond, so it was impossible to get into a flow.

jitl|6 months ago

A web request to a data center even with a very fast backend server will struggle to beat 8ms (120hz display) or even 16ms (60hz display), the budget for next frame painting a navigation. You need to have the data local to the device and ideally already in memory to hit 8ms navigation.

andy99|6 months ago

I also winced at "impossibly fast" and realize that it must refer to some technical perspective that is lost on most users. I'm not a front-end dev; I use Linear, and I'd say I didn't notice the speed. It seems to work about the same as any other web app. I don't doubt it's got cool optimizations, but I think they're lost on most people that use it. (I don't mean to say optimization isn't cool.)

jallmann|6 months ago

Linear is actually so slow for me that I dread having to go into it and do stuff. I don’t care if the ticket takes 500ms to load, just give me the ticket and not a fake blinking cursor for 10 seconds or random refreshes while it (slowly) tries to re-sync.

Everything I read about Linear screams over-engineering to me. It is just a ticket tracker, and a rather painful one to use at that.

This seems to be endemic to the space though, e.g. Asana tried to invent their own language at one point.

lwansbrough|6 months ago

Trite remark. The author was referring to behaviour that has nothing to do with “how the web has become.”

It is specifically to do with behaviour that is enabled by using shared resources (like IndexedDB across multiple tabs), which is not simple HTML.

To do something similar over the network, you have until the next frame deadline. That's 8-16ms, including the RTT. So 4ms out and 4ms back, with a 0ms budget for processing. Good luck!

wim|6 months ago

Funny how reasonable performance is sometimes treated as an impossible lost art on the web now.

I posted a little clip [1] of development on a multiplayer IDE for tasks/notes (local-first+e2ee), and a lot of people asked if it was native, rust, GPU rendered or similar. But it's just web tech.

The only "secret ingredients" here are using plain ES6 (no frameworks/libs), having data local-first with background sync, and using a worker for off-UI-thread tasks. Fast web apps are totally doable on the modern web, and sync engines are a big part of it.

[1] https://x.com/wcools/status/1900188438755733857

fleabitdev|6 months ago

I was also surprised to read this, because Linear has always felt a little sluggish to me.

I just profiled it to double-check. On an M4 MacBook Pro, clicking between the "Inbox" and "My issues" tabs takes about 100ms to 150ms. Opening an issue, or navigating from an issue back to the list of issues, takes about 80ms. Each navigation includes one function call which blocks the main thread for 50ms - perhaps a React rendering function?

Linear has done very good work to optimise away network activity, but their performance bottleneck has now moved elsewhere. They've already made impressive improvements over the status quo (about 500ms to 1500ms for most dynamic content), so it would be great to see them close that last gap and achieve single-frame responsiveness.

zwnow|6 months ago

Web applications have become too big and heavy. Corps want to control everything. A simple example would be a simple note taking app which apparently also has to sync throughout devices. They are going to store every note you take on their servers, who knows if they really delete your deleted notes. They'll also track how often you visited your notes for whatever reasons. Wouldn't surprise me if the app also required geolocation and stuff like that for whatever reason. Mix that with lots of users and you will have loading times unheard of with small scale apps. Web apps should scale down but like with everything we need more more more bigger better faster.

andrepd|6 months ago

It is definitely ridiculous. And it's not just a nitpick: it's ludicrous how sloooow and laggy typing text in a monstrosity like Jira is, or just reading through an average news site. It makes everything feel like a slog.

tomwphillips|6 months ago

Indeed. I have been using it for 5-6 months in a new job and I haven't noticed it being faster than the typical web app.

If anything it is slow because it is a pain to navigate. I have browser bookmarks for my most frequented pages.

captainregex|6 months ago

One of my day-to-day responsibilities involves using a portal tied to MSFT Dynamics on the back end, and it is the laggiest, most terrible experience ever. We used to have Java apps that ran locally, then moved to this in the name of cloud migration, and it feels like it was designed by someone whose product knowledge was limited to the first 2 of 5 lessons in a free Coursera (RIP) module.

presentation|6 months ago

Since it's so easy, I'm rooting for you to make some millions with performant replacements for other business tools; should be a piece of cake.

OJFord|6 months ago

I don't know if 'the web' in general is fair, here the obvious comparison is Jira, which is dog slow & clunky.

rylan-talerico|6 months ago

I'm a big fan of local-first. InstantDB has productized it – worth looking into if you're interested in taking a local-first approach.

mizzao|6 months ago

Is this technical architecture so different from Meteor back in the day? Just curious for those who have a deeper understanding.

yanis_t|6 months ago

I don't get it. You still have to sync the state one way or another, network latency is still there.

Aldipower|6 months ago

Me neither. Considering we are talking about collaborative network applications, you lose the single source of truth (the server database) with the local-first approach. And it just adds so much more complexity. Also, as your app grows, you'll probably end up implementing the business logic twice: on the server and locally. I really do not get it.
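(The "logic twice" problem is real, though one common mitigation is to put each mutation in a shared module that both the client, for the optimistic local apply, and the server, for the authoritative apply, import. A minimal sketch with illustrative names:)

```javascript
// Shared module: the mutation itself, written once, side-effect free.
function completeTask(task, now) {
  if (task.done) throw new Error("already done");
  return { ...task, done: true, completedAt: now };
}

// Client side: apply optimistically to local state for instant UI.
const local = completeTask({ id: 1, done: false }, 1000);

// Server side: the same function produces the authoritative result,
// so client and server can't drift apart in their business rules.
const authoritative = completeTask({ id: 1, done: false }, 1000);
```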

WickyNilliams|6 months ago

The latency is off the critical path with local-first. You still sync changes over the network, but your local mutations are stored directly and immediately in a local DB.

croes|6 months ago

But the user gets instant results

tommoor|6 months ago

If you want to work on Linear's sync infrastructure or product – we're hiring. The day-to-day DX is incredible.

theappsecguy|6 months ago

You should put pay bands on job listings to save everyone time and sanity.

marmalar|6 months ago

I'm curious what the WLB is like?

ivape|6 months ago

[deleted]

dewey|6 months ago

It's a developer writing about a tool they like. If you'd call word of mouth an "ad" then I guess it's one.

jcusch|6 months ago

An ad for what? I'm not associated with any of the projects mentioned.

yard2010|6 months ago

Best kind of ad for the best kind of service (I'm not affiliated, this is not an ad): an organic one.

The enshittification that Jira went through is by itself an ad for Linear.

mbaranturkmen|6 months ago

How is this approach better than using react-query with persisted storage that periodically syncs local storage with the server? Perhaps I am missing something.

petralithic|6 months ago

That approach is precisely what the new TanStack DB does, which, if you don't know, has the same creator as React Query. The former extends the latter's principles to syncing via ElectricSQL; the two organizations have a partnership with each other.

0xblinq|6 months ago

Yes, you're missing a lot of things. Like how you update that data while offline, and how everything syncs up when others have made updates to the same data in the meantime. Among many other concerns and use cases.
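To make the concurrent-update point concrete, here's a toy sketch (illustrative, not any library's API) of what naive periodic sync does to two offline edits of the same record, versus a field-level merge of the kind CRDT-based engines formalise:

```javascript
// Two clients edit different fields of the same task while offline.
const base    = { title: "Draft", done: false };
const clientA = { ...base, title: "Final" }; // A renames the task
const clientB = { ...base, done: true };     // B completes it

// Naive whole-object sync: whoever uploads last wins.
let naive = { ...clientA }; // A syncs first...
naive = { ...clientB };     // ...then B overwrites the whole object.
// naive.title is back to "Draft": A's rename silently disappeared.

// Field-level merge: keep changes per field instead of per object.
// (If both clients changed the same field you still need a tiebreak,
// e.g. per-field timestamps; omitted here for brevity.)
function mergeFields(base, a, b) {
  const merged = { ...base };
  for (const key of Object.keys(base)) {
    if (a[key] !== base[key]) merged[key] = a[key];
    if (b[key] !== base[key]) merged[key] = b[key];
  }
  return merged;
}

const merged = mergeFields(base, clientA, clientB);
// merged => { title: "Final", done: true }: both edits survive
```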