top | item 41322281

Show HN: InstantDB – A Modern Firebase

1145 points | nezaj | 1 year ago | github.com

Hey there HN! We’re Joe and Stopa, and today we’re open sourcing InstantDB, a client-side database that makes it easy to build real-time and collaborative apps like Notion and Figma.

Building modern apps these days involves a lot of schleps. For a basic CRUD app you need to spin up servers, wire up endpoints, integrate auth, add permissions, and then marshal data from the backend to the frontend and back again. If you want to deliver a buttery smooth user experience, you’ll need to add optimistic updates and rollbacks. We do these steps over and over for every feature we build, which can make it difficult to build delightful software. Could it be better?
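The optimistic-update pattern boils down to a small loop: apply the change locally right away, try to persist it, and restore the old state if persistence fails. A minimal sketch (illustrative only, not Instant's implementation):

```javascript
// Minimal sketch of optimistic updates with rollback (illustrative only,
// not Instant's implementation).
function createStore(initial) {
  let state = initial;
  return {
    get: () => state,
    async update(patch, persist) {
      const snapshot = state;
      state = { ...state, ...patch }; // optimistic: the UI sees this immediately
      try {
        await persist(patch); // attempt the backend write
      } catch (err) {
        state = snapshot; // rollback on failure
        throw err; // let the caller surface an error toast, etc.
      }
    },
  };
}
```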

We were senior and staff engineers at Facebook and Airbnb and had been thinking about this problem for years. In 2021, Stopa wrote an essay talking about how these schleps are actually database problems in disguise [1]. In 2022, Stopa wrote another essay sketching out a solution: a Firebase-like database with support for relations [2]. In the last two years we got the backing of James Tamplin (CEO of Firebase), grew to a team of 5 engineers, pushed nearly 2,000 commits, and today we're open source.

Making a chat app in Instant is as simple as:

    function Chat() {
      // 1. Read
      const { isLoading, error, data } = useQuery({
        messages: {},
      });
    
      // 2. Write
      const addMessage = (message) => {
        transact(tx.messages[id()].update(message));
      }
    
      // 3. Render!
      return <UI data={data} onAdd={addMessage} />
    }
Instant gives you a database you can subscribe to directly in the browser. You write relational queries in the shape of the data you want and we handle all the data fetching, permission checking, and offline caching. When you write transactions, optimistic updates and rollbacks are handled for you as well.

Under the hood we save data to Postgres as triples and wrote a datalog engine for fetching data [3]. We don’t expect you to write datalog queries, so we built a GraphQL-like query language that doesn’t require any build step.
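To make the triple idea concrete, here's a toy illustration (not Instant's actual engine): every fact is an `[entity, attribute, value]` row, and a datalog-style query is a pattern match with wildcards.

```javascript
// Toy triple store: every fact is an [entity, attribute, value] row.
const triples = [
  ["msg-1", "text", "hello"],
  ["msg-1", "author", "joe"],
  ["msg-2", "text", "hi"],
  ["msg-2", "author", "stopa"],
];

// Datalog-style pattern match; `null` plays the role of a variable.
function match([e, a, v]) {
  return triples.filter(
    ([te, ta, tv]) =>
      (e === null || te === e) &&
      (a === null || ta === a) &&
      (v === null || tv === v)
  );
}

// "Which messages did joe write?"
const joeMsgs = match([null, "author", "joe"]).map(([e]) => e); // ["msg-1"]
```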

Taking inspiration from Asana’s WorldStore and Figma’s LiveGraph, we tail Postgres’ WAL to detect novelty and use last-write-wins semantics to handle conflicts [4][5]. We also handle websocket connections and persist data to IndexedDB on web and AsyncStorage on React Native, giving you multiplayer and offline mode for free.
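The last-write-wins idea can be sketched as a per-attribute timestamp comparison. This is a simplification of what a real sync engine does, but it shows the shape of the merge:

```javascript
// Simplified last-write-wins merge: each attribute write carries a
// timestamp, and the newer write wins. (Illustrative; a real engine also
// has to deal with clock skew, deletes, and so on.)
function lwwMerge(local, remote) {
  const merged = { ...local };
  for (const [attr, write] of Object.entries(remote)) {
    if (!merged[attr] || write.ts > merged[attr].ts) {
      merged[attr] = write;
    }
  }
  return merged;
}
```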

This is the kind of infrastructure Linear uses to power their sync and build better features faster [6]. Instant gives you this infrastructure so you can focus on what’s important: building a great UX for your users, and doing it quickly. We have auth, permissions, and a dashboard with a suite of tools for you to explore and manage your data. We also support ephemeral capabilities like presence (e.g. sharing cursors) and broadcast (e.g. live reactions) [7][8].

We have a free hosted solution where we don’t pause projects, we don’t limit the number of active applications, and we have no restrictions on commercial use. We can do this because our architecture doesn’t require spinning up a separate server for each app. When you’re ready to grow, we have paid plans that scale with you. And of course you can self-host both the backend and the dashboard tools on your own.

Give us a spin today at https://instantdb.com/tutorial and see our code at https://github.com/instantdb/instant

We love feedback :)

[1] https://www.instantdb.com/essays/db_browser

[2] https://www.instantdb.com/essays/next_firebase

[3] https://www.instantdb.com/essays/datalogjs

[4] https://asana.com/inside-asana/worldstore-distributed-cachin...

[5] https://www.figma.com/blog/how-figmas-multiplayer-technology...

[6] https://www.youtube.com/live/WxK11RsLqp4?t=2175s

[7] https://www.joewords.com/posts/cursors

[8] https://www.instantdb.com/examples?#5-reactions

297 comments

[+] jamest|1 year ago|reply
[Firebase founder] The thing I'm excited about w/Instant is the quad-fecta of offline + real-time + relational queries + open source. The amount of requests we had for relational queries was off-the-charts (and is a hard engineering problem), and, while the Firebase clients are OSS, I failed to open source a reference backend (a longer story).

Good luck, Joe, Stopa and team!

[+] ashconnor|1 year ago|reply
I always assumed that an architectural decision had prevented relational queries in Firebase.

It was jarring to find out that indexes are required for every combination of filters your app applies, but then you quickly realize that Firebase solves a particular problem and you're tempted to shoehorn it into a problem space better solved by something like Supabase.

It's not too dissimilar to DynamoDB vs RDB.

[+] 999900000999|1 year ago|reply
Thanks for creating Firebase!

It's really the definition of a managed database/datastore.

Do you see InstantDB as a drop in replacement ?

To be honest I don't want to have to worry about my backend. I want a place to effectively drop JSON docs and retract them later.

This is more than enough for a hobbyist project, though I imagine at scale things might not work as well.

[+] buggy6257|1 year ago|reply
This is an aside but “trifecta but with four” actually has an awesome name: “Superfecta”!
[+] robertlagrant|1 year ago|reply
You probably heard this a million times, but I still remember trying that simple Firebase demo (draw in one box, see the results in another) and being amazed. That was one of my pushes out of boring enterprise software death-by-configuration and into software creation based on modern OSS products.
[+] 650REDHAIR|1 year ago|reply
Was pretty neat to see your investment/involvement!

Made me feel quite old that Firebase is no longer "modern" though...

[+] Ozzie_osman|1 year ago|reply
Awesome to see this launch and to see James Tamplin backing this project.
[+] sibeliuss|1 year ago|reply
One bit of feedback: It's always appreciated when code examples on websites are complete. Your example isn't complete -- where's the `transact` import coming from, or `useQuery`? Small details like this go far as your product scales out to a wider user base.
[+] stopachka|1 year ago|reply
Thank you for the feedback, this makes sense!

I updated the example to include the imports:

```
import { init, tx, id } from "@instantdb/react";

const db = init({ appId: process.env.NEXT_PUBLIC_APP_ID });

function Chat() {
  // 1. Read
  const { isLoading, error, data } = db.useQuery({
    messages: {},
  });

  // 2. Write
  const addMessage = (message) => {
    db.transact(tx.messages[id()].update(message));
  };

  // 3. Render!
  return <UI data={data} onAdd={addMessage} />;
}
```

What do you think?

[+] android521|1 year ago|reply
Yes. This gives users the vibe of “this is obvious, if you don’t know it, you are dumb”.
[+] lelo_tp|1 year ago|reply
lol i was wondering the same thing
[+] codersfocus|1 year ago|reply
For those looking for alternatives in the offline-first space, I settled on PowerSync. Runner-up was WatermelonDB (don't let the name fool you). ElectricSQL is still too immature; they announced a rewrite this month. CouchDB / PocketDB aren't really up to date anymore.

Unfortunately this area is still immature, and there aren't really great options but PowerSync was the least bad. I'll probably pair it with Supabase for the backend.

[+] swalsh|1 year ago|reply
I'm wary of stuff like this. It's probably really useful for rapid iteration... but what a maintenance nightmare after 10 years, when your schema has evolved 100 times but you have existing customers in various states of completeness. I avoided Firebase when it came out for this reason. I had a few bad experiences maintaining applications built on top of Mongo that made it to production. It was a nightmare.
[+] EasyMark|1 year ago|reply
This is why I’ve always stayed well behind the bleeding edge, but still within earshot in case anything comes along that sounds like it’s of interest to me. I usually code for work, not for pleasure, although I do a little web programming for friends. Even there I still use jQuery and TypeScript; the only “new” thing I use is Tailwind, which is a bit of a game changer for what I like to do. I never liked CSS, but it worked well enough for my needs.
[+] mixmastamyk|1 year ago|reply
Did they say schemas aren’t supported, or is that implied by the firebase label?
[+] blixt|1 year ago|reply
I saw the reference to “apps like Figma”, and as one of the people who worked on Framer’s database (Framer is also a canvas-based app, also local + multiplayer), I find it hard to imagine how to effectively synchronize canvas data with a relational database like Postgres. Users will frequently work on thousands of nodes in parallel and perform dragging updates at 60 FPS, which should at least be propagated to other clients frequently.

Does Instant have a way to merge many frequent updates into fewer Postgres transactions while maintaining high frequency for multiplayer?

Regardless this is super cool for so many other things where you’re modifying more regular app data. Apps often have bugs when attempting to synchronize data across multiple endpoints and tend to drift over time when data mutation logic is spread across the code base. Just being able to treat the data as one big object usually helps even if it seems to go against some principles (like microservices but don’t get me started on why that fails more often than not due to the discipline it requires).
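For what it's worth, the coalescing asked about above can be sketched generically (made-up names, not Instant's API): keep only the latest update per node, and flush batches on a timer, so 60 FPS drag events become a few writes per second.

```javascript
// Generic sketch of write coalescing (made-up names, not Instant's API):
// keep only the latest update per node, flush all pending nodes as one
// batched write.
function createCoalescer(flushFn) {
  const pending = new Map(); // nodeId -> latest update for that node
  return {
    push: (update) => pending.set(update.nodeId, update),
    flush: () => {
      if (pending.size === 0) return;
      flushFn([...pending.values()]); // one batched write
      pending.clear();
    },
  };
}

// In practice you'd call `flush` on an interval, e.g. every 250 ms.
```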

[+] lewisl9029|1 year ago|reply
Congrats on the launch! :)

Apparently I signed up for Instant previously but completely forgot about it. Only realized I had an account when I went to the dashboard to find myself still logged in. I dug up the sign up email and apparently I signed up back in 2022, so some kind of default invalidation period on your auth tokens would definitely make me a bit more comfortable.

Regardless, I'm still as excited about the idea of a client-side, offline-first, realtime syncing db as ever, especially now that the space has really been picking up steam with new entrants showing up every few weeks.

One thing I was curious about is how well the system currently supports users with multiple emails? GitHub popularized this pattern, and these days it's pretty much table stakes in the dev tools space to be able to sign in once and use the same account across personal accounts and orgs associated with different emails.

Looking at the docs I'm getting the sense that there might be an assumption of 1 email per user in the user model currently. Is that correct? If so, any plans to evolve the model to become more flexible?

[+] stopachka|1 year ago|reply
Noted about the refresh tokens, thank you!

> One thing I was curious about is how well the system currently supports users with multiple emails? GitHub popularized this pattern, and these days it's pretty much table stakes in the dev tools space to be able to sign in once and use the same account across personal accounts and orgs associated with different emails

Right now there is an assumption of 1 `user` object per email. You could create an entity like `workspace` inside Instant, and tie multiple users together this way for now.

However, making the `user` support multiple identities, and creating recipes for common data models (like workspaces) is on the near-term roadmap.

[+] coffeemug|1 year ago|reply
Congrats on the launch! I think Firebase was started in 2011, and it's incredible that 13 years later the problem is still unsolved in an open way. We took a shot at this at RethinkDB but fell short. If I were doing this again today, Instant is how I would build it. Rooting for you!
[+] stopachka|1 year ago|reply
I really appreciate your message Slava. Your essays were really influential for us.
[+] antidnan|1 year ago|reply
I've been using Instant for about 6 months and have been very happy. Realtime, relational, and offline were the most important things for us, building out a relatively simple schema (users, files, projects, teams) that also is local first. Tried a few others unsuccessfully and after Instant, haven't looked back.

Congrats team!

[+] breatheoften|1 year ago|reply
What's the short summary of how the authorization system works for this?

One of the things I find quite nice about firebase is the quite powerful separation between the logic of data retrieval / update and the enforcement of access policy -- if you understand it you can build the prototype on a happy path with barely any authorization enforcement and then add it later and have quite complete confidence that you aren't leaking data between users or allowing them to change something they shouldn't be able to. Although you do need to keep the way this system works in mind as you build and I have found that developers often don't really grasp the shape of these mechanisms at first

From what I can tell -- the instant system is different in that the permission logic is evaluated on the results of queries -- vs firebase which enforces whether the query is safe to run prior to it even being executed ...

[+] the_duke|1 year ago|reply
I've found triple stores to have pretty poor performance when most of your queries fetch full objects, or many fields of the same object, which in the real world seems to be very common.

Postgres also isn't terrible, but also not brilliant for that use case.

How has your experience been in that regard?

[+] remolacha|1 year ago|reply
I really want an ActiveRecord-like experience.

In ActiveRecord, I can do this:

```rb
post = Post.find_by(author: "John Smith")
post.author.email = "[email protected]"
post.save
```

In React/Vue/Solid, I want to express things like this:

```jsx
function BlogPostDetailComponent(props) {
  // `subscribe` or `useSnapshot` or whatever would be the hook that
  // gives me a reactive post object
  const post = subscribe(Posts.find(props.id));

  function updateAuthorName(newName) {
    // This should handle the join between posts and authors and
    // optimistically update the UI
    post.author.name = newName;

    // This should attempt to persist any pending changes to browser storage,
    // then sync to the remote db, rolling back changes if there's a failure,
    // and giving me an easy way to show an error toast if the update failed.
    post.save();
  }

  return (
    <>
      ...
    </>
  );
}
```

I don't want to think about joining up-front, and I want the ORM to give me an object-graph-like API, not a SQL-like API.

In ActiveRecord, I can fall back to SQL or build my ORM query with the join specified to avoid N+1s, but in most cases I can just act as if my whole object graph is in memory, which is the ideal DX.

[+] stopachka|1 year ago|reply
Absolutely. Instant has similar design goals to Rails and ActiveRecord

Here are some parallels to your example:

A. ActiveRecord:

```
post = Post.find_by(author: "John Smith")
post.author.email = "[email protected]"
post.save
```

B. Instant:

```
db.transact(
  tx.users[lookup('author', 'John Smith')].update({ email: '[email protected]' }),
);
```

> In React/Vue/Solid, I want to express things like this:

Here's what the React/Vue code would look like:

```
function BlogPostDetailComponent(props) {
  // `useQuery` is equivalent to the `subscribe` that you mentioned:
  const { isLoading, data, error } = db.useQuery({
    posts: { author: {}, $: { where: { id: props.id } } },
  });

  if (isLoading) return ...
  if (error) return ...

  const post = data.posts[0];

  function updateAuthorName(newName) {
    // `db.transact` does what you mentioned:
    // it attempts to persist any pending changes to browser storage, then
    // syncs to the remote db, rolling back changes if there's a failure, and
    // gives an easy way to show an error toast if the update failed. (it's awaitable)
    db.transact(
      tx.authors[post.author.id].update({ name: newName })
    );
  }

  return (
    <>
      ...
    </>
  );
}
```

[+] w10-1|1 year ago|reply
Is the datalog engine exposed? Is there any way to cache parsed queries?

Other datalog engines support recursive queries, which makes my life so much easier. Can I do that now with this? Or is it on the roadmap?

I have fairly large and overlapping rules/queries. Is there any way to store parsed queries and combine them?

Also, why the same name as the (Lutris) Enhydra Java database? The name is currently listed as a "failed company" from 1997-2000 (actual usage of the Java InstantDB was much longer):

   https://dbdb.io/db/instantdb
Given that it's implemented in Clojure and some other datalog engines are in Clojure, can you say anything about antecedents?

Some other Clojure datalog implementations, most open source:

- Datomic is the long-standing market leader

- XTDB (MPL): https://github.com/xtdb/xtdb

- Datascript (EPL): https://github.com/tonsky/datascript

- Datalevin (forking datascript, EPL): https://github.com/juji-io/datalevin

- datahike (forking datascript, EPL): https://github.com/replikativ/datahike

- Naga (EPL): https://github.com/quoll/naga

[+] stopachka|1 year ago|reply
> Is the datalog engine exposed? Is there any way to cache parsed queries?

We don't currently expose the datalog engine. You _technically_ could use it, but that part of the query system changes much more quickly.

Query results are also cached by default on the client.

> Other datalog engines support recursive queries, which makes my life so much easier. Can I do that now with this?

There's no shorthand for recursive queries yet, but it's on the roadmap. Today, if you had a data model like 'blocks have child blocks' and you wanted to go 3 levels deep, you could write:

```
useQuery({ blocks: { child: { child: {} } } });
```
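Because the query is just a plain object, a small hypothetical helper (not part of Instant's API) could generate that nesting for any depth:

```javascript
// Hypothetical helper (not part of Instant's API) that builds the nested
// query object for an arbitrary depth of child blocks.
function nested(levels) {
  let query = {};
  for (let i = 0; i < levels; i++) {
    query = { child: query };
  }
  return { blocks: query };
}

// nested(2) → { blocks: { child: { child: {} } } }
```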

> Also, why the same name as the (Lutris) Enhydra java database?

When we first thought of the idea for this project, our 'codename' was Instant. We didn't actually think we could get `instantdb.com` as a real domain name. But, after some sleuthing, we found that the email server for instantdb.com went to a gentleman in New Zealand. Seems like he nabbed it after Lutris shut down. We were able to buy the domain after.

> Given that it's implemented clojure and some other datalog engines are in clojure, can you say anything about antecedents?

Certainly. Datomic has had a huge influence on us. I first used it at a startup in 2014 (wit.ai) and enjoyed it.

Datalog and triples were critical for shipping Instant. The datalog syntax was simple enough that we could write a small query engine for the client. Triples were flexible enough to let us support relations. We wrote a bit about how helpful this was in this essay: https://www.instantdb.com/essays/next_firebase#another-appro...

We studied just about all the codebases you mentioned as we built Instant. Fun fact: datascript actually powers our in-memory cache on the server:

https://github.com/instantdb/instant/blob/main/server/src/in...

[+] apavlo|1 year ago|reply
This is from me. I didn't realize the connection to Lutris + Enhydra. It should be listed as an "Acquired Company" + "Abandoned Project". Wikipedia also says that it lasted until 2001. Usage is different from development/maintenance. I will update the entry for the old InstantDB and add an entry for this new InstantDB.

I think given that the original InstantDB died over two decades ago and is not widely known/remembered, reusing the name is fine.

[+] webdevladder|1 year ago|reply
As a potential dev user this looks really intriguing, hitting all of the main points I was looking for. I build apps in this space, and the open source alternatives I've evaluated are lacking specifically in "live queries" or don't use Postgres. The docs look great too.

In the docs[1]:

> Instant uses a declarative syntax for querying. It's like GraphQL without the configuration.

Would you be interested in elaborating more about this decision/design?

[1] https://www.instantdb.com/docs/instaql

[+] stopachka|1 year ago|reply
> Would you be interested in elaborating more about this decision/design?

Our initial intuition was to expose a language like SQL in the frontend.

We decided against this approach for 3 reasons:

1. Adding SQL would mean we would have to bundle SQLite, which would add a few hundred kilobytes to a bundle

2. SQL itself has a large spec, and would be difficult to make reactive

3. Worst of all: most of the time on the frontend you want to make tree-like queries (users -> posts -> comments). Writing queries like that is relatively difficult in SQL [1]

We wanted a language that felt intuitive on the frontend. We ended up gravitating towards something like GraphQL. But then, why not use GraphQL itself? Mainly because it's a separate syntax from JavaScript.

We wanted to use data structures instead of strings when writing apps. Data structures let you manipulate and build new queries.

For example, if you are making a table with filters, you could manipulate the query to include the filters. [2]

So we thought: what if you could express GraphQL as javascript objects?

```
{ users: { posts: { comments: {} } } }
```

This made frontend queries intuitive, and you can 'generate' these objects programmatically.
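For instance, folding UI filter state into a query is just object construction. (The `goals` namespace and filter fields below are made up for illustration.)

```javascript
// Queries as data structures: UI filter state folds directly into the
// query object. The `goals` namespace and filter fields are made up
// for illustration.
function buildQuery(filters) {
  const query = { goals: {} };
  if (Object.keys(filters).length > 0) {
    query.goals.$ = { where: filters };
  }
  return query;
}

// buildQuery({})               → { goals: {} }
// buildQuery({ owner: "joe" }) → { goals: { $: { where: { owner: "joe" } } } }
```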

For more info about this, we wrote an essay about the initial design journey here: https://www.instantdb.com/essays/next_firebase

[1] We wrote about the language choice here: https://www.instantdb.com/essays/next_firebase#language

[2] We programmatically generate queries for the Instant Explorer itself: https://github.com/instantdb/instant/blob/main/client/www/li...

[+] remolacha|1 year ago|reply
This is awesome. I know that a lot of people are looking for something like the Linear sync engine.

I appreciate that you're thinking about relational data and about permissions. I've seen a bunch of sync engine projects that don't have a good story for those things.

imo, the more that you can make the ORM feel like ActiveRecord, the better.

[+] stopachka|1 year ago|reply
Thank you. We admire ActiveRecord's DSL. I especially like their `validation` helpers, simple error reporting, and the `before` / `after` create hooks.
[+] TeeWEE|1 year ago|reply
Very nice!

However, for our use case we want total control over the server database, and we wanted to store data in normalized tables.

The solution we went for is streaming the mutation stream (basically the WAL) between client and server, and using table-stream duality to store the mutations in a table.

Permissions are handled on a table level.

When a client writes it sends a mutation to the servers. Or queues it locally if offline. Writes never conflict: we employ a CRDT “last write wins” policy.

Queries are represented by objects and need to be implemented both in Postgres as well as SQLite (if you want offline querying; often we don’t). A query we implement for small tables is: “SELECT *”.

Note that the result set being queried is updated realtime for any mutation coming in.

By default it doesn’t enforce relational constraints on the client side, so no rollbacks are needed.

However, you can set a table to different modes:

- online synchronous writes only: allows us to have relational constraints, and to validate the creation against other server-only business rules.

The tech stack is Kotlin on client (KMM) and server, websocket for streaming. Kafka for all mutations messaging. And vanilla Postgres for storing.

The nice thing is that we now have a Kafka topic that contains all mutations that we can listen to. For example to send emails or handle other use cases.

For every table you:

- create a serializable Kotlin data class

- create a Postgres table on the server

- implement reading and writing that data, and custom queries

Done: the apps have offline support for reading a single entity and upserts. Querying requires being online if not implemented on the client.
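The "queue it locally if offline" step described above generalizes well. A sketch of the idea (in JavaScript rather than Kotlin, with invented names):

```javascript
// Generic sketch of an offline mutation queue (invented names, JavaScript
// instead of Kotlin): send writes immediately when online, queue them
// otherwise, and replay the queue in order on reconnect.
function createMutationQueue(send) {
  const queue = [];
  let online = true;
  return {
    setOnline(value) {
      online = value;
      if (online) {
        while (queue.length) send(queue.shift()); // replay in order
      }
    },
    write(mutation) {
      if (online) send(mutation);
      else queue.push(mutation);
    },
    pending: () => queue.length,
  };
}
```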

[+] RoboTeddy|1 year ago|reply
(1) This is awesome. Feels like this wraps enough complexity that it won't just be a toy / for prototyping.

(2) When a schema is provided, is it fully enforced? Is there a way to do migrations?

Migrations are the only remaining challenge I can think of that could screw up this tool long-term unless a good approach gets baked in early. (They're critically important + very often done poorly or not supported.) When you're dealing with a lot of data in a production app, definitely want some means of making schema changes in a safe way. Also important for devex when working on a project with multiple people — need a way to sync migrations across developers.

Stuff like scalability — not worried about that — this tool seems fundamentally possible to scale and your team is smart :) Migrations though... hope you focus on it early if you haven't yet!

[+] stopachka|1 year ago|reply
Thank you for the kind words!

> When a schema is provided, is it fully enforced?

Right now the schema understands the difference between attributes and references. If you specify uniqueness constraints, they are also enforced. We haven’t added string / number types yet, but are actively working towards it. Once that’s supported, we can unlock sort-by queries as well!

> Migrations though... hope you focus on it early if you haven't yet!

We don’t have first class support for migrations yet, but are definitely thinking about it. Currently folks use the admin SDK to write migration scripts.

Question: do you have any favorite systems for migrations?

[+] taw1285|1 year ago|reply
This looks fantastic. I want to recommend this to my team. We are a small consulting team building apps for clients. I have a few questions to help me pitch my team and clients better:

1. The usual "vendor lock-in": is there a recommended escape hatch?

2. Any big clients on this yet? Or at what scale do you expect people to start rolling their own in-house product?
[+] projektfu|1 year ago|reply
It reminds me of the data half of Meteor, but it looks better thought-out and, obv., not based on Mongo. Nice work.
[+] stopachka|1 year ago|reply
Thank you. Meteor was definitely an inspiration.
[+] monomers|1 year ago|reply
I'm missing clarity about how I'd escape InstantDB when I need to, and how to make it part of a larger system.

Say I have an InstantDB app, can I stream events from the instant backend to somewhere else?

[+] stopachka|1 year ago|reply
> I'm missing clarity about how I'd escape InstantDB when I need to, and how to make it part of a larger system.

Instant is completely open source. We have no private repos, so in the event that you want to run the system yourself, you can fork it.

> how to make it part of a larger system.

If you have an existing app, right now I would suggest storing the parts that you want to be reactive on Instant.

We're working on a Postgres adapter. This would let you connect an existing database, and use Instant for the real-time sync. If you'd be interested in using this, reach out to us at [email protected]!

[+] IanCal|1 year ago|reply
I've just used this to start a bouldering app, so far has been extremely simple, great work.

I'm not sure about how things grow from here in terms of larger aggregates and more complex queries though so am slightly worried I'm painting myself into a corner. Do you have any guides or pointers here? Or key areas people shouldn't use your db?