Show HN: InstantDB – A Modern Firebase
1145 points | nezaj | 1 year ago | github.com
Building modern apps these days involves a lot of schleps. For a basic CRUD app you need to spin up servers, wire up endpoints, integrate auth, add permissions, and then marshal data from the backend to the frontend and back again. If you want to deliver a buttery smooth user experience, you’ll need to add optimistic updates and rollbacks. We do these steps over and over for every feature we build, which can make it difficult to build delightful software. Could it be better?
We were senior and staff engineers at Facebook and Airbnb and had been thinking about this problem for years. In 2021, Stopa wrote an essay talking about how these schleps are actually database problems in disguise [1]. In 2022, Stopa wrote another essay sketching out a solution with a Firebase-like database with support for relations [2]. In the last two years we got the backing of James Tamplin (CEO of Firebase), became a team of 5 engineers, pushed almost ~2k commits, and today became open source.
Making a chat app in Instant is as simple as
function Chat() {
  // 1. Read
  const { isLoading, error, data } = useQuery({
    messages: {},
  });

  // 2. Write
  const addMessage = (message) => {
    transact(tx.messages[id()].update(message));
  };

  // 3. Render!
  return <UI data={data} onAdd={addMessage} />;
}
Instant gives you a database you can subscribe to directly in the browser. You write relational queries in the shape of the data you want, and we handle all the data fetching, permission checking, and offline caching. When you write transactions, optimistic updates and rollbacks are handled for you as well.
Under the hood we save data to Postgres as triples and wrote a datalog engine for fetching data [3]. We don't expect you to write datalog queries, so we wrote a GraphQL-like query language that doesn't require any build step.
Taking inspiration from Asana's WorldStore and Figma's LiveGraph, we tail Postgres' WAL to detect novelty and use last-write-wins semantics to handle conflicts [4][5]. We also handle websocket connections and persist data to IndexedDB on web and AsyncStorage on React Native, giving you multiplayer and offline mode for free.
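Last-write-wins over triples can be sketched roughly like this (a simplified illustration only, not Instant's implementation; the `Triple` shape and the timestamp field are assumptions):

```typescript
// A triple plus a logical timestamp, as a simplified stand-in for a WAL entry.
type Triple = { entity: string; attr: string; value: unknown; ts: number };

// Merge incoming triples into a store keyed by (entity, attr),
// keeping whichever write carries the later timestamp.
function mergeLWW(store: Map<string, Triple>, incoming: Triple[]): Map<string, Triple> {
  for (const t of incoming) {
    const key = `${t.entity}/${t.attr}`;
    const existing = store.get(key);
    if (!existing || t.ts >= existing.ts) {
      store.set(key, t);
    }
  }
  return store;
}
```

In practice the ordering would come from the WAL or a server clock; ties here break in favor of the incoming write.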
This is the kind of infrastructure Linear uses to power their sync and build better features faster [6]. Instant gives you this infrastructure so you can focus on what's important: building a great UX for your users, and doing it quickly. We have auth, permissions, and a dashboard with a suite of tools for you to explore and manage your data. We also support ephemeral capabilities like presence (e.g. sharing cursors) and broadcast (e.g. live reactions) [7][8].
We have a free hosted solution where we don't pause projects, we don't limit the number of active applications, and we have no restrictions on commercial use. We can do this because our architecture doesn't require spinning up separate servers for each app. When you're ready to grow, we have paid plans that scale with you. And of course you can self-host both the backend and the dashboard tools on your own.
Give us a spin today at https://instantdb.com/tutorial and see our code at https://github.com/instantdb/instant
We love feedback :)
[1] https://www.instantdb.com/essays/db_browser
[2] https://www.instantdb.com/essays/next_firebase
[3] https://www.instantdb.com/essays/datalogjs
[4] https://asana.com/inside-asana/worldstore-distributed-cachin...
[5] https://www.figma.com/blog/how-figmas-multiplayer-technology...
[6] https://www.youtube.com/live/WxK11RsLqp4?t=2175s
[+] [-] jamest|1 year ago|reply
Good luck, Joe, Stopa and team!
[+] [-] ashconnor|1 year ago|reply
It was jarring to find out that indexes are required for every combination of filters your app applies, but then you quickly realize that Firebase solves a particular problem, and you're attempting to shoehorn it into a problem space better solved by something like Supabase.
It's not too dissimilar to DynamoDB vs RDB.
[+] [-] 999900000999|1 year ago|reply
It's really the definition of a managed database/datastore.
Do you see InstantDB as a drop-in replacement?
To be honest I don't want to have to worry about my backend. I want a place to effectively drop JSON docs and retract them later.
This is more than enough for a hobbyist project, though I imagine at scale things might not work as well.
[+] [-] 650REDHAIR|1 year ago|reply
Made me feel quite old that Firebase is no longer "modern" though...
[+] [-] stopachka|1 year ago|reply
I updated the example to include the imports:
```
import { init, tx, id } from "@instantdb/react";

const db = init({ appId: process.env.NEXT_PUBLIC_APP_ID });

function Chat() {
  // 1. Read
  const { isLoading, error, data } = db.useQuery({ messages: {} });

  // 2. Write
  const addMessage = (message) => {
    db.transact(tx.messages[id()].update(message));
  };

  // 3. Render!
  return <UI data={data} onAdd={addMessage} />;
}
```
What do you think?
[+] [-] codersfocus|1 year ago|reply
Unfortunately this area is still immature and there aren't really great options; PowerSync was the least bad. I'll probably pair it with Supabase for the backend.
[+] [-] nezaj|1 year ago|reply
For what it's worth, we're built on top of Aurora and support relations, so evolution should be much easier!
[1] https://mdp.github.io/2017/10/29/prototyping-in-the-age-of-n...
[+] [-] blixt|1 year ago|reply
Does Instant have a way to merge many frequent updates into fewer Postgres transactions while maintaining high frequency for multiplayer?
Regardless, this is super cool for so many other things where you're modifying more regular app data. Apps often have bugs when attempting to synchronize data across multiple endpoints, and data tends to drift over time when mutation logic is spread across the code base. Just being able to treat the data as one big object usually helps, even if it seems to go against some principles (like microservices, but don't get me started on why that fails more often than not, given the discipline it requires).
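As an illustration of the kind of merging the question is asking about, here's a generic coalescing buffer that collapses many rapid per-key writes into one batched flush (a sketch; the class name and API are hypothetical, not Instant's):

```typescript
// Coalesce frequent per-entity updates so only the latest value per key
// survives, turning many small writes into one batched transaction.
class UpdateBuffer<T> {
  private pending = new Map<string, T>();

  // Record an update; later updates to the same key overwrite earlier ones.
  push(key: string, value: T): void {
    this.pending.set(key, value);
  }

  // Drain everything accumulated so far as one batch (e.g. one DB transaction).
  flush(): Array<[string, T]> {
    const batch = [...this.pending.entries()];
    this.pending.clear();
    return batch;
  }
}
```

A sync layer could call `push` on every keystroke or cursor move and `flush` on a timer, so one Postgres transaction absorbs many client-side updates while broadcasts to other clients stay high-frequency.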
[+] [-] lewisl9029|1 year ago|reply
Apparently I signed up for Instant previously but completely forgot about it. Only realized I had an account when I went to the dashboard to find myself still logged in. I dug up the sign up email and apparently I signed up back in 2022, so some kind of default invalidation period on your auth tokens would definitely make me a bit more comfortable.
Regardless, I'm still as excited about the idea of a client-side, offline-first, realtime syncing db as ever, especially now that the space has really been picking up steam with new entrants showing up every few weeks.
One thing I was curious about is how well the system currently supports users with multiple emails? GitHub popularized this pattern, and these days it's pretty much table stakes in the dev tools space to be able to sign in once and use the same account across personal accounts and orgs associated with different emails.
Looking at the docs I'm getting the sense that there might be an assumption of 1 email per user in the user model currently. Is that correct? If so, any plans to evolve the model to become more flexible?
[+] [-] stopachka|1 year ago|reply
> One thing I was curious about is how well the system currently supports users with multiple emails? GitHub popularized this pattern, and these days it's pretty much table stakes in the dev tools space to be able to sign in once and use the same account across personal accounts and orgs associated with different emails
Right now there is an assumption of 1 `user` object per email. You could create an entity like `workspace` inside Instant, and tie multiple users together this way for now.
However, making the `user` support multiple identities, and creating recipes for common data models (like workspaces) is on the near-term roadmap.
[+] [-] antidnan|1 year ago|reply
Congrats team!
[+] [-] breatheoften|1 year ago|reply
One of the things I find quite nice about Firebase is the quite powerful separation between the logic of data retrieval/update and the enforcement of access policy. If you understand it, you can build the prototype on a happy path with barely any authorization enforcement, then add it later and have quite complete confidence that you aren't leaking data between users or allowing them to change something they shouldn't be able to. That said, you do need to keep the way this system works in mind as you build, and I have found that developers often don't really grasp the shape of these mechanisms at first.
From what I can tell, the Instant system is different in that the permission logic is evaluated on the results of queries, whereas Firebase enforces whether the query is safe to run before it is even executed...
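The contrast might be sketched like this (a toy illustration; both functions and the owner-based rule are made up for the example):

```typescript
type Doc = { id: string; ownerId: string; body: string };

// Result-filtering style (as described for Instant): run the query,
// then keep only the rows the viewer is allowed to see.
function filterResults(rows: Doc[], viewerId: string): Doc[] {
  return rows.filter((d) => d.ownerId === viewerId);
}

// Pre-validation style (as described for Firebase rules): reject the
// query up front unless it is provably scoped to the viewer.
function validateQuery(query: { ownerId?: string }, viewerId: string): boolean {
  return query.ownerId === viewerId;
}
```

The result-filtering style can express rules that depend on the data itself, while the pre-validation style can reject a disallowed query without touching the database at all.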
[+] [-] the_duke|1 year ago|reply
Postgres isn't terrible for that use case, but it's not brilliant either.
How has your experience been in that regard?
[+] [-] remolacha|1 year ago|reply
In ActiveRecord, I can do this:
```rb
post = Post.find_by(author: "John Smith")
post.author.email = "[email protected]"
post.save
```
In React/Vue/Solid, I want to express things like this:
```jsx
function BlogPostDetailComponent(...) {
  // ...
}
```
I don't want to think about joining up-front, and I want the ORM to give me an object-graph-like API, not a SQL-like API.
In ActiveRecord, I can fall back to SQL or build my ORM query with the join specified to avoid N+1s, but in most cases I can just act as if my whole object graph is in memory, which is the ideal DX.
[+] [-] stopachka|1 year ago|reply
Here are some parallels to your example:
A. ActiveRecord:
```
post = Post.find_by(author: "John Smith")
post.author.email = "[email protected]"
post.save
```
B. Instant:
```
db.transact(
  tx.users[lookup('author', 'John Smith')].update({ email: '[email protected]' }),
);
```
> In React/Vue/Solid, I want to express things like this:
Here's what the React/Vue code would look like:
```
function BlogPostDetailComponent(props) {
  // ...
}
```
[+] [-] gr4vityWall|1 year ago|reply
Reference: https://react-tutorial.meteor.com/simple-todos/02-collection...
[+] [-] w10-1|1 year ago|reply
Other datalog engines support recursive queries, which makes my life so much easier. Can I do that now with this? Or is it on the roadmap?
I have fairly large and overlapping rules/queries. Is there any way to store parsed queries and combine them?
Also, why the same name as the (Lutris) Enhydra Java database? Your domain is currently listed as a "failed company" from 1997-2000 (actual usage of the Java InstantDB was much longer).
Given that it's implemented in Clojure and some other datalog engines are in Clojure, can you say anything about antecedents? Some other Clojure datalog implementations, most open source:
- Datomic is the long-standing market leader
- XTDB (MPL): https://github.com/xtdb/xtdb
- Datascript (EPL): https://github.com/tonsky/datascript
- Datalevin (forking datascript, EPL): https://github.com/juji-io/datalevin
- datahike (forking datascript, EPL): https://github.com/replikativ/datahike
- Naga (EPL): https://github.com/quoll/naga
[+] [-] stopachka|1 year ago|reply
We don't currently expose the datalog engine. You _technically_ could use it, but that part of the query system changes much more quickly.
Query results are also cached by default on the client.
> Other datalog engines support recursive queries, which makes my life so much easier. Can I do that now with this?
There's no shorthand for recursive queries yet, but it's on the roadmap. Today, if you had a data model like 'blocks have child blocks' and you wanted to get 3 levels deep, you could write:
```
useQuery({ blocks: { child: { child: {} } } });
```
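Since the query is a plain object, one workaround is to generate the nesting programmatically (a small helper sketch, not part of Instant's API):

```typescript
// Build a nested { child: { child: { ... } } } query object to a given depth,
// since the query is just a plain JavaScript object.
function childrenToDepth(depth: number): Record<string, unknown> {
  let query: Record<string, unknown> = {};
  for (let i = 0; i < depth; i++) {
    query = { child: query };
  }
  return query;
}
```

`useQuery({ blocks: childrenToDepth(2) })` reproduces the query above, and bumping the argument goes deeper without hand-writing the nesting.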
> Also, why the same name as the (Lutris) Enhydra java database?
When we first thought of the idea for this project, our 'codename' was Instant. We didn't actually think we could get `instantdb.com` as a real domain name. But after some sleuthing, we found that the email server for instantdb.com went to a gentleman in New Zealand. Seems like he nabbed it after Lutris shut down. We were able to buy the domain from him after.
> Given that it's implemented clojure and some other datalog engines are in clojure, can you say anything about antecedents?
Certainly. Datomic has had a huge influence on us. I first used it at a startup in 2014 (wit.ai) and enjoyed it.
Datalog and triples were critical for shipping Instant. The datalog syntax was simple enough that we could write a small query engine for the client. Triples were flexible enough to let us support relations. We wrote a bit about how helpful this was in this essay: https://www.instantdb.com/essays/next_firebase#another-appro...
We studied just about all the codebases you mentioned as we built Instant. Fun fact: datascript actually powers our in-memory cache on the server:
https://github.com/instantdb/instant/blob/main/server/src/in...
[+] [-] apavlo|1 year ago|reply
I think given that the original InstantDB died over two decades ago and is not widely known/remembered, reusing the name is fine.
[+] [-] webdevladder|1 year ago|reply
In the docs[1]:
> Instant uses a declarative syntax for querying. It's like GraphQL without the configuration.
Would you be interested in elaborating more about this decision/design?
[1] https://www.instantdb.com/docs/instaql
[+] [-] stopachka|1 year ago|reply
Our initial intuition was to expose a language like SQL in the frontend.
We decided against this approach for 3 reasons:
1. Adding SQL would mean we would have to bundle SQLite, which would add a few hundred kilobytes to the bundle
2. SQL itself has a large spec and would be difficult to make reactive
3. Worst of all: most of the time on the frontend you want to make tree-like queries (users -> posts -> comments), and writing queries like that is relatively difficult in SQL [1]
We wanted a language that felt intuitive on the frontend. We ended up gravitating towards something like GraphQL. But then, why not use GraphQL itself? Mainly because it's a separate syntax from JavaScript.
We wanted to use data structures instead of strings when writing apps. Data structures let you manipulate and build new queries.
For example, if you are making a table with filters, you could manipulate the query to include the filters. [2]
So we thought: what if you could express GraphQL as javascript objects?
```
{ users: { posts: { comments: {} } } }
```
This made frontend queries intuitive, and you can 'generate' these objects programmatically.
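For example, a table with optional filters could build its query as data (a sketch; the `$`/`where` clause follows the filter syntax described in the InstaQL docs, and the entity names are made up):

```typescript
// Build a users -> posts -> comments query as a plain object, optionally
// attaching filters on posts via a $/where clause.
function buildPostsQuery(filters: Record<string, unknown> = {}) {
  const posts: Record<string, unknown> = { comments: {} };
  if (Object.keys(filters).length > 0) {
    posts.$ = { where: filters };
  }
  return { users: { posts } };
}
```

Because the query is just an object, adding a filter is an ordinary property assignment rather than string templating.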
For more info about this, we wrote an essay about the initial design journey here: https://www.instantdb.com/essays/next_firebase
[1] We wrote the language choice here: https://www.instantdb.com/essays/next_firebase#language
[2] We programmatically generate queries for the Instant Explorer itself: https://github.com/instantdb/instant/blob/main/client/www/li...
[+] [-] remolacha|1 year ago|reply
I appreciate that you're thinking about relational data and about permissions. I've seen a bunch of sync engine projects that don't have a good story for those things.
imo, the more that you can make the ORM feel like ActiveRecord, the better.
[+] [-] TeeWEE|1 year ago|reply
However for our use case we want total control over the server database. And wanted to store it in normalized tables.
The solution we went for is streaming the mutation stream (basically the WAL) between client and server, and using table-stream duality to store it in a table.
Permissions are handled on a table level.
When a client writes, it sends a mutation to the server, or queues it locally if offline. Writes never conflict: we employ a CRDT-style "last write wins" policy.
Queries are represented by objects and need to be implemented both in Postgres as well as SQLite (if you want offline querying; often we don't). A query we implement for small tables is: "SELECT *".
Note that the result set being queried is updated realtime for any mutation coming in.
It’s by default not enforcing relational constraints on the clientside so no rollbacks needed.
However, you can set a table in different modes:
- online synchronous writes only: allows us to have relational constraints, and to validate creation against other server-only business rules.
The tech stack is Kotlin on the client (KMM) and server, websockets for streaming, Kafka for all mutation messaging, and vanilla Postgres for storage.
The nice thing is that we now have a Kafka topic that contains all mutations that we can listen to. For example to send emails or handle other use cases.
For every table you:
- create a serializable Kotlin data class
- create a Postgres table on the server
- implement reading and writing that data, and custom queries
Done: the apps have offline support for reading a single entity and upserts. Queries require being online if not implemented on the client.
[+] [-] RoboTeddy|1 year ago|reply
(2) When a schema is provided, is it fully enforced? Is there a way to do migrations?
Migrations are the only remaining challenge I can think of that could screw up this tool long-term unless a good approach gets baked in early. (They're critically important + very often done poorly or not supported.) When you're dealing with a lot of data in a production app, definitely want some means of making schema changes in a safe way. Also important for devex when working on a project with multiple people — need a way to sync migrations across developers.
Stuff like scalability — not worried about that — this tool seems fundamentally possible to scale and your team is smart :) Migrations though... hope you focus on it early if you haven't yet!
[+] [-] stopachka|1 year ago|reply
> When a schema is provided, is it fully enforced?
Right now the schema understands the difference between attributes and references. If you specify uniqueness constraints, they are also enforced. We haven't added support for string / number types yet, but are actively working towards it. Once that's supported, we can unlock sort-by queries as well!
> Migrations though... hope you focus on it early if you haven't yet!
We don’t have first class support for migrations yet, but are definitely thinking about it. Currently folks use the admin SDK to write migration scripts.
Question: do you have any favorite systems for migrations?
[+] [-] monomers|1 year ago|reply
Say I have an InstantDB app, can I stream events from the instant backend to somewhere else?
[+] [-] stopachka|1 year ago|reply
Instant is completely open source. We have no private repos, so in the event that you want to run the system yourself, you can fork it.
> how to make it part of a larger system.
If you have an existing app, right now I would suggest storing the parts that you want to be reactive on Instant.
We're working on a Postgres adapter. This would let you connect an existing database, and use Instant for the real-time sync. If you'd be interested in using this, reach out to us at [email protected]!
[+] [-] IanCal|1 year ago|reply
I'm not sure how things grow from here in terms of larger aggregates and more complex queries, though, so I'm slightly worried I'm painting myself into a corner. Do you have any guides or pointers here? Or key areas where people shouldn't use your db?