mikelehen's comments

mikelehen | 8 years ago | on: Cloud Firestore: A New Document Database for Apps

Thanks for the feedback. I think you're right and we're interested in exploring what we can do to help people more in the future. One of the really nice things about Cloud Firestore is that documents are versioned with timestamps in such a way that we could definitely detect and expose conflicts and let you decide how to deal with them... It's mostly a matter of identifying the common use cases and then figuring out the right API to make them possible without going too far into the deep end of conflict resolution.
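To illustrate the idea of using document version timestamps to surface conflicts instead of silently overwriting, here's a hypothetical sketch (this is not the real Firestore implementation; the `Store`/`Doc` names and the version-check write are all illustrative assumptions):

```python
# Illustrative sketch: a write carries the version it was based on, and the
# store rejects it if the document changed in the meantime, exposing the
# conflict to the caller instead of applying last-write-wins.

from dataclasses import dataclass

@dataclass
class Doc:
    data: dict
    version: int  # server-assigned version timestamp

class Store:
    def __init__(self):
        self.docs = {}
        self.clock = 0

    def read(self, key):
        return self.docs[key]

    def write(self, key, data, base_version):
        """Apply the write only if the document is unchanged since it was
        read. Returns (applied, current_doc); a False result lets the app
        decide how to resolve the conflict."""
        current = self.docs.get(key)
        if current is not None and current.version != base_version:
            return False, current  # conflict detected and exposed
        self.clock += 1
        self.docs[key] = Doc(data, self.clock)
        return True, self.docs[key]

store = Store()
ok, doc = store.write("profile", {"name": "Ada"}, base_version=None)
# A second writer using a stale base version is rejected, not clobbered over:
stale_ok, _ = store.write("profile", {"name": "Bob"}, base_version=None)
```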

mikelehen | 8 years ago | on: Cloud Firestore: A New Document Database for Apps

Good point. Read-modify-write transactions are a good way to detect conflicts and get a chance to handle them, but unfortunately they only work while the client is online. If the client is offline, the transaction will fail, so they're not useful for general conflict resolution. This was an intentional decision because there's not a straightforward way to preserve the intent of the transaction across app restarts. But there may be options for adding some sort of conflict resolution strategy in the future that leverages the same underlying primitives that transactions use today.
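The optimistic read-modify-write loop described above can be sketched like this (a hypothetical simulation, not the Firestore SDK; `KV`, `commit`, and `run_transaction` are illustrative names):

```python
# Illustrative sketch: a transaction re-reads and retries if the data it
# read changed underneath it. This only makes sense while connected;
# offline, there's nothing current to read against or commit to.

class ConflictError(Exception):
    pass

class KV:
    def __init__(self):
        self.value = 0
        self.version = 0

    def read(self):
        return self.value, self.version

    def commit(self, new_value, read_version):
        if self.version != read_version:
            raise ConflictError  # someone wrote since our read
        self.value = new_value
        self.version += 1

def run_transaction(kv, update_fn, max_retries=5):
    """Optimistic read-modify-write with retry on conflict."""
    for _ in range(max_retries):
        value, version = kv.read()
        try:
            kv.commit(update_fn(value), version)
            return
        except ConflictError:
            continue  # re-read the latest value and try again
    raise ConflictError("gave up after retries")

kv = KV()
run_transaction(kv, lambda v: v + 1)
```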

mikelehen | 8 years ago | on: Cloud Firestore: A New Document Database for Apps

Good question, and to answer this well we should probably do a blog post or something. In the meantime you could dig into the code since the clients are all open source. :-)

But basically, sync is split into two halves: writes and listens. Clients store pending writes locally until they're flushed to the backend (which could be a long time if the app is running offline). While online, listen results are streamed from the backend and persisted in a local client cache so that the results will also be visible while offline (and any pending writes are merged into this offline view). When a client comes back online, it flushes its pending writes to the backend, where they're executed in a last-write-wins manner (see my answer above to ibdknox for more details on this). To resume listens, the client can use a "resume token" which allows the backend to quickly get the client back up-to-date without needing to re-send already retrieved results (there are some nuances here depending on how old the resume token is, etc.).
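The write half of that model can be sketched in a few lines (purely illustrative; a shared dict stands in for the backend, and the class and method names are assumptions, not the real client code):

```python
# Illustrative sketch: writes are queued locally, overlaid on the cached
# listen results to form the offline view, and flushed in order
# (last-write-wins) when connectivity returns.

class Client:
    def __init__(self, backend):
        self.backend = backend      # shared dict standing in for the server
        self.cache = dict(backend)  # last listen results seen
        self.pending = []           # queued (key, value) writes

    def set(self, key, value):
        self.pending.append((key, value))

    def local_view(self):
        # What the app sees: cached server state plus pending writes on top.
        view = dict(self.cache)
        for key, value in self.pending:
            view[key] = value
        return view

    def reconnect(self):
        # Flush pending writes in order; later writes win on the backend.
        for key, value in self.pending:
            self.backend[key] = value
        self.pending.clear()
        self.cache = dict(self.backend)  # refresh the local cache

backend = {"title": "hello"}
client = Client(backend)
client.set("title", "hi")  # offline edit: visible locally, not on backend yet
view = client.local_view()
client.reconnect()         # flush: backend now reflects the edit
```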

mikelehen | 8 years ago | on: Cloud Firestore: A New Document Database for Apps

This works similarly to the Realtime Database in that it's last-write-wins (where in the offline case, "last" means the last person to come back online and send their write to the backend). This model is very easy for developers to understand and directly solves many use cases, especially since we allow very granular writes, which reduces the risk of conflicts. But for more complex use cases, you can get clever and implement things like OT conflict resolution as a layer on top of last-write-wins, e.g. similar to how we implemented collaborative editing with www.firepad.io on the Realtime Database.
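A hypothetical sketch of why granular writes reduce conflicts under last-write-wins: if each field merges independently by timestamp, two offline editors touching different fields don't clobber each other (the data layout here is an illustrative assumption, not the actual wire format):

```python
# Illustrative sketch: last-write-wins applied per field rather than per
# document. Each field independently keeps the value with the newest
# timestamp, so only edits to the *same* field actually conflict.

def merge_lww(server_doc, writes):
    """server_doc: {field: (value, ts)}; writes: list of (field, value, ts)."""
    doc = dict(server_doc)
    for field, value, ts in writes:
        if field not in doc or ts >= doc[field][1]:
            doc[field] = (value, ts)  # newer write wins for this field
    return doc

doc = {"title": ("draft", 1), "body": ("hello", 1)}
# Two clients reconnect: one edited the title at t=2, the other the body at t=3.
merged = merge_lww(doc, [("title", "final", 2), ("body", "world", 3)])
```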

PS: Hi Chris! :-)

mikelehen | 12 years ago | on: Introducing Firebase Hosting

Great question. This is a very real pain point with dynamic content in today's world of bots / crawlers. Many sites right now are completely or partially invisible to crawlers.

As you point out, pre-rendering content is the prescribed way to solve this and there are some existing solutions (prerender.io, brombone, etc.) that are a good start, but this is still a confusing / hard problem for people to solve when they'd like to focus on building their app instead.

So we're keenly looking into how we can best integrate with these sorts of services or provide our own solution as part of our hosting offering. Stay tuned!

mikelehen | 12 years ago | on: Introducing Firebase Hosting

This is a big question, and I'm biased by working at Firebase, so I'd welcome somebody from the community chiming in with their experiences. But one key differentiator worth mentioning is the realtime aspect of Firebase.

We believe that modern apps should be client-side apps that update in realtime as changes happen, without having to refresh the page or continually poll the server for updates. So this is baked into the core of Firebase. All of our features and APIs (and our new Hosting service!) were designed around this concept of how modern apps should be built.

mikelehen | 12 years ago | on: Introducing Firebase Hosting

In short, both. :-) This was a commonly-voiced pain point for our existing customers and fits very well with our vision to make Firebase the best platform for building modern apps.

But when we do something, we like to do it "right" and so we also think Firebase Hosting comes with a very compelling feature set (Simple Deploy/Rollback, Automatically-provisioned SSL, and a global CDN). So we're optimistic it'll also attract new developers to the Firebase platform.

mikelehen | 12 years ago | on: Introducing Firebase Hosting

Thanks for the feedback! We'd love to know what benchmarks would make you feel more comfortable. Internally, we have a lot of monitoring and diagnostics to make sure everything is running optimally. Downtime like yesterday's is rare and will become even rarer as we continue to advance our infrastructure.

In general, I agree with your point though. That's why I'd recommend using 3rd-party monitoring / measurement, even if we did expose more benchmarks for you. It's important to understand your external dependencies and verify they meet the service level you require.

mikelehen | 12 years ago | on: Introducing Firebase Hosting

Yes. We think our hosting service and realtime backend complement each other nicely for building modern web apps, but you certainly don't have to use both. :-)

mikelehen | 12 years ago | on: Introducing Firebase Hosting

For context, 2000 concurrent connections would be quite a large site. If you're hitting 2000 concurrents, $500 probably wouldn't be an issue for you. It's also worth noting that Firebase employs burstable billing at the 95th percentile, so only sustained overuse within the monthly billing period will result in a surcharge.

As for why we charge for connections in general, they do tend to be the most expensive thing to scale. They're also useful as a proxy for how "big" a site is (in terms of users). They're kind of the analog to "page views" in today's world of single-page apps that update in real-time.
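To make the 95th-percentile ("burstable") billing mentioned above concrete, here's a hypothetical sketch of how such a number is typically computed (the sampling scheme and function name are illustrative assumptions, not Firebase's actual billing code):

```python
# Illustrative sketch: sort the billing period's concurrency samples, drop
# the top 5%, and bill on the highest remaining sample. Short spikes above
# the plan limit are ignored; only sustained overuse counts.

def billable_concurrents(samples):
    ordered = sorted(samples)
    index = int(len(ordered) * 0.95) - 1  # last sample within the 95th pct
    return ordered[max(index, 0)]

# 100 samples: mostly 1800 concurrents, with five brief spikes to 2500.
spiky = [1800] * 95 + [2500] * 5
# Sustained overuse, by contrast, shows up in the billable figure:
sustained = [2500] * 100
```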

mikelehen | 12 years ago | on: Introducing Firebase Hosting

Thanks for the feedback! Now that we've got the core deploy / rollback tooling in place, we're definitely looking for ways to plug into other common workflows (git, Dropbox, etc.). Stay tuned!

mikelehen | 12 years ago | on: Introducing Firebase Hosting

Great suggestion! The difference between being on a CDN and not is really night and day. Every serious site deserves to be on a CDN. Your users will thank you for the faster page loads. :-)

mikelehen | 12 years ago | on: Datastore API (beta)

Firebase dev here. :-) Firebase does handle offline syncing / conflict resolution.

If you have a Firebase app open and you lose network connectivity, the app still continues to work fine and any modifications you make will be synced back to Firebase when you regain connectivity.

What we don't (quite yet) do is handle the case where you make changes, kill the app (without regaining network connectivity), restart the app, and then get network connectivity, but that's coming soon.

As for other differences, Dropbox Datastore seems to be tightly tied to Dropbox, so the end user for your app must have a Dropbox account (and if they want to collaborate with other users, those users must also have Dropbox accounts, etc.). So it's really for building apps on top of Dropbox.

Firebase and Parse are just generic backends, with no ties to other services, so your end users don't need anything to use your app.

As for differentiating Parse and Firebase, Firebase deals with data in real-time, pushing updates to apps as soon as data changes. Parse is a traditional request/response model, where your app has to explicitly "refresh" to get new data.

mikelehen | 13 years ago | on: Announcing Firepad — Our Open Source Collaborative Text Editor

[I think HN is throttling us; I had to wait a while before a reply button appeared. Feel free to email me (michael at firebase) if you want to continue the conversation.]

If the standard mitigation strategies (adding authentication, banning malicious users, etc.) aren't enough, and you're worried about people breaking the synchronization, I agree you'd need to move the checkpointing logic to node.js server code. Sounds like a good example app for me to write when I've caught up on sleep and have some free time. :-)

We're also looking to do a security v2 in the future to expand on our existing security rule capabilities and we've discussed going the "real code" route or else allowing tighter integration with your own server-side node.js/firebase code.

mikelehen | 13 years ago | on: Announcing Firepad — Our Open Source Collaborative Text Editor

1) The client ignores invalid history items (unless there's a bug). So while you can pollute the Firebase data if you desire, it shouldn't affect the behavior of the app in any way. (That is, other than the checkpointing issue you brought up, you can't corrupt the history.)
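The "ignore invalid history items" behavior can be sketched like this (a toy insert-only op format, not Firepad's actual operation encoding; `is_valid` and `replay` are illustrative names):

```python
# Illustrative sketch: the client validates each history item before
# applying it, so polluted entries in the shared data are skipped rather
# than corrupting the replayed document.

def is_valid(op, doc_len):
    # Toy op format: (position, text) inserts `text` at `position`.
    return (isinstance(op, tuple) and len(op) == 2
            and isinstance(op[0], int) and isinstance(op[1], str)
            and 0 <= op[0] <= doc_len)

def replay(history):
    doc = ""
    for op in history:
        if not is_valid(op, len(doc)):
            continue  # pollution: ignore it, don't corrupt the document
        pos, text = op
        doc = doc[:pos] + text + doc[pos:]
    return doc

# Two valid ops with junk interleaved: the junk is silently ignored.
history = [(0, "hello"), "garbage", (99, "x"), (5, " world")]
result = replay(history)
```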

That said, Firebase is certainly pushing the envelope in terms of what you would normally do with client-only code. =] And with that comes some challenges. In some ways Firebase is more like a peer-to-peer system than a traditional client-server system (since the Firebase server isn't doing complete data validation / processing). This sometimes affects the way you write code (doing extra validation / sanitization on the client-side for instance), but I think the advantages that come with Firebase outweigh that by far.

mikelehen | 13 years ago | on: Announcing Firepad — Our Open Source Collaborative Text Editor

Hey Saurik,

Thanks for the thorough and correct analysis as usual. :-)

The key things I would point out are that:

1) The checkpointing is an optimization. You could either remove it (which will hurt initial load time) or delegate it to trusted server code (which will be very lightweight; you could run hundreds of rooms off of a tiny EC2 instance or whatever).
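The checkpointing optimization amounts to history compaction, which could be sketched roughly like this (the toy append-only ops and the `checkpoint` helper are illustrative assumptions, not Firepad's code):

```python
# Illustrative sketch: periodically fold the old prefix of the revision
# history into a snapshot, so a newly joining client loads the snapshot
# plus a short tail of recent ops instead of replaying everything from
# revision zero (which is what hurts initial load time).

def checkpoint(snapshot, history, apply_op, keep_tail=10):
    """Fold all but the last `keep_tail` ops into the snapshot."""
    cut = max(len(history) - keep_tail, 0)
    for op in history[:cut]:
        snapshot = apply_op(snapshot, op)
    return snapshot, history[cut:]

# Toy ops: each op appends one character to the document.
apply_op = lambda doc, ch: doc + ch
snap, tail = checkpoint("", list("abcdefghij" * 3), apply_op, keep_tail=5)
```

A trusted process (e.g. a small server job) would run this periodically so untrusted clients never write the checkpoint themselves.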

2) In general, the whole point of collaborative editing is that you trust your collaborators. If they're malicious, they can already cause mayhem on your editing experience with constant edits, obscene content, etc.
