
An Unlikely Database Migration

364 points | ifcologne | 5 years ago | tailscale.com

178 comments


judofyr|5 years ago

Interesting choice of technology, but you didn't completely convince me why this is better than just using SQLite or PostgreSQL with a lagging replica. (You could probably start with either one and easily migrate to the other one if needed.)

In particular you've designed a very complicated system: Operationally you need an etcd cluster and a tailetc cluster. Code-wise you now have to maintain your own transaction-aware caching layer on top of etcd (https://github.com/tailscale/tailetc/blob/main/tailetc.go). That's quite a brave task considering how many databases fail at Jepsen. Have you tried running Jepsen tests on tailetc yourself? You also mentioned a secondary index system which I assume is built on top of tailetc again? How does that interact with tailetc?

Considering that high-availability was not a requirement and that the main problem with the previous solution was performance ("writes went from nearly a second (sometimes worse!) to milliseconds") it looks like a simple server with SQLite + some indexes could have gotten you quite far.

We don't really get the full overview from a short blog post like this though so maybe it turns out to be a great solution for you. The code quality itself looks great and it seems that you have thought about all of the hard problems.

bradfitz|5 years ago

> and a tailetc cluster

What do you mean by this part? tailetc is a library used by the client of etcd.

Running an etcd cluster is much easier than running an HA PostgreSQL or MySQL config. (I previously made LiveJournal and ran its massively sharded HA MySQL setup)

endymi0n|5 years ago

This is about spot on. I do get the part about testability, but with a simple Key/Value use case like this, BoltDB or Pebble might have fit extremely well into the Native Golang paradigm as a backing store for the in-memory maps while not needing nearly as much custom code.

Plus maybe replacing the sync.Mutex with RWMutexes for optimum read performance in a seldom-write use case.

On the other hand again, I feel a bit weird criticizing Brad Fitzpatrick ;-) — so there might be other things at play I don't have a clue about...

segmondy|5 years ago

If you want a distributed key/value data store, you want to use what's already out there and vetted. It used to be ZooKeeper, but etcd is much simpler; it's what Kubernetes uses, and it has been a big success and proved itself out there in the field. Definitely easier than a full SQL database, which is overkill and much harder to replicate, especially if you want to have a cluster of >= 3. Again, key is "distributed" and that immediately rules out SQLite.

strken|5 years ago

I was initially baffled by the choice of technology too. Part of it is that etcd is apparently much faster at handling writes, and offers more flexibility with regards to consistency, than I remember. Part of it might be that I don't understand the durability guarantees they're after, the gotchas they can avoid (e.g. transactions), or their overall architecture.

jeff-davis|5 years ago

This post illustrates the difference between persistence and a database.

If you are expecting to simply persist one instance of one application's state across different runs and failures, a database can be frustrating.

But if you want to manage your data across different versions of an app, different apps accessing the same data, or concurrent access, then a database will save you a lot of headaches.

The trick is knowing which one you want. Persistence is tempting, so a lot of people fool themselves into going that direction, and it can be pretty painful.

I like to say that rollback is the killer feature of SQL. A single request fails (e.g. unique violation), and the overall app keeps going, handling other requests. Your application code can be pretty bad, and you can still have a good service. That's why PHP was awesome despite being bad -- SQL made it good (except for all the security pitfalls of PHP, which the DB couldn't help with).

perlgeek|5 years ago

I'd say the universal query capability is the killer feature of SQL.

In the OP they spent two weeks designing and implementing transaction-safe indexes -- something that all major SQL RDBMS (and even many NoSQL solutions) have out of the box.

mrkstu|5 years ago

Maybe also part of the success of Rails? Similarly an easy to engineer veneer atop a database.

psankar|5 years ago

This comment helped me understand the problem and the solution better (along with a few followup tweets by the tailscale engineers). Thanks.

petters|5 years ago

> (Attempts to avoid this with ORMs usually replace an annoying amount of typing with an annoying amount of magic and loss of efficiency.)

Loss of efficiency? Come on, you were using a file before! :-)

Article makes me glad I'm using Django. Just set up a managed Postgres instance in AWS and be done with it. Sqlite for testing locally. Just works and very little engineering time spent on persistent storage.

Note: I do realize Brad is a very, very good engineer.

lmilcin|5 years ago

Efficiency can be measured in many different ways.

Having no dedicated database server or even database instance, being able to persist data to disk with almost no additional memory required, marginal amount of CPU and no heavy application dependencies can be considered very efficient depending on context.

Of course, if you start doing this on every change, many times a second, then it stops being efficient. But then there are ways to fix it without invoking Oracle or MongoDB or other beast.

When I worked on algorithmic trading framework the persistence was just two pointers in memory pointing to end of persisted and end of published region. Occasionally those pointers would be sent over to a dedicated CPU core that would be actually the only core talking to the operating system, and it would just append that data to a file and publish completion so that the other core can update the pointers.

The application would never read the file (the latency even to SSD is such that it could just as well be on the Moon) and the file was used to be able to retrace trading session and to bring up the application from event log in case it failed mid session.

As the data was nicely placed in order in the file, the entire process of reading that "database" would take no more than 1.5s, after which the application would be ready to synchronize with the trading session again.

robben1234|5 years ago

>Article makes me glad I'm using Django.

This was my main thought throughout reading it. So many things to consider and difficult issues to solve; it seems they face a self-made database hell. Makes me appreciate the simplicity and stable performance of the Django ORM + Postgres.

0xbadcafebee|5 years ago

I am missing a lot of context from this post because this just sounds nonsensical.

First they're conflating storage with transport. SQL databases are a storage and query system. They're intended to be slow, but efficient, like a bodybuilder. You don't ask a bodybuilder to run the 500m dash.

Second, they had a 150MB dataset, and they moved to... a distributed decentralized key-value store? They went from the simplest thing imaginable to the most complicated thing imaginable. I guess SQL is just complex in a direct way, and etcd is complex in an indirect way. But the end results of both are drastically different. And doesn't etcd have a whole lot of functional limitations SQL databases don't? Not to mention its dependence on gRPC makes it a PITA to work with REST APIs. Consul has a much better general-purpose design, imo.

And more of it doesn't make sense. Is this a backend component? Client side, server side? Why was it using JSON if resources mattered (you coulda saved like 20% of that 150MB with something less bloated). Why a single process? Why global locks? Like, I really don't understand the implementation at all. It seems like they threw away a common-sense solution to make a weird toy.

bradfitz|5 years ago

I'd answer questions but I'm not sure where to start.

I think we're pretty well aware of the pros and cons of all the options and between the team members designing this we have pretty good experience with all of them. But it's entirely possible we didn't communicate the design constraints well enough. (More: https://news.ycombinator.com/item?id=25769320)

Our data's tiny. We don't want to do anything to access it. It's nice just having it in memory always.

Architecturally, see https://news.ycombinator.com/item?id=25768146

JSON vs compressed JSON isn't the point: see https://news.ycombinator.com/item?id=25768771 and my reply to it.

malisper|5 years ago

The post touches upon it, but I didn't really understand the point. Why doesn't synchronous replication in Postgres work for this use case? With synchronous replication you have a primary and secondary. Your queries go to the primary and the secondary is guaranteed to be at least as up to date as the primary. That way if the primary goes down, you can query the secondary instead and not lose any data.

bradfitz|5 years ago

We could've done that. We could've also used DRBD, etc. But then we still have the SQL/ORM/testing latency/dependency problems.

bob1029|5 years ago

I do like putting .json files on disk when it makes sense, as this is a one-liner to serialize both ways in .NET/C#. But, once you hit that wall of wanting to select subsets of data because the total dataset got larger than your CPU cache (or some other step-wise NUMA constraint)... It's time for a little bit more structure. I would have just gone with SQLite to start. If I am not serializing a singleton out to disk, I reach for SQLite by default.

Cthulhu_|5 years ago

I've seen the same when at one point we decided to just store most data in a JSON blob in the database, since "we will only read and write by ID anyway". Until we didn't, sigh. At least Postgres had JSON primitives for basic querying.

The real problem with that project was of course trying to set up a microservices architecture where it wasn't necessary yet and nobody had the right level of experience and critical thinking to determine where to separate the services.

miki123211|5 years ago

I use the same system (a JSON file protected with a mutex) for an internal tool I wrote, and it works great. For us, file size or request count is not a concern, it's serving a couple (internal) users per minute at peak loads, the JSON is about 150 kb after half a year, and old data could easily be deleted/archived if need be.

This tool needs to insert data in the middle of (pretty short) lists, using a pretty complicated algorithm to calculate the position to insert at. If I had used an RDBMS, I'd probably have to implement fractional indexes, or at least change the IDs of all the entries following the newly inserted one, and that would be a lot of code to write. This way, I just copy part of the old slice, insert the new item, copy the other part (which are very easy operations in Go), and then write the whole thing out to JSON.

I kept it simple, stupid, and I'm very happy I went with that decision. Sometimes you don't need a database after all.

Quekid5|5 years ago

As long as you're guaranteeing correctness[0], it's hard to disagree with the "simple" approach. As long as you don't over-promise or under-deliver, there's no problem, AFAICS.

[0] Via mutex in your case. Have you thought about durability, though. That one's actually weirdly difficult to guarantee...

AlfeG|5 years ago

That's good. But a single file could break on power loss. I use SQLite. It's quite easy to use, though not a single line.

gfody|5 years ago

> The file reached a peak size of 150MB

is this a typo? 150MB is such a minuscule amount of data that you could do pretty much anything and be OK.

bradfitz|5 years ago

Not a typo. See why we're holding it all in RAM?

But writing out 150MB many times per second isn't super nice when both the 150MB and the number of times per second are growing.

cbushko|5 years ago

I think a lot of people are missing the point that a traditional DB (MySQL/Postgres) is not a good fit for this scenario. This isn't a CRUD application but is instead a distributed control plane with a lot of reads and a small dataset. Joins and complex queries are not needed in this case as the data is simple.

I am also going to go out on a limb and guess that this is all running in Kubernetes. Running etcd there is dead simple compared to even running something like Postgres.

Congrats on a well engineered solution that you can easily test on a dev machine. Running a DB in a docker container isn't difficult but it is just one more dev environment nuance that needs to be maintained.

bradfitz|5 years ago

We don't use Kubernetes (or even Docker) currently.

dekhn|5 years ago

I never took a course in databases. At some point I was expected to store some data for a webserver, looked at the BSDDB API, and went straight to MySQL (this was in ~2000). I spent the time to read the manual on how to do CRUD but didn't really look at indices or anything exotic. The webserver just wrote raw SQL queries against an ultra-simple schema, storing lunch orders. It's worked for a good 20 years and only needed minor data updates when the vendor changed and small Python syntax changes to move to Python 3.

At that point I thought "hmm, I guess I know databases" and a few years later, attempted to store some slightly larger, more complicated data in MySQL and query it. My query was basically "join every record in this table against itself, returning only rows that satisfy some filter". It ran incredibly slowly, but it turned out our lab secretary was actually an ex-IBM database engineer, and she said "did you try sorting the data first?" One call to strace showed that MySQL was doing a very inefficient full table scan for each row, but by inserting the data in sorted order, the query ran much faster. Uh, OK. I can't repeat the result, so I expect MySQL fixed it at some point.

She showed me the sorts of DBs "real professionals" designed- it was a third normal form menu ordering system for an early meal delivery website (wayyyyy ahead of its time. food.com). At that point I realized that there was obviously something I didn't know about databases, in particular that there was an entire schema theory on how to structure knowledge to take advantage of the features that databases have.

My next real experience with databases came when I was hired to help run Google's MySQL databases. Google's Ads DB was implemented as a collection of mysql primaries with many local and remote replicas. It was a beast to run, required many trained engineers, and never used any truly clever techniques, since the database was sharded so nobody could really do any interesting joins.

I gained a ton of appreciation for MySQL's capabilities from that experience but I can't say I really enjoy MySQL as a system. I like PostgresQL much better; it feels like a grownup database.

What I can say is that all this experience, plus some recent work with ORMs, has led me to believe that while the SQL query model is very powerful, and RDBMS are very powerful, you basically have to fully buy into the mental model and retain some serious engineering talent- folks who understand database index disk structures, multithreading, etc, etc.

For everybody else, a simple single-machine on-disk key-value store with no schema is probably the best thing you can do.

JacobiX|5 years ago

After reading the comments and the blog post, I think that the requirements boil down to fast persistence to disk, minimum dependencies and fast test-runs. Fortunately the data is very small (150MB) and it fits very easily in memory. According to the post the data changes often, so they need to write it many times a second. But I'm not sure why they need to flush the entire 150MB every time. Why not structure the files/indexes such that we write only the modified data?

shapiro92|5 years ago

I find the article a bit hard to follow. What were the actual requirements? I probably didn't understand all of this, but was the time spent on thinking about this more valuable than using a KV store?

bradfitz|5 years ago

Yeah, I guess we could've laid that out earlier:

* our data is tiny and fits in RAM

* our data changes often

* we want to eventually get to an HA setup (3-5 etcd nodes now, a handful of backend server instances later)

* we want to be able to do blue/green deploys of our backend control server

* we want tests to run incredibly quickly (our current 8 seconds for all tests is too slow)

* we don't want all engineers to have to install Docker or specific DB versions on their dev machines

256dpi|5 years ago

I found myself in a similar situation some time ago with MongoDB. In one project my unit tests started slowing me down too much to be productive. In another, I had so little data that running a server alongside it was a waste of resources. I invested a couple of weeks in developing a SQLite type of library[1] for Go that implemented the official Go driver's API with a small wrapper to select between the two. Up until now, it has paid huge dividends in both projects' ongoing simplicity and was totally worth the investment.

[1]: https://github.com/256dpi/lungo

cavisne|5 years ago

Honestly this feels like engineers that spent too long in FANG and got completely burnt out on dealing with SRE and HA requirements... so they decide to build a setup so prone to 2AM pages that even a PHP webshop would frown at it.

Curiously though it's a pattern I've seen twice in the last 12 months; there was that guide on the good bits of AWS that also recommended starting with a single host with everything running on it.

Maybe we should all move that host back under our desks and really be back to basics!

sombremesa|5 years ago

For me the most shocking part was that this company is spending time and money on overengineering solutions when they know that a JSON file got them that far — SQLite would be a perfectly fine improvement to get themselves further!

I had no idea companies of this size had engineers with that much free time on their hands.

tgtweak|5 years ago

Philosophically: I've seen some "bespoke" systems like this that live long enough for a nice off-the-shelf solution to come around that solves the problem much more elegantly and efficiently than the "bespoke" one does. This seems like a normal and dare I say organic path for these kinds of systems to take.

I don't even mind senior devs putting together things like this at the cornerstone of the company provided there are always at any given point in time 2 people that know how it works and can work on it, and sufficient time was spent looking at existing solutions to make that call. It should be made with full expectations that the first paragraph is inevitable.

Specifically, in this case: Without any actual data (# of reads, # of writes, size of writes, size of data, read patterns, consistency requirements) it is not possible to judge whether going custom on such a system was merited or not. I would find it VERY difficult to come to the conclusion that this use case couldn't be solved with very common tooling such as Spark and/or nats-streaming. "Provided the entire dataset fits in memory" is a very large design liberty when designing such a solution and doesn't scream "scalability" or n+1 node write-consistency to me. I say this acknowledging full well that etcd is an unbelievably well written piece of software with durability and speed beyond its years.

Keeping my eyes open for that post-series-a-post-mortem post.

mslm|5 years ago

[deleted]

dekhn|5 years ago

"""Even with fast NVMe drives and splitting the database into two halves (important data vs. ephemeral data that we could lose on a tmpfs), things got slower and slower. We knew the day would come. The file reached a peak size of 150MB and we were writing it as quickly as the disk I/O would let us. Ain’t that just peachy?"""

Uh, you compressed it first, right? because CPUs can compress data faster than disk I/O.

bradfitz|5 years ago

Yeah, I think it was zstd.

But the bigger problem was the curve. Doing something O(N*M) where N (file size) and M (writes per second) were both growing was not a winning strategy, compression or not.

jeffbee|5 years ago

Hrmm. Even lz4 level 1 compresses at "only" about 500-1000MB/s on various CPU types, which isn't quite as fast as NVMe devices demand.

hankchinaski|5 years ago

Seems like the reason for not going the MySQL/PSQL/DBMS route is lack of good Go libraries to handle relational databases (ORM/migration/testing)? From the story it seems more like a solution in search of a problem.

bradfitz|5 years ago

I thought we adequately addressed all those points in the article? Search for ORM and dockertest.

_mjpassing_|5 years ago

How do you come to this conclusion from the article? Or is it just an attempt to discredit Go?

bullen|5 years ago

I also use JSON files... but one per value! It has its ups and downs: pro, it's incredibly fast and scales like a monster; con, it uses a lot of space and inodes, so better use type small with ext4!

The only feature it misses is to compress the data that is not actively in use, that way there is really not much of a downside.

http://root.rupy.se

manigandham|5 years ago

> "Attempts to avoid this with ORMs usually replace an annoying amount of typing with an annoying amount of magic and loss of efficiency."

People seem to keep using poorly-designed ORMs or are stuck with some strange anti-ORM ideology.

Modern ORMs are fast, efficient, and very productive. If you're working with relational databases then you're using an ORM. It's a question of whether you use something prebuilt or write it yourself (since those objects have to be mapped to the database somehow). 99% of the time, ORMs generate perfectly fine SQL (if not exactly what you'd type anyway) while handling connections, security, mapping, batching, transactions, and other issues inherent in database calls.

The 1% of the time you need something more complex, you can always switch to manual SQL (and ORMs will even let you run that SQL while handling the rest as usual). The overall advantages massively outweigh any negatives, if they even apply to your project.

nine_k|5 years ago

IMHO, ORMs are a wrong abstraction. Full-fledged objects are a wrong abstraction of the data in more cases than not.

The right tool is a wrapper / DSL over SQL, which allows you to interact with the database in a predictable and efficient way, while not writing placeholder-ridden SQL by hand. Composable, typesafe, readable.

ORMs do fine in small applications without performance requirements. The further you go from that, the less adequate an ORM becomes, and the more you have to sidestep and even fight it, in my experience.

MrStonedOne|5 years ago

Modern ORMs don't let you hand-craft SQL, shunting it away into some scary extension that you then have to fight with management over using.

An ORM that worked on the principle of "insert query text of any complexity, receive object" as the primary use case, not the "nonstandard and non-idiomatic use case", would be the only way to ease the concerns of DBAs who code, like me.

It's the same pitfall as API clients. Why would I take the time to learn an API like it's an SDK, along with the pains of trying to shunt OpenAPI's libraries into my application without requiring the creation of a Composer build step, further complicating deployment, when I can make 5 methods in the time before lunch to do the bits I need as REST queries, and deployment of my PHP app remains as simple as `git pull production` on the NFS share all the workers read from?

The benefit of compile-validated symbols is moot in the days of test-driven dev, so those gains can still be realized without creating build complexity or making competent engineers re-learn something they already know, only re-abstracted in a way that almost always makes it harder for somebody who understands the low level to learn the new way than it is for a new dev.

Cthulhu_|5 years ago

> Modern ORMs are fast, efficient, and very productive.

The author seems to be using Go, which honestly could use work in that area. gorm is the biggest / most popular ORM out there, but it looks like a one-person project, the author seems well worn-out already, and it kinda falls apart when you work with a data model more than one level deep.

Plus broadly speaking, there seems to be a bit of an aversion to using libraries in the Go community.

gher-shyu3i|5 years ago

> Modern ORMs are fast, efficient, and very productive

Which ones do you have experience with?

jamescun|5 years ago

This post touches on "innovation tokens". While I agree with the premise of "choose boring technology", it feels like a process smell, particularly at a startup whose goal is to innovate a technology. It feels demotivating as an engineer if management says our team can only innovate an arbitrary N times.

crawshaw|5 years ago

It is a tricky tradeoff for startups. On the one hand, a startup has very limited resources and so has to focus on the business. On the other hand, a startup has to experiment to find the business. I don't think there's an easy answer.

In our case, the control plane data store really should be as boring as possible. It was a real stretch using anything other than MySQL. We tried to lay out the arguments in the post, but the most compelling was that we had lots of unit tests that spun up a DB and shut it down quickly. Maybe a hundred tests whose total execution time was 1.5s. The "boring" options made that surprisingly difficult.

(Tailscaler and blog post co-author)

sbierwagen|5 years ago

What is the purpose of a startup? Is it to put keywords on your resume, or is it to create a product?

marcinzm|5 years ago

I mean, what do you propose in terms of getting a diverse group of engineers to trade off innovation versus choosing boring technologies? Keeping in mind that how much risk the company is willing to take is not the decision of engineers but the executive team. Innovation tokens convey the level of risk the executive team is willing to take in terms of technologies. The alternative I've often seen is a dictatorial CTO (or VP of Eng) who simply says NO a lot which is a lot more demotivating. A large company may do detailed risk analyses but those are too cumbersome for a startup.

rhizome|5 years ago

Every decision to increase platform spread should be justified in detail, as the implementation and support overhead is essentially unbounded. Be careful you don't buy a pig in a poke.

5ersi|5 years ago

Ahh, a classic case of "if you don't understand it, you are bound to reimplement it".

euske|5 years ago

My takeaway from the OP:

> Never underestimate how long your “temporary” hack will stay in production!

Cthulhu_|5 years ago

Nothing as permanent as a temporary solution.

ropable|5 years ago

Tangentially, this article makes me so very glad that our own work projects are all making use of the Django ORM. Database migrations and general usage have been a non-issue for literally years.

ed25519FUUU|5 years ago

> “Yeah, whenever something changes, we grab a lock in our single process and rewrite out the file!”

Is this actually easier than using SQLite?

darren0|5 years ago

Without knowing it, they reinvented the Kubernetes informer which I've proven can scale way past their current scale.

breck|5 years ago

If you liked this I highly recommend "Clean Architecture" by Uncle Bob. He has a great story of how they kept putting off switching to a SQL DB on a project and then never needed to and it was a big success anyway.

CleverLikeAnOx|5 years ago

I thoroughly enjoyed "Clean Architecture." I engage with all of Uncle Bob's work in an adversarial manner: he is trying to convince me that he is right and I am playing devil's advocate. This helped me gain insight from his work, develop my own thoughts on architecture, and avoid getting too caught up in doing things the "one true way."

jbverschoor|5 years ago

I think uncle bob is the only one who makes sense

francoisp|5 years ago

Looks like a clear case for migration to Postgres. A single table with jsonb. You can do indexes on jsonb and a few PL/pgSQL functions for partial updates, and forget about it for a while.

toddh|5 years ago

So what happens when the computer fails in the middle of a long write to disk?

npiit|5 years ago

[deleted]

dang|5 years ago

That's not true. I deal with shady marketing every day, I know from shady marketing, this is not that, and I don't like seeing people unjustly accused. Please stop creating accounts to spread this falsehood.

hfern|5 years ago

I don't think it's shady marketing. The company is composed of several prominent software engineers (e.g. Brad Fitzpatrick of memcached fame). I think many folks on HN (myself included) are interested to see what they are working on, especially as Brad left Google to work on it.

Foxboron|5 years ago

>EDIT: Why is the downvoting? the post was give like 9 upvotes in the first 5 minutes. I frequently go to "new" and this is a highly suspicious behaviour.

They are popular people so people submit the link. Once duplicate links are submitted there is an upvote on the first submission. No duplicates.

Source: I was the second upvote.

VikingCoder|5 years ago

I upvote literally everything I see from Tailscale, because I am hugely impressed with Avery Pennarun and David Crawshaw. Seeing Fitz is there too takes it to an even higher level. I want them to succeed, and I think they create good HN content. Major kudos.

gk1|5 years ago

What is suspicious about the number 36? (The upvote count at this moment.)

Just because you dislike the product (and it's clear you do) does not prevent others from liking it, or at least finding their articles interesting.

detaro|5 years ago

> EDIT: Why is the downvoting?

Because this kind of thing is something you should contact the mods about, not leave comments that nobody can really (dis-)prove

cbushko|5 years ago

I upvoted because the content was technical, well written and different than what we normally see on HN.

joshxyz|5 years ago

It's ok, I'm part of the audience that doesn't know much about databases; it's fun to read discussions about them once in a while.

I learned a lot about PostgreSQL, Redis, ClickHouse and Elasticsearch here. People's perspectives here are great to learn from; they tell you which to avoid and which to try.

nwsm|5 years ago

Is tailscale downvoting you as well?

jbverschoor|5 years ago

Doesn't sound like a smart thing to do; sounds more like a JS dev/student discovering step by step why SQL databases are so popular..

Probably not so, because Tailscale is a decent product, but this post did not change my view in a good way.

jclulow|5 years ago

On the contrary, it sounds like a seasoned group of people who understand their needs and are wary of the very real challenges presented by most existing SQL systems with respect to deployment, testing, and especially fault tolerance. I'm interested in Cockroach myself, but I also acknowledge it's relatively new, and itself a large and complicated body of software, and choosing it represents a risk.

worik|5 years ago

This jumped out at me: "The obvious next step to take was to move to SQL"

No. Not unless your data is relational. This is a common problem: relational databases have a lot of overhead. They are worth it when dealing with relational data, not so much with non-relational data.

bradfitz|5 years ago

Maybe "obvious" was the wrong word there. What we meant to say was that would be the typical route people would go, doing the common, well-known, risk-averse thing.

nautilus12|5 years ago

The premise of articles like this annoys me. It reeks of "we are too smart to use databases" and "JSON is good enough for us", when anyone that works with data to any large extent knows that JSON is just a pain, and we only have to deal with it because the front end is enamored with it for being "readable" and "JavaScript".