I also want to point out that their Node.js transactions API is broken: it looks like they have no idea how promises or async code work in JS.
In Mongo, you have a `withTransaction(fn)` helper that passes your callback a session parameter. Mongo can call this function multiple times with the same session object.
This means that if you have an async function holding a reference to a session and the transaction gets retried, you very often end up with "part of one attempt + some parts of another" committed.
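To make the failure mode concrete, here is a minimal self-contained sketch. The `withTransaction` below is a stand-in that only simulates the driver's retry behavior (re-invoking the callback with the same session); it is not the real API, and `demo`, `delay`, and the op names are made up for illustration.

```javascript
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Stand-in for the driver's helper: on a "transient" error it
// re-invokes the callback with the SAME session object.
async function withTransaction(session, fn) {
  for (let attempt = 1; attempt <= 2; attempt++) {
    session.attempt = attempt;
    session.ops = [];            // each attempt is a fresh transaction
    await fn(session);
    if (attempt === 1) continue; // simulate TransientTransactionError -> retry
    return session.ops;          // ops committed by the final attempt
  }
}

async function demo() {
  let stale;
  const committed = await withTransaction({}, async (session) => {
    if (session.attempt === 1) {
      // Forgotten `await`: this write from attempt 1 lands later,
      // after the driver has already restarted the transaction...
      stale = delay(10).then(() => session.ops.push('write-B from attempt 1'));
    }
    session.ops.push(`write-A (attempt ${session.attempt})`);
  });
  await stale;      // ...polluting the ops of attempt 2.
  return committed; // mixes attempt 2's write with attempt 1's leftover
}

demo().then((ops) => console.log(ops));
// [ 'write-A (attempt 2)', 'write-B from attempt 1' ]
```

The committed set contains a write scheduled by attempt 1 alongside attempt 2's writes: exactly the "part of one attempt + some parts of another" problem.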
We had to write a ton of logic around their poor implementation and I was shocked to see the code underneath.
It was just such a stark contrast to products that I worked with before that generally "just worked" like postgres, elasticsearch or redis. Even tools people joke about a lot like mysql never gave me this sort of data corruption.
Edit: I was kind of angry when writing this so I didn't provide a source, and I'm a bit surprised this got so many upvotes without one (I guess this community is more trusting than I assumed :] ). Anyway, for good measure and to behave the way I'd like others to when making such accusations, here is where they pass the same session object to the transaction https://github.com/mongodb/node-mongodb-native/blob/e5b762c6... (follow from withTransaction in that file). I can add examples of code easily introducing the above-mentioned bug if people are interested.
If you work for Mongo and are reading this: please just fix it. I don't need to win and I don't care about being "right".
I just don't want to be called to the office on a weekend anymore for this sort of BS.
Production incidents with MongoDB last year: 15
Production incidents with Redis, Elasticsearch and MySQL combined last year: 2 (and with much less severity)
Edit: just to add: I didn't pick Mongo; I was just the engineer called in to clean up that mess. I've created enough of my own messes not to resent the person who made that call. We are constantly on the verge of rewriting the MongoDB stuff, since a database that small (~250GB) really shouldn't have this many issues (in previous workplaces I ran ~10TB PostgreSQL deployments with much more complicated schemas and queries, with far fewer issues). It's also expensive, and support at Mongo Atlas hasn't been great (we should probably self-host, but I'm not used to small databases being this problematic).
MySQL is less of a joke than MongoDB is. Both were started by people who didn't know much about databases and learned on the go, and both began as much faster alternatives to other databases. Both also ended up with a complete rewrite of their storage engine by an outsider who knew their stuff: MySQL went from ISAM to MyISAM and then InnoDB (written by an outsider), and MongoDB similarly got WiredTiger.
The thing is that MySQL is older, so it went through all of this earlier, but it still suffers from poor decisions made in the past. Contrast this with PostgreSQL, where correctness and reliability were #1 from the beginning. It started as an awfully slow database, but performance improved over time, and we now have a correct, reliable and fast database.
When I was evaluating MongoDB a couple of years ago (around the time they were switching to the WiredTiger engine), I found a memory leak in their Node.js client on day one. I submitted a ticket on their Jira and at the same time had a look at the other issues there. I saw memory leak after memory leak, memory corruption everywhere, data disappearing without any reason, segfaults, etc. After that, MongoDB was dropped as a candidate for the DB in the project I was working on; we went with Postgres and never regretted it.
> Even tools people joke about a lot like mysql never gave me this sort of data corruption.
That's about a decade out of date at this point. MySQL/InnoDB is the standard table engine and corruption is exceedingly rare. As of 2014, when I last directly worked on MySQL prod systems, there was no practical difference from PostgreSQL in terms of transactional guarantees. That includes APIs like JDBC which we used for billions of transactions.
Hi folks! Author of the report here. If anyone has questions about detecting transactional anomalies, what those anomalies are in the first place, snapshot isolation, etc., I'm happy to answer as best I can.
Have you considered presenting the data in a concise manner in addition to the in-depth analyses?
That is, a table on the jepsen.io frontpage, or at least on each product's review page, with database products and configuration on rows and consistency properties on columns, and a nice "Yay!" or "Nope!" mark in the cell, plus links on how to achieve the database configurations in the table (esp. how to configure each database to have the most guarantees).
Also, ideally the analyses should be rerun automatically (or possibly after being paid, but making it easy for the company to do so) every time a new major release happens rather than being done once and then being stale.
Finally, there should be tests for the non-broken databases (PostgreSQL for instance, both in single-server mode, deployed with Stolon on Kubernetes and using the multimaster projects) as well to confirm they actually work.
Thank you for all of your work over the years. Your reports have helped me and others stand up to bizdev hype and make better decisions for our companies and customers.
Postgres is widely understood to be a robust database with safe defaults. I, and perhaps others, would love to see you aim your array of weapons at Postgres. Do you have any plans to look at stock Postgres?
Not a question necessarily about the technical side, but I'm interested in your opinion as to the root cause – is it desire to achieve certain results for marketing purposes, lack of understanding/training in the team about distributed systems, just bugs and a lack of testing...? Alternatively does most of this come down to one specific technical choice, and why might they have made that choice?
Very happy for (informed) speculation here, I recognise we'll probably never know for certain, but I'm interested to avoid making similar mistakes myself.
Hi Kyle, thanks for Elle :) I want to use Elle to check long histories of transactions over a small set of keys with a read-dominant workload. The paper recommends using lists over registers, but when the history becomes long it becomes too wasteful to read the whole list on each request, and Elle's input becomes very large: when each read has to return the register's entire history, the size of the history grows O(n^2), compared to the case where reads return just the head.
So I'm curious: how would you describe Elle's ability to find violations using read-write registers with unique values vs. append-only lists?
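For reference on the size claim above, here is a toy count (not Elle's actual encoding) of how many elements a checker must ingest when every read returns the whole list so far, versus a single register value:

```javascript
// Toy model of history size: after i appends, a list read returns i
// elements, while a register read returns just one value.
function historySize(n, kind) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += kind === 'list' ? i : 1;
  }
  return total;
}

console.log(historySize(1000, 'list'));     // 500500 -- ~n^2/2, quadratic
console.log(historySize(1000, 'register')); // 1000   -- linear
```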
Huge fan of your work! I was curious if you've ever attempted to run your (or part of) Mongo test suite against FoundationDB using their DocumentLayer since it's supposed to be Mongo API compatible.
Hi Kyle! I’ve really enjoyed your work over the years. I was wondering, with all of your testing and experimentation, is there any system that had really impressed you?
Mongo has been somewhere between "perpetual irritation" and "major production issue" at all three of my last companies.
For as easy as it is to use jsonb in Postgres, or Redis, or RocksDB/SQLite, or whatever else depending on your use case, I can't find any reason to advocate its use these days. In my anecdotal experience, the success stories never happen, and nearly every developer I know has an unpleasant experience they can share.
Big thanks to aphyr and the Jepsen suite (and unrelated blog posts like Hexing the Interview) for inspiring me to do thorough engineering.
I find that using JSON for things you don't need to query/validate (like big blobs you just want to store) and breaking the rest out to columns works well enough. Plus, you can always migrate the data out to a field anyway.
As an engineer for whom automated testing tools are crucial to my mental health, let me know if you want a UX tester or just someone to provide feedback on the documentation.
This article reinforces my stance that bad defaults are a bug. Defaults should be set up with the least number of pitfalls and safety tradeoffs possible so that the system is as robust as it can be for the majority of its users, since the vast majority of them aren't going to change the defaults.
Sometimes you end up with bad defaults simply by accident but I feel like for MongoDB the morally correct choice would be to own up to past mistakes and change the defaults rather than maintain a dangerous status quo for "backwards compatibility", even if you end up looking worse in benchmarks as a result.
I think this is a good way to look at things, and there are vendors who do this! VoltDB, for instance, changed their defaults to be strict serializable even though it imposed a performance hit, following their Jepsen analysis. https://www.voltdb.com/blog/2016/07/voltdb-6-4-passes-offici...
How many more years do we have to keep evaluating, studying, and reading about MongoDB's ongoing failures? It would appear this product has been a great burden on the community for many years.
I like to keep in mind that MongoDB's existing feature set is maturing--occasional regressions may happen, but by and large they're making progress. The problems in this analysis were in a transaction system that's only been around for a couple years, so it's had less time to have rough edges sanded off.
> Clients observed a monotonically growing list of elements until [1 2 3 5 4 6 7], at which point the list reset to [], and started afresh with [8]. This could be an example of MongoDB rollbacks, which is a fancy way of saying “data loss”.
I hope they learned the lesson, don't fuck with aphyr.
I wanted to incorporate MongoDB into a C++ server at one point.
Their C/C++ client is literally unusable. I went to look into writing my own that actually worked and their network protocols are almost impossible to understand. BSON is a wreck and basically the whole thing discouraged me from ever trying to interact with that project again.
Aphyr is such a competent professional. What a relatively thorough and polite response to Mongo's inaccurate claims. "We also wish to thank MongoDB’s Maxime Beugnet for inspiration." is a nice touch.
The general mood I observed about MongoDB was that it used to be inconsistent and unreliable but they fixed most, if not all of those problems and they now have a stable product but bad word of mouth among developers. Personally, I've treated it as "legacy" and migrated everything that I had to touch since 2013 [0], and luckily (just read the article so hindsight 20/20 -- transaction running twice and seeing its own updates? holy...) never gave it another try.
[0]: https://news.ycombinator.com/item?id=6801970 (BTW: no, my dream of simple migration never materialized, but exporting and dumping data to Postgres JSONB columns and rewriting queries turned out to be neither buggy nor hard).
> MongoDB was that it used to be inconsistent and unreliable but they fixed most, if not all of those problems and they now have a stable product but bad word of mouth among developers.
This report is 9 days old, and tests the latest stable release of MongoDB. The problems it discusses are present on modern MongoDB.
This is not directly related to this report or Jepsen, but since you're here I've got to ask: Aphyr, are there any recent papers/research in the realm of distributed databases which you're excited about?
I suppose there are reasons why the defaults are the way they are. Can anyone comment on the implications, performance or otherwise, of bumping up the read/write concerns?
Latency is a big one--you've got to wait an extra round-trip for secondaries to acknowledge primary writes, and primaries (assuming you don't have reliable clocks) need to check in with secondaries to confirm they have the most recent picture of things if you want to do a linearizable read. Snapshot isolated reads shouldn't require that, at least in theory--it's legal to read state from the past under SI, so there's no need to establish present leadership. That's why I'm surprised that MongoDB requires snapshot reads to go through write concern majority--it doesn't seem like it'd be necessary. Might have something to do with sharding--maybe establishing a consistent cut across shards requires a round of coordination. Even then I feel like that's a cost you should be able to pay only at write time, making reads fast, but... apparently not! I'm sure the MongoDB engineers who designed this system have good reasons; they're smart folks and understand the replication protocol much better than I do.
MongoDB's also published a writeup (which is cited a few times in the Jepsen report!) talking about the impact of stronger safety settings and why they choose weak defaults: http://www.vldb.org/pvldb/vol12/p2071-schultz.pdf
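For anyone wanting to opt into the stronger settings discussed above, here is a hedged sketch of the relevant client options in the Node driver. The option names follow the driver's documented shape (`writeConcern`, `readConcern`, `readPreference`), but verify them against your driver version:

```javascript
// Safety-oriented client options for the MongoDB Node driver (a sketch).
const safeOptions = {
  // Wait for a majority of replicas to acknowledge and journal the write --
  // this is the extra round-trip of latency mentioned above.
  writeConcern: { w: 'majority', j: true },
  // Only read majority-committed data, i.e. data that can't be rolled back.
  readConcern: { level: 'majority' },
  // Read from the primary to avoid stale secondaries.
  readPreference: 'primary',
};

// Usage (requires the `mongodb` package and a running deployment):
//   const { MongoClient } = require('mongodb');
//   const client = new MongoClient('mongodb://localhost:27017', safeOptions);

console.log(safeOptions);
```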
In general, MongoDB’s defaults fall into two categories. The first could possibly be justified as making it easy for inexperienced devs to get started, but it means that people rely on those defaults and then try to promote to production, and unless there is an experienced traditional DBA with the power to veto it, it will go ahead. This is how they “backdoor” their way into companies. The second category is whatever will look good on a benchmark, regardless of any corners cut.
Compare and contrast with the highly ethical Postgres team, who encourage good practices from the start and who get a feature right first before worrying about performance. That may harm their adoption in the short term but over the long term, that's why they're the gold standard. And with their JSONB datatype they have a better MongoDB than MongoDB anyway! And have a million other features besides!
jfkebwjsbx | 5 years ago
People rightfully joked about MySQL when they had the non-ACID engine.
Same for MongoDB. A database that loses data when properly used is a joke.
Yes, there are use cases out there for fast non-guaranteed writes. No, 99% of companies don’t have them.
xeromal | 5 years ago
https://www.youtube.com/watch?v=b2F-DItXtZs
vorticalbox | 5 years ago
Isn't that the point? You can use a session to do multiple actions within that session.
g0ldenb0ugh | 5 years ago
Just wondering, did you submit a bug report to them about this? If so, any response?
eternalban | 5 years ago
This section seems to contain the most worrying results in your report, Kyle, with no workaround. Did I read that correctly?
ep103 | 5 years ago
Anyone have any suggestions for a true non-MongoDB, JSON-document-based NoSQL option?
mtrycz2 | 5 years ago
Your attitude of "a tool I need doesn't exist, so I'll just go ahead and create it" blew my mind and changed me for the better.
I'm dedicating my next test framework to you. Thank you for everything.
sam1r | 5 years ago
I wonder if someone can type up a well-manicured post-mortem of the recent Triplebyte incident?
depr | 5 years ago
I understood that reference