item 1034846

MongoDB vs CouchDB

21 points | nickaugust | 16 years ago | reply

Do you guys have any advice on choosing between CouchDB and MongoDB? My project is written in Python and uses JSON documents to communicate with a JavaScript frontend. It runs on the Tornado web server.

15 comments

[+] jchrisa|16 years ago|reply
Couch and Mongo are optimized for different use cases.

CouchDB is pretty much the only (open-source) game in town if you care about offline replication. It is also designed for extreme reliability and concurrency. CouchDB's programming model is designed to scale from a smartphone to a datacenter, and ops teams can scale it strictly in the HTTP domain.

MongoDB trades robustness and concurrency for faster serial performance for single clients. MongoDB's programming model is closer to MySQL or redis, so it isn't as big a leap for developers used to traditional 3-tier architectures.

I expect to see a lot of apps move between the two platforms as people start to find the sweet spot that they are interested in.
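The "strictly in the HTTP domain" point above is concrete: a CouchDB document write is just an HTTP PUT of JSON. A minimal stdlib-only sketch (hypothetical `mydb`/`mydoc` names; the request is built but not sent, since sending it would need a live CouchDB on its default port 5984):

```python
import json
import urllib.request

# A document write in CouchDB is a plain HTTP PUT of a JSON body.
doc = {"type": "post", "title": "hello"}
req = urllib.request.Request(
    url="http://localhost:5984/mydb/mydoc",  # hypothetical db/doc names
    data=json.dumps(doc).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",  # CouchDB creates/updates documents with PUT
)
# urllib.request.urlopen(req)  # would perform the write against a live server
print(req.get_method(), req.full_url)
```

Because the whole interface is HTTP, anything that can proxy, cache, or load-balance HTTP can sit in front of CouchDB unchanged.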

[+] nickaugust|16 years ago|reply
I saw an old talk you did where you were advocating simply using AJAX (jQuery, I think) to query CouchDB directly from the frontend. Is this something you still recommend? I don't think Mongo's security model would allow this.
[+] QuietPatient|16 years ago|reply
Unfortunately both are useless for mortals because there are no transactions and locks. It's just impossible to write applications where uniqueness and real ACID transactions are required. Please don't send me to the CouchDB bulk document API; I don't want to dance with conflicts and inconsistency every time I write a simple application with user registration, for example. The same goes for apps with direct purchases, where you need to update the quantity of an item in stock. Just impossible. Solutions with "inventory tickets" sound insane. Unique fields as an _id also sound insane, because it's impossible to create complex "unique keys".

Both are cool, seriously, and you know it. Lots of developers are going to use this software, but the lack of such features keeps turning these people away.

1. Add real transactions to CouchDB/Mongo
2. Add unique indexes to CouchDB. IIRC Mongo already has them.
3. Add map/reduce chaining to CouchDB
4. Dominate the world!
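For what it's worth, the "unique fields as an _id" trick the comment above dismisses is usually done by deriving the _id from exactly the fields that must be unique together, so the database's built-in _id uniqueness stands in for a compound unique key. A minimal sketch (my own helper, not a CouchDB or Mongo API):

```python
def composite_id(*fields):
    """Build a deterministic _id from the fields that must be unique
    together, so _id uniqueness acts as a compound unique key."""
    return ":".join(str(f) for f in fields)

# Two registrations with the same email produce the same _id, so the
# second insert is rejected by the database instead of duplicated.
user_id = composite_id("user", "alice@example.com")
# db.save({"_id": user_id, ...})  # second save with the same email conflicts
```

It does get awkward when the "unique" fields can change, since renaming then means copying the document to a new _id, which is part of the complaint.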

[+] JulianMorrison|16 years ago|reply
There are transactions in MongoDB, they just wrap one operation on one document. But that one operation can be quite complex.

For example, one op can pick a money account, decrement its balance, and push an entry onto the list of outbound transfers. After a few more similarly atomic steps, you have a safe and restartable money transfer.
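That single atomic step might look like this with Mongo's modifier operators ($inc, $push). Field and account names here are hypothetical, and the driver call itself is commented out since it needs a running mongod:

```python
amount = 25

# Match only if the balance can cover the transfer, so the atomic
# decrement can never drive the account negative.
spec = {"_id": "alice", "balance": {"$gte": amount}}

# One operation on one document: decrement the balance AND record the
# outbound transfer in the same atomic step.
change = {
    "$inc": {"balance": -amount},
    "$push": {"outbound": {"to": "bob", "amount": amount}},
}
# db.accounts.update(spec, change)  # pymongo call; needs a live mongod
```

If the update matches nothing (insufficient balance, or already applied), the step is a no-op, which is what makes the multi-step transfer restartable.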

The upside of working with this extra complication is that your data scales via sharding far beyond the maximum size of a MySQL installation.

[+] TomK32|16 years ago|reply
Oh, mortals do need transactions? Since when?
[+] knv|16 years ago|reply
I'm all for map/reduce chaining.
[+] jart|16 years ago|reply
I tested CouchDB 0.8 recently on Ubuntu Jaunty. I was very impressed by its read performance and how it used JavaScript.

However, I found that insert latency was very poor; it could only manage about 16 inserts/second. My apps are generally write-heavy (bulk is not an option), so I felt it wouldn't be the right fit for me.

I also managed to accidentally cause CouchDB to use all my memory a few times. The fact that this was even possible out of the box would make me hesitant to use it in production.

[+] docmach|16 years ago|reply
CouchDB 0.8 is pretty old. If you tried the newest version your results might be significantly different.
[+] mattdennewitz|16 years ago|reply
I've been using Mongo and Tornado together for a while, and it's great. I initially chose Mongo over Couch because of its fast ad hoc querying. The Python client is also well-written and well-documented.

One thing to watch out for, though, is that only the "unstable" dev version of MongoDB (1.3.x) has read concurrency - before 1.3.x, Mongo used a global read/write lock per operation. General and index-assisted reads are ultra-fast in Mongo, but a bigger map/reduce or group call will block other requests until it completes, possibly causing traffic to back up. Because of that global lock, all writes block too, but I've never had a problem with that IRL. Writes are super-fast.

If you want an ORM for Mongo, check out MongoEngine on GitHub: http://github.com/hmarr/mongoengine/tree/master/mongoengine/

[+] vorador|16 years ago|reply
Or pymongo, which is really easy to use.
[+] paulogiann|16 years ago|reply
I've been using CouchDB for a while, and have now started experimenting with MongoDB. The latter seems more suitable for me, as it seems I can atomically perform several operations at once. CouchDB, on the other hand, provides bulk updates, but the all_or_nothing option never fails, so atomicity is not actually enforced.
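The bulk update mentioned above is CouchDB's _bulk_docs endpoint. A sketch of the payload with the all_or_nothing flag, using hypothetical documents - and note that, as the comment says, all_or_nothing saves every document and records conflicts rather than rejecting the batch, so it is not a real transaction:

```python
import json

# Body for POST /mydb/_bulk_docs (hypothetical db and documents).
payload = {
    "all_or_nothing": True,  # save all docs, recording conflicts instead of failing
    "docs": [
        {"_id": "item-1", "qty": 4},
        {"_id": "item-2", "qty": 9},
    ],
}
body = json.dumps(payload)
# urllib.request.urlopen(...)  # POST body to a live CouchDB to apply it
```

Conflicting revisions then have to be detected and resolved by the application afterwards, which is the "dancing with conflicts" objection raised earlier in the thread.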
[+] schmichael|16 years ago|reply
MongoDB has bulk updates as well by setting multi=True in your updates.
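The call shape looks like this in the pymongo update() API of the time; the collection and field names are hypothetical, and the driver call is commented out since it needs a running mongod:

```python
# With multi=True, the modifier is applied to EVERY matching document,
# not just the first one (old pymongo update() API).
spec = {"status": "pending"}
change = {"$set": {"status": "processed"}}
# db.items.update(spec, change, multi=True)  # needs a live mongod
```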