item 7553442

MongoDB 2.6 Released

143 points | francesca | 12 years ago | blog.mongodb.org

117 comments

[+] danielrpa|12 years ago|reply
A lot of hype... And we still have db-level locking. If document-level is too difficult, at LEAST do collection-level (not that it's much better, but at least it's some real improvement).
[+] z92|12 years ago|reply
I was reading somewhere that Mongo can't do document-level/record-level locking because of mmap'ed files. The whole database is memory-mapped, and mmap doesn't understand the underlying data structure; it views the whole file as one large blob.

Ditching mmap will not be that easy, because most of Mongo's speed and simplicity comes from using mmap.
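The mmap point above can be sketched in a few lines of Python (a toy illustration, not MongoDB code): to the mapping, the file is just a flat byte range, and the "document" boundaries exist only in the application's own bookkeeping.

```python
import mmap
import os
import tempfile

def first_bytes_via_mmap():
    """Map a scratch file holding two tiny "documents" and read raw bytes."""
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(b'{"_id": 1}{"_id": 2}')
        with open(path, "r+b") as f:
            mm = mmap.mmap(f.fileno(), 0)
            # mmap sees bytes 0..len-1, nothing more: no records,
            # no documents, so there is nothing to lock per-document.
            blob_len, head = len(mm), mm[0:10]
            mm.close()
        return blob_len, head
    finally:
        os.unlink(path)

print(first_bytes_via_mmap())  # (20, b'{"_id": 1}')
```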

[+] sigzero|12 years ago|reply
"Finally, MongoDB 2.6 lays the foundation for massive improvements to concurrency in MongoDB 2.8, including document-level locking."

They hear you and are working on it.

[+] rogerbinns|12 years ago|reply
Heck I'd even be happy for them to do locking on a per file basis. (For those unfamiliar with mongodb, the actual database storage is broken up into 2GB files.)
[+] disbelief|12 years ago|reply
Every Mongo release announcement I hold out hope for an improvement to the database-level write lock, and every time I'm disappointed. Considering this is probably the #1 or #2 complaint people have with Mongo, I'm surprised it hasn't been addressed ahead of some of the other features that have made it into recent releases.
[+] Kiro|12 years ago|reply
I don't know anything about MongoDB, but can you give an example where DB-level locking is a problem?
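A toy illustration in plain Python (nothing Mongo-specific; the names and timings are made up): with one database-wide lock, a slow write to one document blocks an unrelated write to a completely different document.

```python
import threading
import time

db = {"a": 0, "b": 0}
db_lock = threading.Lock()          # one lock for the whole "database"

def slow_write(key, value, duration):
    with db_lock:                   # every writer takes the same lock
        time.sleep(duration)        # simulate a slow update
        db[key] = value

start = time.monotonic()
t1 = threading.Thread(target=slow_write, args=("a", 1, 0.2))
t2 = threading.Thread(target=slow_write, args=("b", 1, 0.2))
t1.start(); t2.start(); t1.join(); t2.join()
elapsed = time.monotonic() - start

# The writes touch different keys, yet total time is ~0.4s, not ~0.2s,
# because the coarse lock serializes them. Per-document (or even
# per-collection) locks would let them overlap.
print(round(elapsed, 1))
```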
[+] fiatmoney|12 years ago|reply
And it still relies entirely on the OS's scheduling algorithms for caching and IO. mmap is nice but it has no idea you're running a database.
[+] ChrisGaudreau|12 years ago|reply
Finally, MongoDB 2.6 lays the foundation for massive improvements to concurrency in MongoDB 2.8, including document-level locking.

This is exciting even if I don't expect it to happen soon.

[+] zardosht|12 years ago|reply
TokuMX, which I work on, has document level locking and compression right now.
[+] anvarik|12 years ago|reply
With this release the aggregation framework got super powerful. Now it returns a cursor, so we can get the aggregation results and iterate over them. No more 16MB result limitation either...
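The change can be sketched like this (illustrative Python only, not actual driver code; the function names are made up): before, the whole result had to fit in one reply document, while a cursor streams the same results in batches of whatever size.

```python
def aggregate_inline(rows, max_bytes=16 * 1024 * 1024):
    """Old behavior: everything comes back as one document, or fails."""
    result = list(rows)
    if sum(len(repr(r)) for r in result) > max_bytes:
        raise ValueError("result set exceeds 16MB reply limit")
    return result

def aggregate_cursor(rows, batch_size=101):
    """New behavior: yield fixed-size batches; the client just iterates."""
    batch = []
    for r in rows:
        batch.append(r)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

rows = ({"n": i} for i in range(250))
batches = list(aggregate_cursor(rows, batch_size=101))
print([len(b) for b in batches])  # [101, 101, 48]
```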
[+] ericcholis|12 years ago|reply
Agreed, it was annoying to have to use MapReduce for larger sets.
[+] jeffdavis|12 years ago|reply
Can you elaborate for people who don't understand the feature? Is the aggregation result large because of grouping?
[+] nicklovescode|12 years ago|reply
I've been using Elasticsearch as a primary database for my new project, and it has basically been a good NoSQL db that happens to have great search. However, peripheral tools (performance testing, hosting) have been a bit rough.

How do the two databases compare now? Is search improving in Mongo, or is that something they're not really worrying about at the moment?

[+] endijs|12 years ago|reply
This is a good write-up about what's new, with actual numbers: http://devops.com/news/mongodb-2-6-significant-release-mongo... Quote: "MongoDB 2.6 provides more efficient use of network resources; oplog processing is 75% faster; classes of scan, sort, $in and $all performance are significantly improved; and bulk operators for writes improve updates by as much as 5x."
[+] dkhenry|12 years ago|reply
Awesome news. I am excited for the aggregation cursor. As much as I love some of the alternatives that are almost ready I still turn to mongo for a vast majority of my deployments. Hopefully it will keep getting better and pushing others to do the same.
[+] brokentone|12 years ago|reply
Can someone with more MongoDB experience give me your thoughts on the upgrade difficulty here? Worth doing soon, or waiting for a point release? Does this require a data rebuild/update process (coming from 2.4)?
[+] mason55|12 years ago|reply
IMO unless you desperately need one of the new features I would hold off a few weeks. With a release this big I'd expect there will be some bugs and wouldn't be surprised to see 2.6.1 shortly.
[+] AdrianRossouw|12 years ago|reply
So I've been trying to find the ideal use case for mongodb, because I have to teach a NoSQL database to some people I am mentoring.

I'm leaning heavily towards couchdb though.

http://daemon.co.za/2014/04/when-is-mongodb-the-right-tool

[+] rdtsc|12 years ago|reply
I would stay with CouchDB. Its web interface is really good for inspecting and debugging what is inside the database. Really good for development.

Also it has a very nice HTTP interface, so you can talk to it straight from the Web via a proxy.

[+] tomsoft|12 years ago|reply
-> an ideal use case: capture and store unstructured data, typically tweets. Tweet structure is JSON-based, quite complex, with many fields and substructures. It's incredibly easy to store and manipulate such data with MongoDB without even knowing all the details of the fields! It's also a good use case because MongoDB is fairly good at adding data, pretty bad at deleting data.

The same goes for some other JSON document-oriented databases (like Elasticsearch), but MongoDB is a good compromise in many areas: the query language is easy to understand and powerful. The biggest issue is the difficulty of doing complex computation and aggregation. MapReduce helps, the aggregation framework helps too, but in this area SQL is generally much faster, for instance.
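A small plain-Python illustration of why nested tweet-like JSON is pleasant to work with schemalessly (the `get_path` helper and the tweet fields are hypothetical, just mimicking MongoDB's dotted-path notation):

```python
# A made-up tweet-shaped document: nested objects and arrays, no schema.
tweet = {
    "id": 123,
    "text": "MongoDB 2.6 released",
    "user": {"screen_name": "someone", "followers_count": 42},
    "entities": {"hashtags": [{"text": "mongodb"}]},
}

def get_path(doc, path):
    """Resolve a MongoDB-style dotted path against a nested dict."""
    for key in path.split("."):
        doc = doc[key]
    return doc

# Reach into substructure by path, without declaring fields up front.
print(get_path(tweet, "user.screen_name"))      # someone
print(get_path(tweet, "user.followers_count"))  # 42
```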

[+] wilsonfiifi|12 years ago|reply
Have a look at this: https://github.com/johnwilson/bytengine.

It could have been built with CouchDB; however, some features such as ad-hoc querying and partial document updates make MongoDB a more compelling choice (albeit prone to some scalability issues until MongoDB version 2.8, hopefully lol!).

[+] id|12 years ago|reply
While MongoDB has many use cases, it's just perfect if you need to pass JSON over to your JavaScript. Or need JSON output for any other reason.
[+] endijs|12 years ago|reply
Cursor for aggregate, proper explain for aggregate, index intersection, $redact and other cool operators, Multi* in geospatial, faster execution, and the foundation for document-level locking, which should be introduced in MongoDB 2.8. I must say I'm happy with this release.
[+] wilsonfiifi|12 years ago|reply
So does this mean document level locking has been implemented or just the foundation for its future implementation has been laid?

I can't hold my breath much longer! :-)

Edit: docs don't make any mention of it but then again they probably haven't updated them yet (fingers crossed!) http://docs.mongodb.org/manual/faq/concurrency/#what-type-of...

[+] leif|12 years ago|reply
It means the foundation has been laid. 2.6 included a lot of refactoring and rewriting of some core subsystems, with the apparent goal of eliminating technical debt so they can make more impactful changes in 2.8.

Don't asphyxiate, tokumx has document-level locking right now. http://github.com/Tokutek/mongo

[+] mdcallag|12 years ago|reply
I am a big fan of their focus on manageability for sharded databases. I am less of a fan of their db internals that might require you to use many more shards than a more performant engine. More details at http://smalldatum.blogspot.com
[+] owenversteeg|12 years ago|reply
The blog post was rather vague about 2.6's "better performance". Are there any concrete numbers?
[+] Oculus|12 years ago|reply
The main reason I use MongoDB on Node is the maturity of the Mongoose ORM - I've used Node-ORM 2, BookshelfJS, and SequelizeJS and none of them felt as mature as Mongoose.
[+] tbarbugli|12 years ago|reply
How is it possible to have aggregation cursors without crunching the complete data set in advance (aka map/reduce) and still have consistent and correct results?
[+] tedchs|12 years ago|reply
Will MongoDB still segfault under certain circumstances?
[+] ericingram|12 years ago|reply
I wonder what this means for TokuMX.