Using the new working set analyser is going to make figuring out how much memory you need significantly easier. Giving MongoDB enough memory for its working set is the easiest way to get good performance, but it used to be quite difficult to figure out: you had to know both data and index sizes across your most common usage patterns.
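For context on what the analyser exposes: in the 2.4 shell it's surfaced through serverStatus, and the page count converts to a rough RAM figure. A sketch with the arithmetic runnable on its own (the workingSet field names follow the 2.4 serverStatus output; the numbers themselves are invented):

```javascript
// In the 2.4 shell: db.serverStatus({ workingSet: 1 }).workingSet
// Field names follow that output; the numbers below are invented.
const workingSet = {
  pagesInMemory: 25600, // pages touched during the measurement window
  overSeconds: 840      // length of the window the estimate covers
};
const pageSizeBytes = 4096; // typical OS page size
const workingSetMB = (workingSet.pagesInMemory * pageSizeBytes) / (1024 * 1024);
console.log(workingSetMB); // 100
```

Note it's an estimate over a sampling window, so a number like this is a floor for RAM sizing, not an exact requirement.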
Do you know of any resources that give some info on this topic in general? I've been looking but I haven't found anything that's sufficiently plainly-spoken that I fully grasp the contents.
Yeah, you're right, MongoHQ hasn't brought anything positive to web development and there's no way they're going to make a business supporting their db. It definitely needs to be put in its place, and we should all take the time to talk about just how cool we are for predicting its downfall. If only the whole world had enough insight to just stop before they tried to do something awesome. After all, someone may have tried it before.
While MongoDB was certainly built on the shoulders of the giants that went before, so has every product ever been. To call it a reinvention of the Pick database is flattering but highly inaccurate.
Perhaps more accurate: learn from the past, improve the future.
My understanding of Pick databases was that they were typeless. Mongo isn't. There's a difference between being typeless and schemaless, and Mongo is a good mix for my use cases.
Reminds me of Bill Gates's recent IAmA where he talked about how they were close but didn't make it to market with a similar product; he seemed to be saying "MSFT was there first...!" I guess not, if this has been around since 1980 :)
http://www.reddit.com/r/IAmA/comments/18bhme/im_bill_gates_c...
I was hoping for collection-level locking to be a part of the 2.4 release. I didn't see any mention of it in the release notes. Last I heard they were going to implement collection-level locking and then begin work on document-level locking. I'm still hoping document-level locking isn't too far off.
Collection-level locking isn't in 2.4. We're working on more granular locking for 2.6/2.8, and may skip collection-level locking entirely in favor of something more granular. More details can be found in the collection-level locking ticket: https://jira.mongodb.org/browse/SERVER-1240
They are not fast; they are "normal speed" now. They were slow before. That was one of the problems that made me question the competency of the Mongo team.
MongoDB's staff have said they're mostly adding text search to satisfy a request they've heard over and over, but that anyone who really wants to do text search at scale should use something like Sphinx, Solr, or ElasticSearch.
Keep in mind that it is both new and experimental.
If you are comparing them based on functionality and performance you are missing the point.
It is my opinion that this easily bests external implementations by reducing moving parts and removing the asynchronous nature of the updates to search indexes.
The text index is updated atomically and in real time. Best of all, it's not another part of your stack that needs to be set up and maintained.
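For anyone who hasn't tried it, a minimal mongo-shell sketch of the 2.4 text search flow (collection and field names here are hypothetical, and in 2.4 the feature has to be enabled explicitly since it's experimental):

```
// Text search must first be enabled on the server:
//   mongod --setParameter textSearchEnabled=true
db.articles.ensureIndex({ body: "text" })
db.articles.runCommand("text", { search: "replica set" })
// Returns matching documents with relevance scores in results[].score.
```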
I was hoping the security enhancements would include SSL certificate validation. Anyone know why they don't do that, or how a user should approach that limitation?
We'll be moving away from MongoDB because it doesn't support certificate validation. What is the point of SSL connections if you don't validate the certificate? It seems that you get all the drawbacks of encryption (CPU overhead, reduced throughput) with none of the benefits (security).
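One workaround (my own suggestion, not something from 10gen): terminate TLS outside the driver with a local stunnel client, which does verify the server's certificate. A minimal sketch, assuming the server speaks TLS on port 27018 and your CA bundle lives at /etc/ssl/certs/ca.pem:

```
; stunnel client config (hosts and paths are illustrative)
[mongodb]
client  = yes
accept  = 127.0.0.1:27017       ; the driver connects here, locally
connect = db.example.com:27018  ; TLS out to the real server
verify  = 2                     ; require and verify the peer certificate
CAfile  = /etc/ssl/certs/ca.pem
```

The driver then connects to localhost in the clear while stunnel handles validated TLS to the server.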
I can't actually find any details about how this works in practice. Are multiple maps and reduces in a map-reduce executed simultaneously within a single mongod process? If so, how many? Is it based on the number of cores in the server?
Edit: not asking you specifically, just generally curious.
Yeah, this is a big deal. It opens up very interesting opportunities to create a map-reduce engine that isn't awful and doesn't require Java.
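To make the contrast concrete, here's what MongoDB-style map/reduce looks like when the functions are plain JavaScript. This sketch runs the two functions against an in-memory array so the semantics are visible without a server; collection and field names are made up:

```javascript
// In the shell this would be roughly:
//   db.events.mapReduce(map, reduce, { out: { inline: 1 } })
const docs = [{ tag: "db", n: 2 }, { tag: "db", n: 3 }, { tag: "web", n: 1 }];

const emitted = [];
function emit(key, value) { emitted.push([key, value]); } // the shell provides emit()

const map = function () { emit(this.tag, this.n); }; // `this` is the current document
const reduce = function (key, values) { return values.reduce((a, b) => a + b, 0); };

// Simulate what the server does: run map per document, group by key, reduce.
docs.forEach(d => map.call(d));
const groups = {};
for (const [k, v] of emitted) (groups[k] = groups[k] || []).push(v);
const results = Object.keys(groups).map(k => ({ _id: k, value: reduce(k, groups[k]) }));
console.log(JSON.stringify(results)); // [{"_id":"db","value":5},{"_id":"web","value":1}]
```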
People use Hadoop because they have to, not because it is great. Isn't it time for a better alternative now that the toothpaste is out of the tube re: map/reduce?
Java doesn't even have hash literal syntax! Why would you ever want to query document oriented data with it as your language of expression?
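For the sake of the comparison, a query document as a JavaScript literal versus the builder boilerplate the Java driver needs (field and collection names are hypothetical):

```javascript
// In JavaScript a query document is just an object literal:
const query = { status: "active", age: { $gt: 21 }, tags: { $in: ["db", "nosql"] } };
// In the shell: db.users.find(query)
//
// The rough Java-driver equivalent needs builder boilerplate, e.g.:
//   new BasicDBObject("status", "active")
//       .append("age", new BasicDBObject("$gt", 21))
//       .append("tags", new BasicDBObject("$in", Arrays.asList("db", "nosql")));
console.log(query.age.$gt); // 21
```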
This is called capacity planning. If you can't estimate what resources you use, I wouldn't expect great success scaling.
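A back-of-envelope sketch of that estimate; every number here is hypothetical, and the "hot fraction" is something you have to judge from your own access patterns:

```javascript
// Rule of thumb: RAM should cover the hot fraction of your data plus the
// indexes you actually query through (see db.stats() for dataSize/indexSize).
const totalDataGB = 200;
const hotFraction = 0.2;  // e.g. only the last month of data is read often
const indexSizeGB = 15;
const workingSetGB = totalDataGB * hotFraction + indexSizeGB;
console.log(workingSetGB); // 55
```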
Best feature for me
Best feature for me too.
That's going to save a lot of accidental "shit-the-whole-db-is-locked" pain.
I'd love to see a solution.