I've been using ArangoDB for a year now and I think they are definitely on to something cool.
Having stumbled upon some really complex data a few times now, I am increasingly appreciating how amazing it is to model your data any way you need, without having to deal with the complexity of running multiple data stores.
Cool to see that I apparently didn't give up any performance to get the flexibility. :)
I'd love to see them push the geospatial capabilities a little further, but they are already pretty decent.
This does not explain why ArangoDB is faster. Stating that "native multi-model" is a killer feature would be more interesting if it were explained what that means beyond "ArangoDB = graph + K/V + document stores". What is the difference between a graph vertex without edges and a document at the storage level? ArangoDB faster than WiredTiger? Suspicious.
Hi amirouche, I'm Frank, CTO of ArangoDB. Claudius' benchmark looked at queries occurring in a typical social network project. The tests show that WiredTiger is indeed a bit faster for reads and writes. "Neighbors of neighbors" is typically a question you would ask a graph database, not a document store. Therefore, you would set up two databases and ask MongoDB the document questions and Neo4j the graph questions. With a native multi-model approach, you only need to set up and maintain one database. The response times, for example for reads and shortest paths, are comparable to the specialized solutions.
As for the technical difference at the storage level: the graph and document models are, in my opinion, a perfect match, because a vertex (and an edge, for that matter) can be stored as an ordinary document. This allows you to run any document query (give me all users who live in Denver) and start graph queries from the vertices found that way (give me their first- and second-level friends).
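To make that concrete, here's a toy sketch in plain Python (made-up data, not ArangoDB's actual API): vertices are ordinary documents, so you can filter them like documents and then traverse edges from the matches.

```python
# Vertices are ordinary documents; edges are just (from, to) pairs.
users = {
    "alice": {"city": "Denver"},
    "bob": {"city": "Boston"},
    "carol": {"city": "Denver"},
    "dave": {"city": "Austin"},
}
friendships = [("alice", "bob"), ("bob", "dave"), ("carol", "dave")]

# Build an adjacency list once, so traversal is cheap pointer-chasing.
adjacency = {}
for a, b in friendships:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)

def friends_within(start, depth):
    """All users reachable from `start` in at most `depth` hops."""
    seen, frontier = {start}, {start}
    for _ in range(depth):
        frontier = {n for v in frontier for n in adjacency.get(v, ())} - seen
        seen |= frontier
    return seen - {start}

# Document query: users living in Denver.
denver = [u for u, doc in users.items() if doc["city"] == "Denver"]
# Graph query starting from those vertices: 1st- and 2nd-level friends.
result = {u: friends_within(u, 2) for u in denver}
```

The point is that the same records serve both query styles; nothing has to be exported from a document store into a separate graph store first.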
Some people build "graph databases" on top of storage backends that are ill-suited for such workloads. For example, you can build a "graph database" (or K/V store) on top of MySQL, but the performance is terrible - http://java.dzone.com/articles/mysql-vs-neo4j-large-scale
A "native graph database" is one that is actually designed for the task.
Basically, for me it has two aspects: first, the storage engine is designed to handle all models natively; second, you have a common query language which is supported by the database engine.
There are different approaches used in other products, and they can also work well. For example, you can restrict the database engine to a pure key/value store and add different personalities on top of it. Or you have a client that implements a common query language across different products.
I think the idea with "native multi-model" is that Arango was explicitly designed to do K/V, documents and graphs, rather than that support being bolted on afterwards.
OrientDB fails to deliver on its promises. It has a load of features but they are poorly thought out and/or broken.
ArangoDB is OrientDB done right, but it's a lot younger.
If you're considering using either, you owe it to yourself to investigate whether Postgres's Common Table Expressions [0] can do what you want instead. If you can stick with something more mature like Postgres, you'll be saving yourself a lot of pain.
[0] http://www.postgresql.org/docs/9.1/static/queries-with.html
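For the curious, here is a minimal sketch of the recursive-CTE approach, run against SQLite (same `WITH RECURSIVE` syntax family as Postgres; the table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE friendships (a TEXT, b TEXT);
    INSERT INTO friendships VALUES
        ('fred', 'alice'), ('alice', 'bob'), ('bob', 'steve');
""")

# Walk outward from 'fred', tracking how many hops away each person is.
rows = conn.execute("""
    WITH RECURSIVE reachable(person, depth) AS (
        SELECT 'fred', 0
        UNION
        SELECT f.b, r.depth + 1
        FROM friendships f JOIN reachable r ON f.a = r.person
    )
    SELECT person, depth FROM reachable ORDER BY depth
""").fetchall()

print(rows)
```

Each level of the recursion is effectively another join, which is exactly the cost graph databases try to avoid, but for moderately sized or shallow graphs it works fine.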
Hi, this is Frank from ArangoDB again. We've not added absolute times because they depend strongly on your machines and network. For the setup we used, the baseline (ArangoDB) is as follows: shortest path 0.5 sec, neighbors 0.15 sec, single read 26 sec, single write 27 sec, aggregations 1.4 sec, memory 12.5 GByte. Therefore a 16 GByte machine should have been enough, but we did not know that beforehand, so we selected the biggest machine on GCE available to us (thanks to Google for giving us credits to use the machines for free).
I agree: never trust a benchmark. It really all depends on your use case. If you have ideas for improvements, we would love to hear them. Also, if you have any idea how to improve the MongoDB or Neo4j queries, please check out the GitHub repo and let us know.
I don't understand the percentages either. They state it is a graph of _throughput_, yet higher percentages relative to the baseline mean less throughput? If I hadn't read the backwards description I would've concluded that their DB is really slow on most fronts.
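For what it's worth, the conversion the chart seems to imply (assuming the percentages are execution time relative to the baseline) is just a reciprocal:

```python
def relative_throughput(relative_time_pct):
    """Convert 'time as % of baseline' into 'throughput as % of baseline'."""
    return 100.0 / (relative_time_pct / 100.0)

# Taking 150% of the baseline's time means roughly 67% of its throughput;
# taking half the time means double the throughput.
assert round(relative_throughput(150)) == 67
assert relative_throughput(50) == 200.0
```

So a bar above 100% on that chart is a system doing *less* work per second than the baseline, which is the opposite of what a throughput chart usually shows.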
It's a database that stores a "graph" of vertices connected by edges. If you were, say, creating the next LinkedIn and wanted to find the shortest path between two users based on their connections, a graph database would be a good choice.
Let's imagine you want to see how Fred is connected to Steve; their network looks like this:
Diagram: http://yuml.me/6ff3074e
A "traditional" database like MySQL or Mongo makes this kind of query prohibitively expensive and complicated, as it must perform a new join for every connected person in the user's graph.
Graph databases come into their own because they are designed specifically for efficient traversal of these connecting edges. They typically do this by storing "pointers" on each vertex to its connected edges, so while a normal RDBMS requires something like a hash table lookup to resolve a join, a graph database can simply "jump" to the relevant record via a pointer. This means that things like Dijkstra's algorithm [0] can be implemented efficiently.
However, "traditional" graph databases like Neo4j require everything to be structured in terms of vertices and edges. This is often quite inconvenient, so multi-model databases like ArangoDB integrate the graph approach with a document store as well, the idea being that if you can keep everything in the same DB your app gets a lot simpler, you regain things like ACIDity that you'd normally lose by using two separate DBs, and performance should be a lot better too.
[0] http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
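A toy illustration of why the pointer layout matters (plain Python, nothing ArangoDB- or Neo4j-specific, with a made-up network): once each vertex holds direct references to its neighbours, finding a shortest path is just a breadth-first walk with no joins.

```python
from collections import deque

# Each vertex holds direct references to its neighbours -- the "pointers"
# a graph database stores instead of re-resolving a join per hop.
graph = {
    "fred":  ["alice", "bob"],
    "alice": ["fred", "carol"],
    "bob":   ["fred", "carol"],
    "carol": ["alice", "bob", "steve"],
    "steve": ["carol"],
}

def shortest_path(start, goal):
    """Unweighted shortest path via breadth-first search."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in graph[path[-1]]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(shortest_path("fred", "steve"))  # ['fred', 'alice', 'carol', 'steve']
```

In an RDBMS, each hop of that `while` loop would be another self-join on the friendships table; here it's a dictionary lookup per vertex.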
That would be cool. I'm not sure it would change much though. I know someone working with search data who recently tried out Neo4j with a test data set of 500,000,000 nodes and apparently was really disappointed with the results.
I'm not sure that graph data (generally) is particularly amenable to being spread across multiple nodes. My understanding is that ArangoDB has implemented some clustering based on Google's Pregel framework, so I suspect it might fare a bit better in my friend's test... but in spite of my urging, I don't know that he has had time to recreate the test with Arango. I'm keeping my fingers crossed.
I don't know if any database is fun to deal with at that size. My experience with Arango has been an unremarkable amount of remarkably complex data, so I would also be interested to see the results with something huge.
phpnode | 10 years ago
Nothing; most multi-model DBs store vertices (and edges) as documents.
maxdemarzi | 10 years ago
>>The uncompressed JSON data for the vertices need around 600 MB and the uncompressed JSON data for the edges requires around 1.832 GB.
So why use a 60GB RAM machine for so little data?
Can we get some raw numbers instead of %?
mercnet | 10 years ago
Check out http://neo4j.com/ and live examples of it in use at http://gist.neo4j.org/