Do you have documentation on how to get started with Cayley? I was reading this: https://cayley.io/1-getting-started/ but it's a little too short for my taste. :)
We have Elasticsearch as a generic document search engine; each document has a non-trivial number of properties (let's say 50 or so). It's incredibly performant for all sorts of searches, including ones the solution wasn't specifically designed for, hence me calling it a generic search engine. Every time I contemplate bringing the graph relationships that exist between these documents into the mix, I get stuck. Elasticsearch doesn't quite do graph (natively), but the graph databases I've tried (OrientDB, Neo4j) don't handle properties too well. I'm not talking about one or two properties, but multiple properties across the multiple hops in the graph that I envision querying for. Let alone full-text searches. I emailed back and forth with the helpful folks at Orient, but it always came down to optimizing for specific queries, and there goes the "generic". Is anyone solving that problem? Cayley?
Dgraph's retrieval is pretty fast, so looking up properties is trivial. It also supports indexing various data types: full-text search, term matching, and regexps on strings; inequality and sorting on ints, floats, dates, etc.
https://docs.dgraph.io/v0.7.4/query-language/#functions
One of our users is on the path to switching to Dgraph from Elasticsearch. So I'd say try Dgraph out and see if it helps your use case; I think it should. And if Dgraph is missing something you need from ES, feel free to file an issue. Happy to address it.
Sounds like something ArangoDB could be a good solution for. Full disclosure: I'm from the ArangoDB team and happy to help. If you like, just drop me a line at jan.stuecke (at) arangodb.com
What exactly is this? The GitHub page speaks of different backends, and those appear to just be databases or key-value stores in themselves (e.g., Postgres and Bolt).
Is Cayley basically a query rewriter? That is, it keeps some tables in the backend, and when queried, Cayley goes to the "real" (for lack of a better word) database? Cayley's query language might be more full-featured, but it isn't a storage mechanism in itself?
Two things follow from that:
1. There is no way for Cayley to take the graph structure of the data into account when laying it out on disk or when executing a query. Is this a long-term decision, or just a stop-gap until a native storage mechanism can be built?
2. This would seem to imply that the abstraction layer from Cayley to the backend storage would be relatively slim. How difficult is it to add another storage driver for another SQL database or for one with a custom query language?
Another thing I noticed:
> query -- films starring X and Y -- takes ~150ms
Even on two-year-old hardware that seems dog slow (fewer than 7 queries per second) for a very simple query.
Cayley's graph data layout is most similar to a Hexastore-style [1] triple store, though IIRC it doesn't do the full six-way index that the original Hexastore paper describes. The Redis page on secondary indexing [2] has a great quick intro to what this actually entails (search the page for Hexastore).
As you might guess from the Redis link, this style of graph lends itself well to KV stores, so the answer to your question #1 might be that it's a long-term decision, but the style of graph is really designed for a KV store anyway. But I haven't discussed this at all with the Cayley devs so I can't actually speak for them.
I'm using it with the BoltDB backend and have been pleased with the performance overall. I haven't looked at the backends for more complex databases like Postgres in detail, but it does appear that the backend interface has potential for predicate pushdown as well. The repository's graph directory [3] contains the various backends if you want to check it out. Overall it doesn't look very difficult to add another backend type, but I haven't tried it yet. Looking at the existing SQL backend, it appears to already support MySQL, PostgreSQL, and CockroachDB (but I've tried none of these with Cayley).
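To make the Hexastore idea concrete, here is a toy in-memory sketch in Go (my own illustration, not Cayley's actual code): every triple is written under all six orderings of (subject, predicate, object), so any lookup pattern reduces to a prefix scan over sorted keys.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// A toy hexastore: each triple is indexed under all six
// orderings of (subject, predicate, object), so any lookup
// pattern reduces to a prefix scan over sorted keys.
type Hexastore struct{ keys []string }

func key(order, a, b, c string) string {
	return order + "\x00" + a + "\x00" + b + "\x00" + c
}

func (h *Hexastore) Add(s, p, o string) {
	h.keys = append(h.keys,
		key("spo", s, p, o), key("sop", s, o, p),
		key("pso", p, s, o), key("pos", p, o, s),
		key("osp", o, s, p), key("ops", o, p, s),
	)
	sort.Strings(h.keys) // a real store keeps a sorted structure (B-tree, LSM) instead
}

// Scan returns the remaining components for keys matching a prefix,
// e.g. Scan("spo", "alice", "follows") lists everything alice follows.
func (h *Hexastore) Scan(order string, parts ...string) [][]string {
	prefix := order + "\x00" + strings.Join(parts, "\x00") + "\x00"
	var out [][]string
	i := sort.SearchStrings(h.keys, prefix)
	for ; i < len(h.keys) && strings.HasPrefix(h.keys[i], prefix); i++ {
		out = append(out, strings.Split(strings.TrimPrefix(h.keys[i], prefix), "\x00"))
	}
	return out
}

func main() {
	h := &Hexastore{}
	h.Add("alice", "follows", "bob")
	h.Add("alice", "follows", "carol")
	h.Add("bob", "follows", "carol")
	// Who does alice follow? Prefix scan on the subject-predicate index.
	fmt.Println(h.Scan("spo", "alice", "follows"))
	// Who follows carol? Prefix scan on the object-predicate index.
	fmt.Println(h.Scan("ops", "carol", "follows"))
}
```

This is why the layout suits KV stores so well: "add a quad" is a handful of puts, and every query shape is an ordered range scan, which is the one operation KV stores are universally good at.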
Anyone use Cayley in prod? An old job used Neo4j, and the graph concept was great for specific use cases. As a lightweight graph store, Cayley was really exciting when it came out, but I haven't had a need for it since I left that job. It strikes me as really well made, and I'd love to hear any war stories.
Tried to use it in production a couple of years ago hosting a mirror copy of Freebase with mixed results:
- There were a couple of issues loading the data, which we fixed and contributed back as a patch
- Loading the data was really slow, and it got slower with every new entry added (loading the full Freebase dump took a week on a very beefy machine with an SSD, using LevelDB)
- The queries were also relatively slow. Without going into too much detail: we were using the data to analyze texts and extract entities and the relationships between them, and even with the queries parallelized they remained relatively slow (between 0.1 and 1 second on average, depending on complexity). We solved the issue by implementing a robust caching layer in front of it and carefully planning the queries.
- In general, it was stable and performant enough for a backend service. But we were really pushing the envelope of what it could do.
All in all, I would say I was happy with it. In comparison, I had tried a year earlier to use Neo4j in a similar role, and I gave up after two weeks because I wasn't even able to load part of the dataset without crashes on similar hardware.
I can't understand how to use the query language. It all seems so magical!
I tried building something with Cayley once but couldn't fetch all the data I wanted in a single query, or didn't know how to, then got frustrated and deleted everything.
Feel free to ask more questions through whatever channel you like; we need better docs for sure, but if you're lost we have a really friendly community that's happy to help.
Which of the three query languages are you having trouble with? All of them? MQL has been around a long, long time (since 2006). Gizmo is newer, but based on and very similar to Gremlin (2009). GraphQL is the newest (2015). Did you try them all, or is one in particular rough?
The thing badwolf brings to the table (and respect to the author, a super nice fellow) is adding metadata (namely, a timestamp) to the links.
The topic of 'reification', which runs throughout our recent discussions, is how to add metadata to links in general, thereby making it a lot easier to fit the two models together.
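For readers unfamiliar with the term, here is a minimal Go sketch of what reification means in practice (this is the standard RDF-style approach, not necessarily how badwolf or Cayley represent it internally; all names are illustrative): the statement itself becomes a node, and metadata like a timestamp hangs off that node as ordinary triples.

```go
package main

import "fmt"

// Reification sketch: instead of a bare edge (alice)-[follows]->(bob),
// the statement itself becomes a node, so metadata (here a timestamp)
// can attach to it like any other property.
type Statement struct {
	ID        string
	Subject   string
	Predicate string
	Object    string
	Meta      map[string]string // e.g. "since": "2017-03-01"
}

// Triples expands a reified statement into plain triples, so a
// triple store that knows nothing about metadata can still hold it.
func (st Statement) Triples() [][3]string {
	out := [][3]string{
		{st.ID, "subject", st.Subject},
		{st.ID, "predicate", st.Predicate},
		{st.ID, "object", st.Object},
	}
	for k, v := range st.Meta {
		out = append(out, [3]string{st.ID, k, v})
	}
	return out
}

func main() {
	st := Statement{
		ID: "stmt1", Subject: "alice", Predicate: "follows", Object: "bob",
		Meta: map[string]string{"since": "2017-03-01"},
	}
	for _, t := range st.Triples() {
		fmt.Println(t[0], t[1], t[2])
	}
}
```

The cost is obvious from the sketch: one edge becomes four or more triples, and every traversal across that edge becomes an extra hop through the statement node, which is exactly the modeling tension being discussed here.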
I used it in the past with the Freebase dump, and it was amazingly stable (also two years ago). I was using LevelDB on a pretty beefy machine. The big issue was the time needed to load the data; at the time, the MongoDB backend wasn't good enough. I posted my experience in a response elsewhere in the thread.
barakm | 9 years ago
We've got a lot of new features on master (GraphQL support, Gephi interfaces, recursive iterators, etc.) and are cutting a release next week.
Active work in the coming releases is going into tightening down the indexing and really bringing it into prod.
EDIT: Feel free to join the new Slack or the Discourse mailing list/discussion board!
scrollaway | 9 years ago
Cool to see Cayley on HN again :) Pretty excited to use it some time.
nickstefan12 | 9 years ago
Specifically, you could use PostgreSQL for the edge traversal and its jsonb column type to store searchable attributes.
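A minimal in-memory Go sketch of that pattern (in PostgreSQL the edges would live in an ordinary table and the attributes in a jsonb column filtered with the `@>` containment operator; the node names and attributes here are made up for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Sketch of the pattern: an edge list for traversal plus a JSON
// attribute document per node. In PostgreSQL, edges would be a
// (src, dst) table and attrs a jsonb column; this models the same
// shape in memory.
type Graph struct {
	edges map[string][]string
	attrs map[string]map[string]any
}

func NewGraph() *Graph {
	return &Graph{edges: map[string][]string{}, attrs: map[string]map[string]any{}}
}

func (g *Graph) AddNode(id, attrsJSON string) {
	var a map[string]any
	_ = json.Unmarshal([]byte(attrsJSON), &a) // sketch: ignore malformed JSON
	g.attrs[id] = a
}

func (g *Graph) AddEdge(src, dst string) { g.edges[src] = append(g.edges[src], dst) }

// Neighbors returns nodes one hop from src whose attribute `key`
// equals `want`: the moral equivalent of joining the edge table
// against a jsonb predicate on the target node.
func (g *Graph) Neighbors(src, key string, want any) []string {
	var out []string
	for _, dst := range g.edges[src] {
		if g.attrs[dst][key] == want {
			out = append(out, dst)
		}
	}
	return out
}

func main() {
	g := NewGraph()
	g.AddNode("doc1", `{"lang": "en", "status": "published"}`)
	g.AddNode("doc2", `{"lang": "de", "status": "published"}`)
	g.AddNode("doc3", `{"lang": "en", "status": "draft"}`)
	g.AddEdge("doc1", "doc2")
	g.AddEdge("doc1", "doc3")
	fmt.Println(g.Neighbors("doc1", "lang", "en"))
}
```

For multi-hop traversals the PostgreSQL version would typically use a recursive CTE over the edge table, applying the jsonb filter at each hop.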
dimfeld | 9 years ago
[1] http://www.vldb.org/pvldb/1/1453965.pdf [2] https://redis.io/topics/indexes [3] https://github.com/cayleygraph/cayley/tree/master/graph
zmanian | 9 years ago
1. Use Cayley as a library.
2. Put metadata in separate nodes.
wut42 | 9 years ago
https://github.com/google/badwolf
maxdemarzi | 9 years ago
They are better suited to graph databases, because the queries tend to involve many joins traversing paths both deep and wide.