guodong | 3 years ago
For scalability, we can scale to several hundred GB, and we routinely test on LDBC datasets up to 300GB. Our goal is to support efficient querying over data at TB scale.
Right now, we only support CSV import. We are currently working on integrating Apache Arrow, and we aim to support more data formats through it. Hopefully that will let us support Parquet, JSON, etc., as sketched below.
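For a rough picture of why Arrow helps here, below is a minimal pyarrow sketch (not our engine's actual import code; the file names are made up) showing how CSV, Parquet, and line-delimited JSON all land in the same Arrow Table representation, so one ingestion path can serve several formats:

    import pyarrow.csv as pacsv
    import pyarrow.parquet as pq
    import pyarrow.json as pajson

    # Today's path: CSV into an Arrow Table.
    table = pacsv.read_csv("nodes.csv")

    # Once Arrow is integrated, other formats yield the same Table type:
    # table = pq.read_table("nodes.parquet")
    # table = pajson.read_json("nodes.json")  # newline-delimited JSON

    # Downstream import logic only sees one schema-carrying representation.
    print(table.schema)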
Built-in graph algorithms are coming along, but step by step. We are focusing on shortest path queries for now; a toy illustration follows.
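To make concrete what a shortest path query computes, here is a self-contained BFS sketch for unweighted shortest paths in plain Python. This is purely illustrative of the semantics, not how our engine evaluates such queries:

    from collections import deque

    def shortest_path(adj, src, dst):
        """Unweighted shortest path via BFS; adj maps node -> neighbor list."""
        prev = {src: None}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                # Walk predecessors back to src to reconstruct the path.
                path = []
                while node is not None:
                    path.append(node)
                    node = prev[node]
                return path[::-1]
            for nbr in adj.get(node, []):
                if nbr not in prev:
                    prev[nbr] = node
                    queue.append(nbr)
        return None  # dst unreachable from src

    adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    print(shortest_path(adj, "a", "d"))  # ['a', 'b', 'd']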
As always, any suggestions and discussions on these are welcome.