davidkell's comments
davidkell | 5 years ago | on: InfluxDB is betting on Rust and Apache Arrow for next-gen data store
Nice job btw.
[0] https://hobbes.readthedocs.io/en/latest/language/types.html#...
davidkell | 5 years ago | on: How to GraphQL with Ruby, Rails, Active Record, and No N+1
In principle, I love the idea of Postgraphile but this is what turned us off.
davidkell | 5 years ago | on: How to Recalculate a Spreadsheet
At the same time, we’ve struggled to find decent reference material or libraries for building a reactive framework for X (for our use case, X = data science workflows). Most of these libraries seem to implement all the primitives from scratch.
I’d be interested to hear other people’s thoughts on this!
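The core recalculation primitive is actually quite small. Here is a minimal sketch of it, assuming nothing beyond the Python stdlib - the names (`Graph`, `set_formula`) are invented for illustration and don't come from any particular library. Cells form a dependency graph, and setting a value re-evaluates downstream cells in topological order:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

class Graph:
    def __init__(self):
        self.values = {}    # cell name -> current value
        self.formulas = {}  # cell name -> (func, dependency names)

    def set_value(self, name, value):
        self.values[name] = value
        self._recalculate()

    def set_formula(self, name, deps, func):
        self.formulas[name] = (func, deps)
        self._recalculate()

    def _recalculate(self):
        # Re-evaluate formula cells in topological order, so every
        # cell sees up-to-date inputs. (A production library would
        # only recompute the dirty subgraph.)
        ts = TopologicalSorter(
            {name: deps for name, (_, deps) in self.formulas.items()}
        )
        for name in ts.static_order():
            if name in self.formulas:
                func, deps = self.formulas[name]
                self.values[name] = func(*(self.values[d] for d in deps))

g = Graph()
g.set_value("a", 2)
g.set_value("b", 3)
g.set_formula("sum", ["a", "b"], lambda a, b: a + b)
g.set_formula("double", ["sum"], lambda s: 2 * s)
g.set_value("a", 10)  # downstream cells update automatically
print(g.values["double"])  # 26
```

Everything beyond this - incremental/dirty-node recomputation, cycle detection, async cells - is where the libraries diverge, which may be why they all rebuild the primitives from scratch.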
davidkell | 5 years ago | on: How to GraphQL with Ruby, Rails, Active Record, and No N+1
There are definite downsides versus eg REST (notably performance, which becomes harder to reason about), but it’s an acceptable trade-off for us.
I’m also optimistic because the tooling keeps improving - eg Hasura, Postgraphile and Graphene-SQLAlchemy all solve N+1 today.
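For anyone unfamiliar with how those tools avoid N+1: the common idea is batching. Here is a stripped-down, stdlib-only sketch of the pattern (the names `BatchLoader` and `fetch_authors` are illustrative, not from any of the libraries mentioned): resolvers enqueue keys, and a single batched fetch runs instead of one query per parent row.

```python
class BatchLoader:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # takes a list of keys, returns a list of values
        self.queue = []           # pending (key, slot) pairs

    def load(self, key):
        slot = {}                 # will receive the value on dispatch
        self.queue.append((key, slot))
        return slot

    def dispatch(self):
        # One batched call for all queued keys -> 1 query instead of N.
        keys = [k for k, _ in self.queue]
        for (key, slot), value in zip(self.queue, self.batch_fn(keys)):
            slot["value"] = value
        self.queue.clear()

calls = []
def fetch_authors(ids):
    calls.append(ids)  # stand-in for "SELECT ... WHERE id IN (...)"
    return [f"author-{i}" for i in ids]

loader = BatchLoader(fetch_authors)
slots = [loader.load(i) for i in [1, 2, 3]]  # three resolvers ask for authors
loader.dispatch()                            # ...but only one fetch happens
print(len(calls), [s["value"] for s in slots])
```

The real libraries wrap this in promises/futures and hook dispatch into the GraphQL execution cycle, but the batching trick is the same.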
davidkell | 5 years ago | on: Programs are a prison: Rethinking the building blocks of computing interfaces
- Open source data science/scientific computing ecosystems. Notably python, where all the libraries interop seamlessly via numpy/pandas/arrow and Jupyter is the visual coding platform. But also R/tidyverse and Julia.
- Modern “no-code” tools, where the visual coding is Notion/Coda/Bubble, interfaces via Zapier/Integromat/Autocode and data models in Airtable/Sheets. (Many of these tools use the word “block” as part of the UX)
And ofc, we take it for granted but the concept of a “file” is the ultimate building block for applications.
In my experience, commercial disincentives aside, the main trade off for this power/flexibility is the complexity. It is intimidating for new users, and hard to design well for because of the combinatorial explosion of interactions. Users need to be strongly motivated to get over this complexity hump - whereas most users, most of the time want a single happy path. Personally I don’t see this as a negative thing - you are essentially coding best practices into the tool.
As an aside, the instant-feedback coding in python looks fantastic! It could make a great extension, eg for JupyterLab or VSCode.
davidkell | 5 years ago | on: Why not use GraphQL?
For type safety, many backend frameworks generate OpenAPI specs automatically, and you can generate Typescript stubs based on this. Ditto for gRPC and gRPC web. We use these.
But I’ve not seen a replacement for the “application data graph” (but would be interested to learn about them!). The link from @jefflombardjr [0] explains it nicely - it is a great abstraction (in certain situations). Modelling your application data graph is like modelling a good database schema - when you get it right, the rest of the application follows naturally. It’s magical when it works, and I’d happily do it even if it’s just me working on the project.
And GraphQL has a great ecosystem, that is the advantage over niche tools. Example - last week we added auto-generated GraphQL types and relationships for Postgres JSON fields, with the help of [1]. No more malformed JSON breaking our app.
Note this is all in the context of a web app. Reading other comments, the tooling seems to be less developed on other platforms. And again, without decent tooling (especially for the server) I wouldn’t touch it.
[0] https://medium.com/@JeffLombardJr/when-and-why-to-use-graphq...
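For concreteness, here is a hypothetical schema fragment (types and fields invented for illustration, not an actual schema) showing what modelling the application data graph looks like - relationships are first-class, so clients traverse them in a single query:

```graphql
type Project {
  id: ID!
  name: String!
  tasks: [Task!]!     # relationship, traversable in one query
}

type Task {
  id: ID!
  title: String!
  project: Project!   # back-reference
}

type Query {
  project(id: ID!): Project
}
```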
davidkell | 5 years ago | on: Why not use GraphQL?
- Auto-generated, type safe entities from source to client, including relationships = fewer bugs
- Ability to unify different backends (eg a database, warehouse, external APIs, cloud storage)
- The “application data graph” concept always brings huge clarity to the architecture design - you get to build your mental model in code
- GraphiQL is excellent
Having said this, most of the benefits come from the incredible tooling, eg for our stack Graphene, Graphene-X bridges, Typescript, Apollo. I would never consider writing a GraphQL server from scratch, and we’ve had bad experiences with instant database -> GraphQL solutions (not really keen on writing my application logic in SQL). It’s also not an either/or - our current app uses REST for file uploads and for streaming large datasets to the client. And ofc, you can achieve many of these benefits with other solutions.
Wundergraph looks like a great addition to the ecosystem, it would already remove boilerplate from our app.
davidkell | 6 years ago | on: Ask HN: Who is hiring? (August 2019)
Gyana is a technology for doing data science, the way it was meant to be done.
Imagine the child of Notion, Excel and Tableau, with the capacity to analyse a billion rows on your laptop.
We are hiring engineers across frontend, backend and data science.
Tech stack is Electron, Typescript, React, Python, C++, Kubernetes.
Passion for data science and design goes a long way.
More information https://angel.co/company/gyana/jobs
I’m the CTO - feel free to email me at [email protected]
davidkell | 6 years ago | on: Ask HN: Who is hiring? (May 2019)
Gyana is real-life Sim City.
We are building a model of the physical world economy through geospatial IoT data. Our customers can access it via APIs, data dumps and our web app, which is like Bloomberg for the physical economy.
Tech stack: React, Typescript, Django, K8s, C++, q/kdb, pytorch.
Challenges: geospatial, petabytes, deep learning, data visualisation.
I'm the CTO! Feel free to PM or email if you want to know more ([email protected]).
davidkell | 7 years ago | on: Ask HN: Who is hiring? (November 2018)
Weather prediction got good when we started quantifying the physical world. We're doing the same for economics, through the power of machine-generated big data about human activity. It's an interactive analytics tool that quantifies the performance of physically located assets.
We're a friendly and diverse bunch based in Moorgate, central London. We have a growing base of customers/evangelists, revenue and top VC funding.
Tech stack: React, Typescript, GraphQL, Django, Postgres, C/C++, q/kdb+, Kubernetes, GCP.
I'm the CTO - feel free to reach out to me at [email protected]