Very cool and lightweight way to talk to PlanetScale, but for now I'll stick with Prisma. Prisma is much heavier (its engine weighs in at ~50MB), and that can be a non-starter for serverless in some cases, but it works for me on AWS Lambda.
They say in the post to use environment variables when deploying. I think they’re talking about server-side JS, not JS running in an end-user’s browser.
This is great! Potentially the missing piece in the world of "serverless": a serverless SQL database accessible over HTTP with no minimum price. I saw AWS Aurora has a Data API but currently that does have a minimum monthly cost. Fauna is possibly the closest, but that appears to involve translating SQL to its FQL. How stable is the beta?
It's 90% there... but the lack of foreign key constraints [1] (and thus cascading deletes) puts this solution in the same bucket as DynamoDB (also serverless, also scales to zero): products will either be built for it from day one, or they'll never support it, having been designed with a full-fledged, always-up DB server in mind. Unfortunately, I didn't find a lot in between.
I mean, I'm using PlanetScale with Prisma ORM on AWS Lambda right now without issue. Unlike many DB providers, PlanetScale supports nearly unlimited connections; I think I saw in their docs that they have a soft limit of something like 250K connections.
Our beta for this is pretty stable. The only real instabilities are around the underlying APIs we use, which is part of why we're not ready to document it just yet.
This is huge for me. I went down the path of trying to build an entire API on CF Workers, and the biggest stumbling block was no easy access to external relational databases due to a lack of V8-compatible connectors. This was before the D1 announcement, of course.
To my knowledge you can use PlanetScale's read replicas in different regions, with each user routed to the closest one? I was going to use fly.io until I realized PlanetScale already does this. Or do you mean writes as well?
Please consider using a custom string interpolation function to sanitize SQL, not an additional array of parameters. See styled components for inspiration.
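For illustration, a tagged-template helper along those lines might look like the sketch below. Everything here is hypothetical: the `sql` tag and `sqlEscape` are made-up names, not part of PlanetScale's actual API, and in real code escaping is better left to the driver.

```javascript
// Hypothetical tagged-template helper: interpolated values are escaped
// before being spliced into the SQL string, so callers write plain
// template literals instead of passing a separate parameter array.
function sqlEscape(value) {
  if (value === null) return "NULL";
  if (typeof value === "number") return String(value);
  // Quote strings and escape embedded single quotes.
  return "'" + String(value).replace(/'/g, "''") + "'";
}

function sql(strings, ...values) {
  return strings.reduce(
    (query, part, i) =>
      query + part + (i < values.length ? sqlEscape(values[i]) : ""),
    ""
  );
}

const query = sql`SELECT * FROM users WHERE name = ${"O'Brien"}`;
console.log(query); // SELECT * FROM users WHERE name = 'O''Brien'
```

If I'm reading the database-js README right, its `execute` does also accept bound parameters as a second argument, so the array-of-parameters style is what the driver itself supports today.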
Great question! Author of a lot of the infrastructure surrounding this here.
joshstrange|3 years ago
The nice thing about PlanetScale is you get nearly unlimited connections (soft limit of like 250K IIRC) so making 1 connection per active lambda isn't a problem at all.
I've been using PlanetScale since shortly after they went GA and I've been very happy so far. Cheaper than Aurora Serverless, less hassle, and the branching feature is super cool. Zero-downtime deploys, with rollback support, feel magical.
mattrobenolt|3 years ago
This specifically is targeting environments where a MySQL client isn't able to run.
avinassh|3 years ago
How does this work?
mandeepj|3 years ago
import { connect } from '@planetscale/database'

const config = { host: '<host>', username: '<user>', password: '<password>' }

const conn = connect(config)
const results = await conn.execute('select 1')
https://github.com/planetscale/database-js
thundergolfer|3 years ago
jakear|3 years ago
The idea is that these runtimes typically only allow outbound HTTP requests, not the binary protocol normally used to connect to a PlanetScale database.
mattrobenolt|3 years ago
gsanderson|3 years ago
yonixw|3 years ago
[1] https://planetscale.com/docs/learn/operating-without-foreign...
joshstrange|3 years ago
I used the Aurora Data API before moving off Aurora Serverless (insane pricing) to Prisma and PlanetScale, and I don't think I'd go back to the HTTP API, as Prisma works very well and I enjoy using it. One downside to Prisma is that the DB "engine" is an arch-specific (obviously) binary that is pretty hefty, ~50MB if I remember right. That can be a killer in serverless, but I was able to work around it without much issue. If I ever wanted to dive into the world of Lambda layers I could probably move it into its own layer (however, that still counts towards your max Lambda size).
mattrobenolt|3 years ago
But the underlying tech is exactly the same as we use for handling traditional MySQL connections, so there isn't anything to fear.
k__|3 years ago
alexcroox|3 years ago
higgins|3 years ago
If so, and you used it, what was your experience running Fauna in CF Workers?
_ben_|3 years ago
asadawadia|3 years ago
rsweeney21|3 years ago
It seems like you lose most of the benefit of your code running at the edge if your database is still in an AWS region.
athammer|3 years ago
https://planetscale.com/docs/concepts/read-only-regions
mattrobenolt|3 years ago
jackblemming|3 years ago
asadawadia|3 years ago
astrobe_|3 years ago
mattrobenolt|3 years ago
brundolf|3 years ago
mattrobenolt|3 years ago
I plan on following up with a more technical deep dive on some of these aspects, but it's quite a bit hard to do a 1:1 comparison.
While, obviously, HTTP also uses TCP, I'm going to assume you're asking more about the binary MySQL protocol vs HTTP.
On the surface, yes, HTTP is going to have more overhead with headers and the other aspects that come with HTTP. Also, in this case, the payload is JSON, both on the request and response, which is going to be slightly more bulky than the payloads over the MySQL protocol.
So where things get interesting is the real-world performance implications. Networks and CPUs are really fast, and HTTP and JSON have both been scrutinized extremely heavily. While they are admittedly not the most efficient standards, they are used so widely that parsers for them are heavily optimized.
Mixing in modern TLS 1.3 with modern ciphers, which is something you're very unlikely to get with a traditional MySQL client, we can achieve a much faster first-connection time. And with the modern ciphers that TLS 1.3 mandates, the transport itself can be significantly faster than with a slower cipher. This isn't the best comparison, since typically you're using MySQL without TLS, but when talking to a service provider like us, we require it for obvious reasons.
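To make the first-connection point concrete, here is some back-of-envelope round-trip arithmetic. The 50 ms RTT is an assumed, illustrative number; real handshakes vary, and TLS 1.3 0-RTT resumption can do even better.

```javascript
// Handshake cost before the first byte of application payload,
// at an assumed 50 ms client-server round-trip time.
const rtt = 50;                  // ms per round trip (assumed)
const tcpHandshake = 1 * rtt;    // TCP 3-way handshake: one round trip
const tls12 = 2 * rtt;           // TLS 1.2 full handshake: two round trips
const tls13 = 1 * rtt;           // TLS 1.3 full handshake: one round trip

console.log(tcpHandshake + tls12); // 150 ms to first payload byte
console.log(tcpHandshake + tls13); // 100 ms, i.e. one full RTT saved
```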
Next, with HTTP, we get to leverage compression. In our case, we can use browser-native compression with gzip, or, if using something server-side, we support compression schemes like snappy. Combine gzip or snappy with HTTP/2, and, given that the bulk of a query response is typically the result itself rather than the protocol, compression can make up a decent amount of the difference. Again, I'm being hand-wavy, since query patterns and results will vary wildly based on your data, so it wouldn't be fair to say "this is x% more or less efficient."
And lastly, with HTTP, especially HTTP/2, we can multiplex connections. So, similar to the gains with TLS 1.3, you can run many, many concurrent database sessions across one actual TCP connection with HTTP/2. And as with TLS 1.3, the cost being avoided here is in connection timings and the management of a connection pool. You don't need a pool of connections on the client, so your startup times can be reduced.
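A toy illustration of the multiplexing point: the connection below is a stand-in with simulated latency, not the real driver, but it shows why three in-flight queries cost roughly one round trip's worth of time rather than three.

```javascript
// Stand-in for a database-js-style connection: each execute() is an
// independent HTTP request, so nothing serializes them on the client.
const conn = {
  async execute(query) {
    await new Promise((resolve) => setTimeout(resolve, 50)); // fake 50 ms of network
    return { query };
  },
};

async function main() {
  const start = Date.now();
  // Over HTTP/2 these would share one TCP connection; there is no
  // client-side pool to size, warm up, or check connections back into.
  await Promise.all([
    conn.execute("SELECT 1"),
    conn.execute("SELECT 2"),
    conn.execute("SELECT 3"),
  ]);
  console.log(`elapsed ~${Date.now() - start} ms`); // ~50 ms, not ~150 ms
}

main();
```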
As a stretch goal, HTTP/3 (with QUIC) is in the crosshairs, which should eliminate some more transport cost, since it uses UDP rather than TCP. My hunch is that, with all of this combined, an optimized driver leveraging HTTP/3 might beat out a MySQL client overall. Time will tell though!
So there's no simple answer here, everything has tradeoffs. :)
tdy721|3 years ago
mattrobenolt|3 years ago
So we work there just fine.
theobr|3 years ago
tmikaeld|3 years ago
mandeepj|3 years ago
It's open-source. What's your concern now?
https://github.com/planetscale/database-js