top | item 45880665


mariopt | 3 months ago

Been using CF Workers with JavaScript and I absolutely love it.

What is the performance overhead of WASM compared to natively compiled Rust?

I also think the time for a FOSS alternative is coming. Serverless with virtually no cold starts is here to stay, but being tied to a single vendor is problematic.


laktek | 3 months ago

> I also think the time for a FOSS alternative is coming. Serverless with virtually no cold starts is here to stay, but being tied to a single vendor is problematic.

Supabase Edge Functions runs on the same V8 isolate primitive as Cloudflare Workers and is fully open-source (https://github.com/supabase/edge-runtime). We use the Deno runtime, which supports Node built-in APIs, npm packages, and WebAssembly (WASM) modules. (disclaimer: I'm the lead for Supabase Edge Functions)

mariopt | 3 months ago

It would be interesting if Supabase let me use that runtime without forcing me to use Supabase itself, as a separate standalone product.

Several years ago, I used MeteorJS, which uses Mongo and is somewhat comparable to Supabase. The main issue that burned me and several projects was that it was hard, or even impossible, to bring in different libraries. It was a full-stack solution that did not evolve well: great for prototyping, until it became unsustainable and even hard to onboard new devs, mostly due to the big learning curve of one big framework.

Having learned from this, I only build apps where I can bring in whatever library I want. I need tools/libraries/frameworks to be as agnostic as possible.

The thing I love about Cloudflare Workers is that you are not forced to use any other CF service. I have full control of the code, I combine it with HonoJS, and I can deploy it as a server or serverless.
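That portability can be sketched without any framework at all. A Workers-style module is essentially an object with a `fetch` method that takes a `Request` and returns a `Response`; the handler and route names below are illustrative, and the same object could be handed to a Workers-compatible runtime or exercised directly in Node 18+, where `Request`, `Response`, and `URL` are globals:

```javascript
// Sketch of a runtime-agnostic, Workers-style handler (illustrative names).
const handler = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      return new Response(JSON.stringify({ msg: "hello" }), {
        headers: { "content-type": "application/json" },
      });
    }
    return new Response("not found", { status: 404 });
  },
};

// Exercise the handler with no platform attached:
handler.fetch(new Request("http://localhost/hello")).then(async (res) => {
  console.log(res.status, await res.text()); // 200 {"msg":"hello"}
});
```

Because the handler only depends on standard web APIs, swapping the hosting layer (Workers, workerd, a Node server wrapper) does not touch the application code.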

About the runtimes: having to choose between Node, Deno, and Bun is something I do not want to do. I'm sticking with Node, and hopefully the runtimes will stay compatible with standard JavaScript.

kevincox | 3 months ago

It surely depends on your use case. Testing my Ricochet Robots solver (https://ricochetrobots.kevincox.ca/), which is pure computation with effectively no IO, the speed is basically indistinguishable. On some runs the WASM is faster; on others the native is faster. On average the native is definitely faster, but surprisingly it is within the noise.

Last time I compared (about 8 years ago), WASM was closer to double the runtime, so things have definitely improved. (I had to check a handful of times that I was compiling with the correct optimizations in both cases.)
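For anyone who wants to reproduce a tiny version of this kind of comparison, here is a self-contained sketch: it instantiates a hand-assembled WASM module exporting an `add` function (not the solver above — purely illustrative), then runs the same pure computation as plain JS and as WASM:

```javascript
// Hand-assembled WASM module exporting add(a, b) -> a + b on i32s.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

const addJs = (a, b) => (a + b) | 0; // plain JS baseline

WebAssembly.instantiate(wasmBytes).then(({ instance }) => {
  const addWasm = instance.exports.add;
  console.log(addJs(2, 3), addWasm(2, 3)); // 5 5

  // Crude timing; a real comparison needs warmup and many repetitions.
  for (const [name, fn] of [["js", addJs], ["wasm", addWasm]]) {
    const t0 = performance.now();
    let acc = 0;
    for (let i = 0; i < 1_000_000; i++) acc = fn(acc, 1);
    console.log(name, (performance.now() - t0).toFixed(1), "ms, acc =", acc);
  }
});
```

Note that on a trivial function like this, most of the measured cost is the JS-to-WASM call boundary rather than the computation itself, which is exactly the kind of overhead being discussed.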

pmarreck | 3 months ago

The stats I've seen show a 10-20% loss in speed relative to natively compiled code, which is effectively noise for all but the most performance-critical paths.

It may get even closer with WASM 3.0, released two months ago, since it adds things like 64-bit address space support, more flexible vector instructions, typed references (which remove some runtime safety checks), basic GC, etc. https://webassembly.org/news/2025-09-17-wasm-3.0/

kentonv | 3 months ago

The Cloudflare Workers runtime is open source: https://github.com/cloudflare/workerd

People can and do use this to run Workers on hosting providers other than Cloudflare.

yencabulator | 3 months ago

It's also worth noting that workerd is only one part of the Cloudflare Workers stack; it doesn't have the same security properties.

https://github.com/cloudflare/workerd#warning-workerd-is-not...

(I know you know this, but frankly you should add a disclaimer when you comment about CF or Capnp. It's too convenient for you to leave out the cons.)

tomComb | 3 months ago

Workers is a V8 isolates runtime like Deno. V8 and Deno are both open source, and Deno is used in a variety of platforms, including Supabase and ValTown.

It is a terrific technology, and it is reasonably portable, but if open source and portability are your goals, I think you would be better off using it in something like Supabase, where the whole platform is open source and portable.

imron | 3 months ago

In code I've worked on, cold starts on AWS Lambda for a Rust binary that handled nontrivial requests were around 30ms.

At that point it doesn’t really matter if it’s cold start or not.

wmf | 3 months ago

Workerd is already open source, so that's a good start.