rjhacks's comments

rjhacks | 1 year ago | on: Ask HN: Founders who offer free/OS and paid SaaS, how do you manage your code?

Thanks for sharing — such an interesting approach! At my startup we're currently discussing licenses, so this is very inspiring. I have a follow-up question!

The reasons for OSS you list include "Bus-Factor", "Longevity", and "Continuity". I'd summarize all of those as "even if they can't do business with [company] anymore, users can continue on" - our customers also say that's very important to them.

... But what if "can continue on" means "need some of those proprietary features"? And you're not there to sell to them anymore? Or you've been acquired by private equity, started charging 10^6x, and users want out? Users aren't allowed to clone the repo, remove your proprietary code, and replace it with their own implementation, because:

> you may not remove or obscure any functionality in the software that is protected by the license key.

Is this a thing your customers are concerned about? What do you expect them to do in such a scenario?

rjhacks | 5 years ago | on: CoScreen: Screen Sharing for Engineers

For the CoScreen team, I'm curious how the existing competition (https://tuple.app/, https://screen.so/) factors into your decision to build/position this - do you believe you have a different vision than they do? Features they can't match? A better team that will out-execute? Out-market?

Startup founders are often told not to launch a product that has direct competitors already, so I'm curious to hear your take!

rjhacks | 5 years ago | on: New Compute Engine A2 VMs–First Nvidia Ampere A100 GPUs in the Cloud

I understand it's exciting to see introductions of new machine types and new GPUs, but for it to mean anything, Google should first get its house in order on the GPUs it already offers. Getting an n1 instance with a Tesla T4 GPU in any datacenter I've tried has a <50% success rate on any given day ("resource unavailable" more often than not; they just don't seem to have enough of them), which is _hugely_ damaging to our ability to rely on the cloud for our workload. Worse, there's no way for me to work around it: I'd be willing to switch zones, or machine type, or GPU type, but there is no dashboard or support guidance that'll tell me whether any such configuration will be reliably available.
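Absent any availability dashboard, the only option I can see is brute-force probing of configurations until one provisions. A minimal sketch of that idea (the candidate list and `try_provision` callback are hypothetical stand-ins for a real provisioning call such as `gcloud compute instances create`):

```python
# Hypothetical sketch: probe (zone, machine type, GPU) combinations until one
# actually provisions. `try_provision` is a placeholder callback standing in
# for a real API call that returns False on "resource unavailable".

def find_available_config(candidates, try_provision):
    """Return the first (zone, machine_type, gpu) combo that provisions,
    or None if every attempt fails."""
    for zone, machine_type, gpu in candidates:
        if try_provision(zone, machine_type, gpu):
            return (zone, machine_type, gpu)
    return None

# Example candidate list (zone and accelerator names as used by GCE):
candidates = [
    ("us-central1-a", "n1-standard-8", "nvidia-tesla-t4"),
    ("us-central1-b", "n1-standard-8", "nvidia-tesla-t4"),
    ("europe-west4-a", "n1-standard-8", "nvidia-tesla-v100"),
]
```

Of course, probing like this burns quota-check round-trips and still gives no guarantee the capacity will be there tomorrow — which is exactly the problem.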

Because of that, seeing this A100 announcement is just a bummer, as I fear it'll be just another "resource unavailable" GPU...

rjhacks | 7 years ago | on: Cloud Run beta pricing

How much CPU is allocated to a Cloud Run (not-on-GKE) instance while a request is active? I see that memory is configurable (up to 2G), but no word about CPU...

I have a compute-heavy and bursty workload that I'd _love_ to put on Cloud Run, but it's important to know a ballpark for the CPU I'll get to spend on my requests.

Second question: any plans to more officially support "background" workloads that consume from e.g. Pub/Sub and might be able to use cheaper preemptible compute? I guess I'm probably already able to point a Pub/Sub push subscription at a Cloud Run endpoint, but having the option of cheaper (autoscale-to-zero) compute for my not-latency-sensitive work would be awesome.
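For the push-subscription route: Pub/Sub POSTs a JSON envelope whose message payload is base64-encoded, so the Cloud Run endpoint just needs to unwrap it. A sketch of the unwrapping (the function name is mine; only the envelope shape follows Pub/Sub's documented push format):

```python
# Sketch: decode the body of a Pub/Sub push request. The push subscription
# delivers {"message": {"data": "<base64>", ...}, "subscription": "..."};
# the original payload is the base64-decoded "data" field.
import base64
import json

def decode_push_envelope(body: bytes) -> str:
    """Extract the original message payload from a Pub/Sub push request body."""
    envelope = json.loads(body)
    data = envelope["message"].get("data", "")
    return base64.b64decode(data).decode("utf-8")
```

The endpoint then acks by returning a 2xx status, and Pub/Sub retries on anything else — which is what makes the pattern workable even without first-class background support.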

rjhacks | 8 years ago | on: Launch HN: EnvKey (YC W18) – Smart Configuration and Secrets Management

I think the use case where the per-config-request pricing model could break is serverless (Lambda, Cloud Functions, etc.). Secret management there is super important, and your solution looks great, except that what makes these platforms great for low-cost deployments is that they "scale to zero" (instances shut down when there's no traffic), so each instance may live only briefly. Since every instance start is a config request, that could in some cases become problematic.

That said, at your lowest tier you could afford ~20 instance boots (i.e., config requests) per hour. That's probably plenty for most cases, although you may find users for whom it isn't enough.
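One mitigation on the user's side would be caching the fetched config at module scope, so only true cold starts (or a TTL expiry) pay for a config request while warm invocations reuse it. A hypothetical sketch — `fetch_config` is a stand-in callback, not EnvKey's actual API:

```python
# Hypothetical sketch: module-level config cache for a serverless function.
# In Lambda/Cloud Functions, module globals survive across warm invocations
# of the same instance, so the fetch only happens on cold start or TTL expiry.
import time

_cache = {"config": None, "fetched_at": 0.0}
CACHE_TTL_SECONDS = 300

def get_config(fetch_config, now=time.time):
    """Return cached config, refetching only on cold start or TTL expiry."""
    if _cache["config"] is None or now() - _cache["fetched_at"] > CACHE_TTL_SECONDS:
        _cache["config"] = fetch_config()
        _cache["fetched_at"] = now()
    return _cache["config"]
```

This only softens the problem, though: with scale-to-zero, every new instance is still a cold start, so bursty traffic still fans out into many config requests.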
