item 24894275

acnops | 5 years ago

You can look at the GPU costs for GCP here: https://cloud.google.com/compute/gpus-pricing. On top of that you have the instance cost, plus other costs such as storage and data transfer.

It's a pretty costly operation: the marginal cost per request is very high compared to most SaaS products. We'll probably iterate on the pricing model.
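To make the marginal-cost point concrete, here is a minimal sketch of a per-inference cost estimate. All the dollar figures and throughput numbers are placeholder assumptions, not actual GCP rates (check the pricing page linked above for real numbers):

```python
# Hypothetical per-inference cost estimator. All rates below are
# placeholder assumptions, not real GCP prices.
def cost_per_inference(gpu_hourly_usd, instance_hourly_usd,
                       inferences_per_hour, other_hourly_usd=0.0):
    """Marginal cost of one inference given hourly rates and throughput."""
    total_hourly = gpu_hourly_usd + instance_hourly_usd + other_hourly_usd
    return total_hourly / inferences_per_hour

# e.g. an assumed $2.48/hr GPU + $0.50/hr instance serving 600 inferences/hour
print(round(cost_per_inference(2.48, 0.50, 600), 4))  # prints 0.005
```

Even at half a cent per request, a free tier or flat subscription gets expensive fast for heavy users, which is why usage-based pricing tends to come up for GPU-backed products.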

What do you think a good model would be?

ackbar03 | 5 years ago

I don't really have a good solution for it. I've toyed around with deep-learning-based SaaS projects, though, so I know cloud server costs are a major factor.

What sort of worked was routing the data so the inference ran on my own local server (i.e., my workstation), although the downside is you can't use that GPU for anything else. When I looked into it, three months of cloud server costs were enough to buy an equivalent GPU workstation. I also did whatever I could to optimize inference.
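The routing idea and the three-month payback estimate above can be sketched roughly as follows. The endpoint URLs and the dollar figures are illustrative placeholders, not anyone's actual setup:

```python
# Sketch of the approach described above: prefer routing inference to a
# local GPU workstation, falling back to a cloud GPU endpoint when the
# workstation is unavailable. URLs and costs are assumed placeholders.
LOCAL_GPU = "http://workstation.local:8500/predict"    # assumed address
CLOUD_GPU = "https://gpu.example-cloud.com/predict"    # assumed address

def pick_endpoint(local_available: bool) -> str:
    """Route to the already-paid-for workstation whenever possible."""
    return LOCAL_GPU if local_available else CLOUD_GPU

def payback_months(workstation_cost_usd: float, cloud_monthly_usd: float) -> float:
    """Months of cloud spend that would pay for the workstation outright."""
    return workstation_cost_usd / cloud_monthly_usd

# e.g. an assumed $3,000 workstation vs. $1,000/month of cloud GPU time
print(payback_months(3000, 1000))  # prints 3.0
```

With numbers in that ballpark, the workstation pays for itself in about three months, matching the estimate above; the trade-off is that the GPU is then tied up serving inference.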

I have no idea what kind of volume you're getting, but I imagine it'd be even worse for video-based GAN stuff. Maybe go for quality and target the super high end? You probably have a much better idea than me.

All the best, will be interested to know how it goes