I am worried about AWS imposing their own political rules on the models. For example, they may impose censorship, or safety requirements. It is hard for me to trust them as a central platform in this ecosystem
You can check out this technical deep dive on serverless, pay-as-you-go GPU offerings.
It includes benchmarks on cold starts, performance consistency, scalability, and cost-effectiveness for models like Llama 2 7B and Stable Diffusion across different providers: https://www.inferless.com/learn/the-state-of-serverless-gpus... It could save you months of time. Do give it a read.
I'm confused, what's expensive about it? Isn't it a serverless, pay-per-token model?
Do you mean specifically the Bedrock Knowledge Base / RAG? That uses serverless OpenSearch, which costs at minimum ~$200/month because it doesn't scale to zero.
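To make that cost split concrete, here's a rough sketch. Both prices are assumed placeholders, not real AWS figures: the point is only that pure pay-per-token scales to zero while the Knowledge Base carries the OpenSearch floor regardless of traffic.

```python
# Rough Bedrock cost sketch. Both prices below are ASSUMED placeholder
# figures for illustration -- check current AWS pricing.
OPENSEARCH_FLOOR_PER_MONTH = 200.0  # serverless OpenSearch doesn't scale to zero
PRICE_PER_1K_TOKENS = 0.0008        # assumed on-demand token price

def monthly_cost(tokens_per_month: int, use_knowledge_base: bool) -> float:
    """Fixed OpenSearch floor (if RAG is enabled) plus pay-per-token usage."""
    token_cost = tokens_per_month / 1000 * PRICE_PER_1K_TOKENS
    floor = OPENSEARCH_FLOOR_PER_MONTH if use_knowledge_base else 0.0
    return floor + token_cost

print(monthly_cost(1_000_000, use_knowledge_base=False))  # token-only: under a dollar
print(monthly_cost(1_000_000, use_knowledge_base=True))   # with RAG: the floor dominates
```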
I have not read too deeply into this, but do any of these serverless environments offer GPUs? I'm sure there are ... reasons, but the lack of GPU support in Lambda and Fargate remains a major pain point for AWS users.
It's been keeping me wrangling EC2 instances for ML teams but I do wonder how much longer that will last.
The major clouds don't support serverless GPU because the architecture is fundamentally different from running CPU workloads. For Lambda specifically, there's no way of running multiple customer workloads on a single GPU with Firecracker.
A more general issue is that the workloads that tend to run on GPU are much bigger than a standard Lambda-sized workload (think a 20Gi image with a smorgasbord of ML libraries). I've spent time working around this problem and wrote a bit about it here: https://www.beam.cloud/blog/serverless-platform-guide
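For what it's worth, the standard mitigation on these platforms is to hoist the expensive weight load out of the request path, so only cold starts pay for it and warm invocations reuse the cached model. A generic sketch of that pattern (not any one platform's actual implementation; `load_weights` is a placeholder):

```python
# Warm-container model caching: load heavy weights once per container,
# outside the request handler, so warm invocations skip the load.
_MODEL = None

def load_weights():
    # Placeholder for an expensive load, e.g. reading multi-GB weights
    # from object storage into GPU memory.
    return {"loaded": True}

def get_model():
    global _MODEL
    if _MODEL is None:          # only true on a cold start
        _MODEL = load_weights()
    return _MODEL

def handler(event):
    model = get_model()         # cheap on every warm invocation
    return {"ready": model["loaded"]}
```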
The big guys are lagging a bit, but there are many smaller parties offering serverless GPU.
I've been a quite satisfied customer of Runpod's serverless GPU offering, running a side project that uses computer vision to detect toxic clouds in webcam feeds of an industrial site.
If you want generative AI, try Replicate, as they offer a more specialized product.
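A Runpod serverless worker for a job like that is essentially one handler function. A minimal sketch following the SDK's documented job-dict pattern (the payload shape and detection logic here are placeholders, not a real detector):

```python
# Sketch of a Runpod-style serverless handler. The job dict carries the
# request payload under "input"; the vision logic is a placeholder.
def handler(job):
    payload = job["input"]
    frame_url = payload.get("frame_url")
    # A real worker would fetch the frame and run a vision model here.
    score = 0.0  # placeholder confidence
    return {"frame_url": frame_url, "toxic_cloud": score > 0.5}

# With the runpod SDK installed, you would register it like so:
#   import runpod
#   runpod.serverless.start({"handler": handler})

print(handler({"input": {"frame_url": "https://example.com/cam.jpg"}}))
```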
They use GPUs under the hood for inference/fine-tuning and charge by token. Fireworks will even let you deploy a LoRA serverlessly at the same pricing as the base model.
But I'm not aware of any “lambda”-like serverless for any old CUDA workload. Given model loading times, it wouldn’t really make sense. Something like Cloud Run or Knative for GPUs would be cool.
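To illustrate the token-billed model mentioned above: Fireworks exposes an OpenAI-compatible HTTP endpoint, so a serverless LoRA is addressed by model name just like a base model. Only the request shape is shown here; the account and model names are placeholders, and no network call is made.

```python
# Builds (but does not send) an OpenAI-compatible chat request against
# Fireworks' inference endpoint. Account/model names are placeholders.
import json
import urllib.request

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    body = json.dumps({
        "model": model,  # a serverless LoRA is billed like its base model
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://api.fireworks.ai/inference/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("accounts/acme/models/my-lora", "hello", "KEY")
print(req.full_url)
```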
ac360|1 year ago
• Devs forever want choice.
• Open-source LLMs are getting better
• Anthropic ships fantastic models
• Doesn't expose your app’s data to multiple companies
• Consolidated security, billing, config in AWS
• Power of AWS ecosystem
blackeyeblitzar|1 year ago
agcat|1 year ago
P.S.: I am from Inferless.
rmbyrro|1 year ago
dheerkt|1 year ago
scosman|1 year ago
ethagnawl|1 year ago
Mernit|1 year ago
ZeroCool2u|1 year ago
https://cloud.google.com/run/docs/configuring/services/gpu
ac360|1 year ago
isoprophlex|1 year ago
scosman|1 year ago
fitzgera1d|1 year ago
A serverless boilerplate for AI apps on trusted AWS infra.
• Full-Stack w/ Chat UI + Streaming
• Multiple LLM Models + Data Privacy
• 100% Serverless
• API + Event Architecture
• Auth, Multi-Env, GitHub Actions & more!
Github: https://github.com/serverless/aws-ai-stack
Demo: https://awsaistack.com
brap|1 year ago
eahefnawy|1 year ago
justanotheratom|1 year ago