(no title)
detente18 | 2 years ago
There are a lot of good model deployment platforms that would make it easy to call your model behind a hosted endpoint.
If you do want to self-host, there are some great libraries like https://github.com/lm-sys/FastChat and https://github.com/ggerganov/llama.cpp that might be helpful.
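For example, once you have something like FastChat's OpenAI-compatible API server running, calling the model is just an HTTP request. A minimal sketch, assuming a local server on port 8000 serving a model named "vicuna-7b-v1.5" (the host, port, and model name are assumptions; adjust to however you launched it):

```python
import requests

# Assumes an OpenAI-compatible endpoint (e.g. FastChat's openai_api_server)
# on localhost:8000 -- the URL and model name below are placeholders.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "vicuna-7b-v1.5",
        "messages": [{"role": "user", "content": "Hello!"}],
        "temperature": 0.7,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```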
If none of these really solves your issue, feel free to email me and I'm happy to help you figure something out: krrish@berri.ai