I am looking to do simple things like image classification/text classification using APIs, without running the LLMs on my local machine. What are some APIs that provide a uniform interface for accessing different LLMs?
Going through middle layers has pros and cons. You'll likely find better docs and a smoother experience by going through the official APIs directly. Maybe start with the official OpenAI and/or Claude libraries; both provide pretty good docs.
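To make that concrete, here is a minimal sketch of zero-shot text classification with the official OpenAI Python client. The model name, label set, and prompt are illustrative assumptions, and it expects `OPENAI_API_KEY` in the environment:

```python
# Sketch: zero-shot text classification via the official OpenAI client.
# LABELS, the prompt wording, and the model name are examples, not recommendations.
LABELS = ["positive", "negative", "neutral"]

def build_prompt(text: str) -> str:
    """Constrain the model to answer with exactly one label."""
    return (
        "Classify the sentiment of the following text as one of: "
        + ", ".join(LABELS)
        + ". Reply with the label only.\n\nText: " + text
    )

def classify(text: str) -> str:
    # Deferred import so the sketch loads even without the openai package installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": build_prompt(text)}],
    )
    return resp.choices[0].message.content.strip().lower()
```

Image classification works the same way with a vision-capable model, passing the image as part of the message content.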
As far as which API to use: please just coalesce around the OpenAI API for your client software. You can start up an OpenAI-compatible endpoint with vLLM, for example. Just stick with that. You can use LiteLLM as a proxy to convert your client-side requests into whatever server-side format is expected, e.g. for Claude.
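The point of standardizing on the OpenAI wire format is that only the base URL changes between providers. A stdlib-only sketch of that request shape (the base URL, API key, and model name below are placeholders for whatever you run server-side):

```python
# Sketch: one OpenAI-style chat request that works against any OpenAI-compatible
# server (OpenAI itself, a local vLLM endpoint, a LiteLLM proxy, ...).
import json
import urllib.request

def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> dict:
    """Build the URL, headers, and JSON body for a /chat/completions call."""
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

def send(req: dict) -> dict:
    """POST the request; identical client code regardless of the backend."""
    r = urllib.request.Request(
        req["url"],
        data=json.dumps(req["body"]).encode(),
        headers=req["headers"],
        method="POST",
    )
    with urllib.request.urlopen(r) as resp:
        return json.loads(resp.read())

# e.g. send(chat_request("http://localhost:8000/v1", "EMPTY", "my-model", "hi"))
```

Swapping `base_url` from `https://api.openai.com/v1` to a local vLLM or LiteLLM proxy address is the only change the client needs.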
emmanueloga_|1 year ago
If you really must use a wrapper, a strategy I use is to look into open-source apps I like and see what they are using. Aider, for instance, seems to be using this "litellm" thing... If I were to need cross-API support for AI, I would probably look into that [1]. llm uses openai directly [2]. Etc., etc.
--
1: https://github.com/Aider-AI/aider/blob/e76704e261647348fd7c1...
2: https://github.com/simonw/llm/blob/d654c9521235a737e59a4f1d7...
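For reference, the cross-provider wrapper mentioned above (LiteLLM) exposes a single `completion()` call and routes on the model string. A minimal sketch, with illustrative model names:

```python
# Sketch: the same call routed to different providers via LiteLLM's model string.
def classify_with(model: str, text: str) -> str:
    # Deferred import so the sketch loads even without litellm installed.
    from litellm import completion
    resp = completion(
        model=model,  # e.g. "gpt-4o-mini" or "claude-3-haiku-20240307" (examples)
        messages=[{"role": "user",
                   "content": f"One-word sentiment label for: {text}"}],
    )
    return resp.choices[0].message.content
```

The provider's usual API key environment variable (e.g. `OPENAI_API_KEY` or `ANTHROPIC_API_KEY`) still needs to be set for whichever backend the model string selects.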
logankeenan|1 year ago
I also rented a GPU VM from them and ran Hugging Face models on it. That did require a lot more coding and learning.
https://docs.runpod.io/serverless/workers/vllm/get-started