conkeisterdoor|1 year ago
This looks great! I would use this if you had a dispatcher for a custom/local OpenAI-compatible API, e.g. a llama.cpp server. If I can make some time I'll take a stab at writing one and submit a PR :)

RandomBK|1 year ago
Set the `local_uri` setting in the configuration (`llm2sh --setup`), and either pass `-m local` on the CLI or set `"default_model": "local"` in the config. PRs are always welcome.
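
For reference, the setup described above might look something like this in the config file (a sketch only: the exact config location and the server URI are assumptions, not from the thread — `http://localhost:8080/v1` is a typical llama.cpp server address, and only `local_uri` and `default_model` are keys named by the comment above):

```json
{
  "local_uri": "http://localhost:8080/v1",
  "default_model": "local"
}
```

With `default_model` set this way, plain `llm2sh "..."` would use the local endpoint; otherwise `-m local` selects it per invocation.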