item 40992330

conkeisterdoor | 1 year ago

This looks great! I would use this if it had a dispatcher for a custom/local OpenAI-compatible API, e.g. a llama.cpp server. If I can make some time I'll take a stab at writing one and submit a PR :)

RandomBK | 1 year ago

Already exists :)

Set the `local_uri` option in the configuration (created via `llm2sh --setup`), and either pass `-m local` on the CLI or set `"default_model": "local"` in the config.
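For illustration, a minimal sketch of what those two settings might look like together in the config file, based only on the option names mentioned above (the config file's location, overall structure, and the example endpoint URL are assumptions, not taken from the project docs):

```json
{
  "local_uri": "http://localhost:8080/v1",
  "default_model": "local"
}
```

With something like this in place, requests tagged for the `local` model would be sent to whatever OpenAI-compatible server is listening at `local_uri`, such as a locally running llama.cpp server.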

PRs are always welcome.