top | item 41938930

jackb4040 | 1 year ago

> You won't be able to run Skyvern unless you enable at least one provider.

Any plans on bundling a local LLM / supporting local LLMs?

suchintan | 1 year ago

We have an open issue for this right now -- we would LOVE some contributions here. The biggest problem until Llama 3.2 came out was that most (good) open-source LLMs were text-only, and Skyvern needs vision to perform well.

This isn't true anymore -- we just need to build and launch support for it.

socksy | 1 year ago

In theory, to support Ollama, all you should need to do is change the URL that would otherwise point to OpenAI, and select the model. The only gotcha is that the llama3.2 builds for Ollama are currently text-only; however, they've just added support for arbitrary Hugging Face models, so you're not limited to the officially supported models.
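(A minimal sketch of the "just change the URL" idea above: Ollama serves an OpenAI-compatible API under `/v1` on its default port, so the same chat-completions request body works against either backend, only the base URL and model name change. The model name and prompt here are illustrative, not anything Skyvern actually uses.)

```python
# Sketch: the same OpenAI-style chat request can target either
# api.openai.com or a local Ollama server; only the base URL differs.
import json

# Assumption: Ollama's default OpenAI-compatible endpoint.
OLLAMA_BASE_URL = "http://localhost:11434/v1"
OPENAI_BASE_URL = "https://api.openai.com/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body POSTed to {base_url}/chat/completions.

    The body shape is identical for both providers; swapping backends
    is just a matter of which base URL you send it to.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


# Hypothetical local model name; with Ollama you'd pick one you've pulled.
payload = build_chat_request("llama3.2", "Describe this web page.")
print(json.dumps(payload))
```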