
1 point | gopisuvanam | 1 year ago


# LLM in the browser using WebLLM

This is an example of how an LLM can be run safely in the browser, with no servers or external services required, which shows that AI inference is viable in edge computing. This particular notebook requires a GPU with at least 8 GB of VRAM. It is a convenient way to use LLMs without installing anything, and it works across most modern browsers.
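A minimal sketch of what the notebook does with the WebLLM library is shown below. The model ID is an example (WebLLM ships a list of prebuilt models; any other prebuilt ID would work the same way), and the code must run in a browser with WebGPU support, so it cannot run under Node.js as-is.

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Download (or load from cache) the model weights and compile the
  // WebGPU kernels. The first run fetches everything; later runs hit
  // the browser cache, which is why reloads are much faster.
  const engine = await CreateMLCEngine(
    "Llama-3.1-8B-Instruct-q4f32_1-MLC", // example prebuilt model ID
    {
      initProgressCallback: (p) => console.log(p.text),
    }
  );

  // Query the model with an OpenAI-style chat completion call,
  // entirely in the browser -- no server round trip.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "What is edge computing?" }],
  });

  console.log(reply.choices[0].message.content);
}

main();
```

Because the API mirrors the OpenAI chat-completions shape, code written against a hosted endpoint can often be pointed at the in-browser engine with minimal changes.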

Credits: [Web LLM Github](https://github.com/mlc-ai/web-llm)

Note: The notebook takes 3-4 minutes to load the LLM the first time. On subsequent loads it takes only about 25 seconds, as the model is cached. Simple queries take 3-5 seconds.