item 42795983

hazelnut | 1 year ago

How does it compare to WebLLM (https://github.com/mlc-ai/web-llm)?

sauravpanda | 1 year ago

We use WebLLM under the hood. For text-to-text generation, the model compression is great and RAM usage is lower. But we are still running experiments: one thing we noticed is that some models quantized with MLC sometimes start producing gibberish, so we'll report back on which is better after more testing.