skottenborg | 2 years ago

Cool! Can WebLLM handle inference for models of any meaningful size?

Can I ask what model is used?

hmdai | 2 years ago

Thanks! It's using Llama 2 7B. It supports bigger models, but those take longer to download and to run inference on (if they run at all, depending on the device).
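
For anyone curious, here's a minimal sketch of how a page can pick and load a model with WebLLM. It assumes a recent version of the @mlc-ai/web-llm package; the exact model ID string is an assumption, so check the library's prebuilt model list for the current names.

  import { CreateMLCEngine, InitProgressReport } from "@mlc-ai/web-llm";

  async function run() {
    // Report download progress; the weights are several GB even for 7B.
    const initProgressCallback = (report: InitProgressReport) => {
      console.log(report.text);
    };

    // A quantized Llama 2 7B chat model keeps download and inference manageable.
    // Bigger models follow the same pattern but may not fit on weaker devices.
    const engine = await CreateMLCEngine(
      "Llama-2-7b-chat-hf-q4f16_1-MLC", // assumed model ID; see the model list
      { initProgressCallback }
    );

    // OpenAI-style chat completion, running fully in the browser via WebGPU.
    const reply = await engine.chat.completions.create({
      messages: [{ role: "user", content: "Hello!" }],
    });
    console.log(reply.choices[0].message.content);
  }

  run();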