myco_logic | 2 years ago

Depends on how beefy that laptop is...

I've been doing some local LLM stuff at work recently, and even with the amazing advances in quantization lately, doing that kind of thing on a ThinkPad is feasible, but still strongly inferior to just renting a VPS with a couple of 4090s or H100s for several hours.
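For anyone curious what "local LLM stuff" looks like in practice, here's a minimal sketch using llama-cpp-python with a quantized GGUF checkpoint; the model filename is just a placeholder, not a specific recommendation:

    # Minimal local inference with llama-cpp-python; the model file is
    # hypothetical -- any quantized GGUF checkpoint would do.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
        n_ctx=16384,      # context window in tokens
        n_gpu_layers=-1,  # offload as many layers as the GPU fits; 0 = CPU only
        verbose=False,
    )
    out = llm("Summarize: Slaughterhouse-Five is ...", max_tokens=64)
    print(out["choices"][0]["text"])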

The biggest thing with summarizing stuff is that most local models don't have very big context windows, so they have trouble with longer texts, even a short Vonnegut novel (I was just testing 'em on summarizing GitHub issues, and even with a 16k-token context window they still sometimes struggle if there are a lot of comments).
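The usual workaround for the context-window limit is chunked (map-reduce) summarization: summarize pieces that fit, then summarize the summaries. A rough sketch reusing the llama-cpp-python setup from above; the chunk size, prompts, and model path are all assumptions:

    # Map-reduce summarization sketch: split the text, summarize each chunk,
    # then summarize the concatenated partial summaries.
    from llama_cpp import Llama

    llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder
                n_ctx=16384, verbose=False)

    def summarize(text, max_tokens=256):
        prompt = f"Summarize the following text concisely:\n\n{text}\n\nSummary:"
        return llm(prompt, max_tokens=max_tokens)["choices"][0]["text"].strip()

    def chunked_summary(text, chunk_chars=24000):
        # ~24k chars is a rough stand-in for "fits in a 16k-token window
        # with room left over for the prompt and the completion".
        chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
        partials = [summarize(c) for c in chunks]
        return summarize("\n\n".join(partials))

The reduce step loses detail, which is probably why long GitHub issue threads still come out rough.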

There are probably smarter people than I who could get this working on a Raspberry Pi though... ;)
