top | item 44769430

om8 | 7 months ago

Sure, but integrated graphics usually lacks dedicated VRAM for LLM inference; it shares system RAM instead.

adastra22 | 7 months ago

Which means that inference would run at approximately the same speed as the suggested CPU inference engine (just with the compute offloaded), since both are limited by the same system-memory bandwidth.
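A back-of-envelope sketch of the bandwidth argument above: token generation is memory-bandwidth bound, so a CPU and an iGPU reading the same system RAM hit the same throughput ceiling. The numbers here are illustrative assumptions, not measurements from the thread.

```python
def tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Each generated token streams (roughly) every model weight once,
    so throughput ~ memory bandwidth / model size."""
    return bandwidth_gb_s / model_size_gb

# Assumed figures: typical dual-channel DDR5 bandwidth, and a ~7B-parameter
# model quantized to 4 bits (~4 GB of weights).
ddr5_dual_channel = 80.0  # GB/s (assumption)
model_q4_7b = 4.0         # GB (assumption)

# CPU and integrated GPU share this same memory bus, so the estimate
# is identical for both -- offloading compute does not change the ceiling.
print(f"~{tokens_per_sec(ddr5_dual_channel, model_q4_7b):.0f} tokens/s")
# → ~20 tokens/s
```

Whether the compute runs on CPU cores or iGPU shaders, the weights still cross the same DDR bus once per token, which is why the two come out approximately equal.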