futurisold | 8 months ago

Yes, that's correct. If you're using, say, OpenAI, then every semantic op is an API call to OpenAI. If you're hosting a local LLM via llama.cpp, then obviously there's no inference cost beyond that of hosting the model.
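
For concreteness, here's a minimal sketch in Python of what that routing can look like. This is not the library's actual API; the helper name, model name, and port are placeholders. It relies on the fact that llama.cpp's llama-server exposes an OpenAI-compatible endpoint, so the same client code talks to either backend and only the cost profile changes.

    from openai import OpenAI

    # Hosted backend: every semantic op becomes a billed API call.
    hosted = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Local backend: llama.cpp's server speaks the same protocol, e.g.
    #   llama-server -m model.gguf --port 8080
    # so there's no per-call inference cost beyond hosting the model.
    local = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

    def semantic_op(client: OpenAI, instruction: str, text: str) -> str:
        """One semantic op = one chat-completion request."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; llama-server serves its loaded model
            messages=[
                {"role": "system", "content": instruction},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content

    # Same op against either backend; only the cost differs.
    print(semantic_op(hosted, "Classify sentiment as positive or negative.", "I love it."))
    print(semantic_op(local, "Classify sentiment as positive or negative.", "I love it."))

Note the model field: the hosted backend bills per call for it, while llama.cpp serves whatever model it was started with, so the only local cost is running the server itself.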
