top | item 43495622

betimsl | 11 months ago

Check out NVIDIA's latest releases. Paying for tokens is going to be history in about 6 months. You'll run the model on your laptop.

Maybe you're right about the confusion... but given the velocity, that will get fixed too.

All the knowledge in the field of programming is digitized; one could argue that a model that has digested all of that information in the right way is better than consulting separate sources.

Just a thought. I don't care all that much.

sestinj | 11 months ago

Absolutely +1 to the progress of local models! We hope Continue is, and continues to be, a great place to use them. There are tons of blocks on the Ollama page, for example, that can be used: https://hub.continue.dev/ollama
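For anyone curious what "running the model on your laptop" looks like in practice, here is a minimal sketch of calling a local Ollama server over its HTTP API. It assumes a server on Ollama's default port (11434) and a model name like "llama3" that you have already pulled; both are assumptions for illustration, not details from this thread.

```python
import json
from urllib.request import Request, urlopen

# Default endpoint for Ollama's non-streaming text generation API.
# Assumes `ollama serve` is running locally (an assumption, not from the thread).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body that Ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the completion text."""
    req = Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires a running server with the model pulled):
#   print(ask("llama3", "Explain tail recursion in one sentence."))
```

Editors like Continue wrap this same local endpoint behind their chat and autocomplete UI, which is why no API tokens are involved once the model weights are on your machine.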