top | item 44054806

lis | 9 months ago

Yes, I agree. I've just run the model locally, and it's making a good impression. I've tested it on some Ruby/RSpec gotchas, which it handled nicely.

I'll give it a try with aider to test the large context as well.

ericb | 9 months ago

In Ollama, how do you set up a larger context, and how do you figure out what settings to use? I've yet to find a good guide, and I'm not sure how to work out what those settings should be for each model.

There's context length, but how does that relate to input length and output length? Should I just make the numbers match, i.e. 32k is 32k? Any pointers?
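Not an authoritative answer, but here's my understanding: in Ollama, `num_ctx` is the total context window, and prompt plus generated output share it, so there aren't separate input/output sizes to match. You can set it per request in the API `options`, and optionally cap the output with `num_predict`. A minimal sketch of the request body (model name and numbers are just assumptions for illustration):

```python
import json

# num_ctx is the total context window; prompt and output tokens share it.
# num_predict caps how many tokens are generated out of that window.
payload = {
    "model": "qwen2.5-coder",      # hypothetical model name
    "prompt": "Explain RSpec let vs let!",
    "options": {
        "num_ctx": 32768,          # "32k is 32k": one number for the whole window
        "num_predict": 2048,       # optional cap on generated tokens
    },
}

# This would be POSTed to http://localhost:11434/api/generate
print(json.dumps(payload, indent=2))
```

The same thing can be baked into a Modelfile with `PARAMETER num_ctx 32768` if you don't want to pass it on every call.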

zackify | 9 months ago

Ollama breaks for me: if I manually set the context higher, the next API call from clone resets it back.

And Ollama keeps unloading the model from memory every 4 minutes.

LM Studio with MLX on Mac is performing perfectly, and I can keep the model in RAM indefinitely.

Ollama's keep-alive is broken, since a new REST API call resets it afterwards. I'm surprised it's this glitchy with longer-running calls and custom context lengths.
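For what it's worth, this matches how `keep_alive` works as I understand it: it's a per-request setting, so any later call that omits it (or sends the default) overrides what an earlier call set, and the model gets unloaded again after the default five minutes. A sketch of pinning the model on each request, assuming the standard `/api/generate` body (model name is a placeholder):

```python
import json

# keep_alive is per-request in Ollama: a later call that omits it falls back
# to the server default (~5 minutes), which can undo an earlier setting.
payload = {
    "model": "qwen2.5-coder",  # hypothetical model name
    "prompt": "ping",
    "keep_alive": -1,          # -1 = keep the model loaded indefinitely
}
print(json.dumps(payload))
```

Alternatively, starting the server with `OLLAMA_KEEP_ALIVE=-1` changes the default itself, so clients that don't send the field can't reset it back.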