top | item 47076040

KerrAvon | 10 days ago

Is there a reliable way to run MLX models? On my M1 Max, LM Studio sometimes outputs garbage through its API server even when the LM Studio chat UI, using the same model, is perfectly fine. llama.cpp-based tools, by contrast, generally just work.
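One way to narrow down a chat-UI-vs-API discrepancy like this is to hit the server with a minimal, explicit request, so sampling parameters are pinned rather than inherited from defaults. A sketch, assuming LM Studio's documented OpenAI-compatible endpoint on its default port 1234; the model name is a hypothetical placeholder:

```python
import json
import urllib.request

# Assumption: LM Studio's local server exposes an OpenAI-compatible
# /v1/chat/completions endpoint on localhost:1234 (its default port).
BASE_URL = "http://localhost:1234/v1/chat/completions"


def build_request(model: str, prompt: str, temperature: float = 0.7) -> bytes:
    """Build the JSON body for an OpenAI-style chat completion call,
    with sampling parameters stated explicitly."""
    payload = {
        "model": model,  # hypothetical model identifier
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "stream": False,
    }
    return json.dumps(payload).encode("utf-8")


body = build_request("some-mlx-model", "Hello")

# Sending the request requires a running server, so it is left as a
# commented-out step here:
# req = urllib.request.Request(
#     BASE_URL, data=body, headers={"Content-Type": "application/json"}
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])

print(json.loads(body)["messages"][0]["content"])
```

If the same model produces clean output in the chat UI but garbage via a request like this, comparing the UI's effective temperature, template, and stop tokens against the API payload is a reasonable next step.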


No comments yet.