(no title)
raphaelj | 7 months ago
While Mistral might not have the best LLM performance, their UX is IMO the best, or at least tied with OpenAI's:
- I have never had a UI bug, while these were common with Claude or OpenAI (e.g. a conversation disappearing, the LLM crashing mid-answer, long-context errors on Claude ...);
- They support most of the features I liked from OpenAI, such as libraries and projects;
- Their app is by far the fastest, thanks to their fast-reply feature;
- They allow you to disable web search.
mark_l_watson | 7 months ago
Enough! I just paid for a year of Gemini Pro. I use gemini-cli for free for small sessions and switch to my API key for longer sessions to avoid timeouts. Most importantly, for API use I mostly stick with Gemini 2.5 Flash, sometimes Pro, and Moonshot's Kimi K2. I also use local models on Ollama when they are sufficient (which is surprisingly often).
I simply decided that I no longer wanted the hobby of always trying everything. I did look at Mistral again a few weeks ago and it's a good option, but Google was the better fit for me.