item 43111474

Mistral's Le Chat tops 1M downloads in just 14 days

69 points | oliverchan2024 | 1 year ago | techcrunch.com

22 comments


anon373839|1 year ago

Mistral's partnership with Cerebras for inference hardware has received less commentary than I expected. They're basically blowing the competition out of the water, with Le Chat getting 1,100+ tokens per second of per-user throughput.
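To get a rough sense of what that throughput means for a user, here's a back-of-the-envelope sketch. Only the 1,100 tokens/second figure comes from the comment above; the reply lengths are assumptions for illustration:

```python
# Rough per-reply latency at the claimed per-user throughput.
# 1,100 tok/s is the figure quoted above; reply lengths are hypothetical.
throughput = 1100  # tokens per second

for reply_tokens in (100, 500, 2000):
    seconds = reply_tokens / throughput
    print(f"{reply_tokens:>5}-token reply streams in ~{seconds:.2f} s")
```

At that rate even a long, 2,000-token answer finishes streaming in under two seconds, which is why the per-user number stands out against typical GPU-served chat latencies.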

Mekoloto|1 year ago

That's just crazy.

I'm curious when someone will run the right experiment, where an LLM on Cerebras reasons so well, at such scale and speed, that it produces something genuinely novel.

conradfr|1 year ago

It should be noted that customers of the French ISP Free get a free one-year subscription to Le Chat Pro (Free CEO Xavier Niel is an investor in Mistral).

That probably helped downloads.

jiehong|1 year ago

The article's content adds little beyond its title.

I feel I wasted my time reading it this time. Just my opinion.

kolinko|1 year ago

[deleted]

mrtksn|1 year ago

The Le Chat web UI, after generating some code and text, slowed down to unusable levels for me (the UI itself; probably some JS that walks the entire DOM on every update). That's why I downloaded the app.

Generally, I feel like all the AI models are about the same at this point. Grok on Twitter can access real-time information about current events, but the rest seem interchangeable.

I pay for ChatGPT for the higher usage limits, then use the rest for different tasks in order to keep their histories separated (not because one is better than another in the smartness department).

ArtTimeInvestor|1 year ago

Why is it not on the LLM leaderboard?

https://lmarena.ai/?leaderboard

Do they not take part, or is the list not complete?

Reubend|1 year ago

They're on there, but under the model name rather than the service name. For instance, "Mistral-Large-2407" has an Elo of 1252 at the time of writing.

ggm|1 year ago

I have found that testing coding prompts in Mistral and Claude lets me pick between them; they differ in some details of how they implement my goals (Python 3, numpy, matplotlib, JSON, requests-sourced data, CSV handling, linear regression).

They are similar in speed. I am probably travelling a well-worn road, so I'm hitting some equivalent of the LRU cache.
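As an illustration of the kind of prompt being compared above, here's a minimal sketch of one such task: fitting a least-squares line to CSV data with numpy. The data and column names are made up for the example:

```python
# Fit a linear regression to CSV data -- the sort of coding prompt
# the comment above describes testing across models.
# The dataset below is hypothetical.
import io

import numpy as np

csv_data = io.StringIO("x,y\n1,2.1\n2,4.0\n3,6.2\n4,7.9\n")
data = np.genfromtxt(csv_data, delimiter=",", names=True)

# polyfit with degree 1 returns the slope and intercept
# of the least-squares line.
slope, intercept = np.polyfit(data["x"], data["y"], 1)
print(f"y = {slope:.2f}*x + {intercept:.2f}")  # prints "y = 1.96*x + 0.15"
```

Models tend to differ on exactly these details: whether they reach for `np.polyfit`, `np.linalg.lstsq`, or scikit-learn, and how they handle the CSV parsing.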

darthrupert|1 year ago

Just switched my paid plan over from ChatGPT to Mistral for the warm fuzzy feelings. C'est génial! ("It's great!")

drpossum|1 year ago

I stopped doing business with Mistral when I got an API subscription and then watched one of their devs break their OAuth and spend several hours trying to fix it live, over something they clearly hadn't bothered to test in a non-prod environment.

mgnn|1 year ago

It's funny: Le Chat is gaslighting me in French. It claims Mistral develops ChatGPT when answering in French, but says OpenAI when answering in English.

For your amusement too: https://imgur.com/EgmQ0Ph

Marlinski|1 year ago

Mistral is great. I love their image generation and the speed at which it replies. They don't get as much hype as the other contenders, but it feels like they're quietly overtaking everyone.

112233|1 year ago

Yes, Flux Ultra is great too; too bad they don't allow access to the "raw" mode.

Here I am trying (and finally succeeding) to persuade Le Chat to generate an image using a filename as the prompt...

https://chat.mistral.ai/chat/9940f6bf-b2e5-4db2-bb64-adcbd9f...

I mean... "pretty please" as a debugging technique. I'm not exactly looking forward to my future conversations with my tea kettle and door knob.