Yes, it's incredibly boring to wait for the AI agents in IDEs to finish their job. I get distracted and open YouTube. Once I gave a prompt so big and complex to Cline that it spent 2 straight hours writing code.
But after these 2 hours I spent 16 more tweaking and fixing all the stuff that wasn't working. I now realize I should have done things incrementally even when I have a pretty good idea of the final picture.
I've been increasingly using only the "thinking" models: o3 in ChatGPT, and Gemini / Claude in IDEs. They're slower, but usually get it right.
But at the same time I am open to the idea that speed can unlock new ways of using the tooling. It would still be awesome to basically just have a conversation with my IDE while I am manually testing the app. Or combine really fast models like this one with a "thinking background" one that would run for seconds/minutes and try to catch the bugs left behind.
So my personal belief is that diffusion models will enable higher degrees of accuracy. This is because, unlike an auto-regressive model, a diffusion model can adjust a whole block of tokens when it encounters some kind of disjunction.
Think of the old example where an auto-regressive model would output "There are 2 possibilities..." before it really enumerated them. Often the model has trouble overcoming that bias and will hallucinate a response to fit the preceding tokens.
Chain of thought and other approaches help overcome this and other issues by incentivizing validation, etc.
With diffusion, however, it is easier for the rest of the generated answer to change that set of tokens to match the actual number of possibilities enumerated.
This is why I think you'll see diffusion models be able to do some more advanced problem solving with a smaller number of "thinking" tokens.
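The intuition above can be sketched with a toy refinement pass, where the whole sequence is revisited at once so an early token can be revised to match what follows. To be clear, nothing here is a real diffusion model; it's just an illustration of non-causal revision:

```python
# Toy illustration (not a real diffusion model): every refinement pass sees
# the whole sequence, so an early token ("2") can be rewritten once the rest
# of the answer has taken shape.
def refine(tokens):
    """One refinement pass: make the stated count match the enumeration."""
    enumerated = sum(1 for t in tokens if t.endswith(")"))  # items like "1)", "2)"
    out = list(tokens)
    for i, t in enumerate(out):
        if t.isdigit():  # the stated count, e.g. "2"
            out[i] = str(enumerated)
    return out

draft = ["There", "are", "2", "possibilities:", "1)", "x", "2)", "y", "3)", "z"]
print(" ".join(refine(draft)))  # the stated count is revised from "2" to "3"
```

An auto-regressive model, having emitted "2", can only keep going; here the earlier token is free to change.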
Check out RooCode if you haven't. There's an orchestrator mode that can start with a big model to come up with a plan and break it down, then spin out small tasks to smaller models for scoped implementation.
Wouldn't it be possible to trade speed back for accuracy, e.g. by asking the model to look at a problem from different angles, let it criticize its own output, etc.?
I think speed and convenience are essential. I use ChatGPT desktop for coding. Not because it's the best but because it's fast and easy and doesn't interrupt my flow too much. I mostly stick to the 4o model. I only use the o3 model when I really have to, because at that point getting an answer is slooooow. 4o is more than good enough most of the time.
And more importantly it's a simple option+shift+1 away. I simply type something like "fix that" and it has all the context it needs to do its thing. Because it connects to my IDE and sees my open editor and the highlighted line of code that is bothering me. If I don't like the answer, I might escalate to o3 sometimes. Other models might be better but they don't have the same UX. Claude desktop is pretty terrible, for example. I'm sure the model is great. But if I have to spoon feed it everything it's going to annoy me.
What I'd love is for smaller, faster models to be used by default and for them to escalate to slower, more capable models only when needed. Using something like o3 by default makes no sense. I don't want to have to think about which model is optimal for what question. The problem of figuring out what model is best to use is a much simpler one than answering my questions. And automating that decision opens the doors to having a multitude of specialized models.
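That escalation idea might look something like the sketch below, where `fast_model`, `capable_model`, and the self-reported confidence field are all hypothetical stand-ins for real APIs:

```python
# Sketch of automatic escalation; the model functions are stubs standing in
# for real API calls, and "confidence" is an assumed signal a router could use.
def fast_model(prompt):
    # pretend the fast model returns an answer plus a self-reported confidence
    return {"answer": "quick answer to: " + prompt, "confidence": 0.4}

def capable_model(prompt):
    return {"answer": "careful answer to: " + prompt, "confidence": 0.95}

def route(prompt, threshold=0.7):
    """Try the fast model first; escalate only when confidence is low."""
    result = fast_model(prompt)
    if result["confidence"] >= threshold:
        return result["answer"], "fast"
    return capable_model(prompt)["answer"], "escalated"

answer, path = route("fix that")
print(path)  # "escalated" with these stub confidences
```

In practice the routing signal could be anything cheap to compute: logprobs, a verifier model, or whether the user hits "retry".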
> Not sure if I would tradeoff speed for accuracy.
Are you, though?
There are obvious examples of obtaining speed without losing accuracy, like using a faster processor with bigger caches, or more processors.
Or optimizing something without changing semantics, or the safety profile.
Slow can be unreliable; a 10 gigabit ethernet can be more reliable than a 110 baud acoustically-coupled modem in mean time between accidental bit flips.
Here, the technique is different, so it is apples to oranges.
Could you tune the LLM paradigm so that it gets the same speed, and how accurate would it be?
These models do not reason. They do not calculate. They perform no objectivity whatsoever.
Instead, these models show us what is most statistically familiar. The result is usually objectively sound, or at least close enough that we can rewrite it as something that is.
I don't use the best available models for prototyping because they can be expensive or more time consuming. This innovation makes prototyping faster, and practicing prompts on slightly lower-accuracy models can provide more realistic expectations.
The excitement for me is the implications for lower energy models. Tech like this could thoroughly break the Nvidia stranglehold at least for some segments
If the benchmarks aren't lying, Mercury Coder Small is as smart as 4o-mini and costs the same, but is an order of magnitude faster when outputting (unclear if pre-output delay is notably different). Pretty cool. However, I'm under the impression that 4o-mini was superseded by 4.1-mini and 4.1-nano for all use cases (correct me if I'm wrong). Unfortunately they didn't publish comparisons with the 4.1 line, which feels like an attempt to manipulate the optics. Or am I misreading this?
Btw, why call it "coder"? 4o-mini level of intelligence is for extracting structured data and basic summaries, definitely not for coding.
It appears to be purpose-trained for coding. They also have a generalist model, but that's not the one being compared.
I agree, the comparison is dated, cherry-picked and doesn't reference the thinking models people do use for coding.
But it's also a bit of a new architecture in early stages of development/testing. Comparing against other small non-thinking models is a good step. It demonstrates the strategy is viable and worth exploring. Time will tell its value. Perhaps a guiding LLM could lean on diffusion to speed up generation. Perhaps we'll see more mixed-architecture models. Perhaps diffusion beats out current LLMs, but from my armchair this seems unlikely.
Saw another one on Twitter in the past few days that looked like a better contender to Mercury; doesn't look like it got posted to LocalLLaMA, and I can't find it now. Very exciting stuff.
Nice writeup! This is the second post I've seen in the genre of "I've had a secret, personal benchmark for LLMs where the 'solution' requires questioning the premises, and o4-mini-high beats it." The first post I saw was about a chessboard and the prompt "mate in one:" https://x.com/KelseyTuoc/status/1912945346126417940
(Edited to remove direct spoiler for the MU-puzzle, in case people want to try it.)
It's nice to see a team doing something different.
The cost[1] is US$1.00 per million output tokens and US$0.25 per million input tokens. By comparison, Gemini 2.5 Flash Preview charges US$0.15 per million tokens for text input and $0.60 (non-thinking) output[2].
Hmmm... at those prices they need to focus on markets where speed is especially important, e.g. high-frequency trading, transcription/translation services, and hardware/IoT alerting!
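For a sense of scale, the quoted prices work out like this for a typical request (prices are the ones cited above; the 2k-in/1k-out request size is just an assumption):

```python
# Quoted prices, USD per million tokens:
# Mercury: $0.25 in / $1.00 out; Gemini 2.5 Flash (non-thinking): $0.15 in / $0.60 out.
def cost(tokens_in, tokens_out, price_in, price_out):
    return (tokens_in * price_in + tokens_out * price_out) / 1_000_000

# An assumed typical coding request: 2k tokens in, 1k tokens out.
mercury = cost(2000, 1000, 0.25, 1.00)
gemini = cost(2000, 1000, 0.15, 0.60)
print(f"Mercury: ${mercury:.6f}, Gemini Flash: ${gemini:.6f}")
```

At these numbers Mercury is roughly 1.7x the per-request cost, so the speed advantage has to carry the pricing.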
I would be extremely hesitant to assume a direct relationship between pricing and cost. A behemoth like Google is very willing to take significant losses for years to grow market share. Back in 2014-2015 Uber often charged less than the Boston subway, but it always cost them MUCH more under the hood. AFAIK they're still not profitable.
Chinese companies will be similarly eager for market share, but not everyone has the access to the same raw capital.
I just tried giving it a coding snippet that has a bug. ChatGPT & Claude found the bug instantly. Mercury fails to find it even after several reprompts (it's hallucinating). On the upside it is significantly faster. That's promising, since the edge for ChatGPT and Claude is in the prolonged time and energy they've spent building training infrastructure, tooling, datasets, etc. to pump out models with high task performance.
Keep in mind this release was never intended to prove superiority. Rather, it shows an alternative structure with some promising performance characteristics. More work needs to be done to show real application, but this is very valuable learning.
That's part of the reason to compare against older, smaller models since they're at a more comparable stage of development.
Depends on the shape of the cup! You can contrive a cup shaped like an exponentially flaring horn, where adding the milk increases the volume a little, which massively increases the surface area, and so leads to faster cooling. Or you can have a cup with a converging top, like a brandy glass, where adding the milk reduces the surface area, and makes cooling even slower.
To determine which option cools coffee the most, I'll analyze the heat transfer physics involved.
The key insight is that the rate of heat loss depends on the temperature difference between the coffee and the surrounding air. When the coffee is hotter, it loses heat faster.
Option 1 (add milk first, then wait):
- Adding cold milk immediately lowers the coffee temperature right away
- The coffee then cools more slowly during the 2-minute wait because the temperature difference with the environment is smaller
Option 2 (wait first, then add milk):
- The hot coffee cools rapidly during the 2-minute wait due to the large temperature difference
- Then the cold milk is added, creating an additional temperature drop at the end
Option 2 will result in the lowest final temperature. This is because the hotter coffee in option 2 loses heat more efficiently during the waiting period (following Newton's Law of Cooling), and then gets the same cooling benefit from the milk addition at the end.
The mathematical principle behind this is that the rate of cooling is proportional to the temperature difference, so keeping the coffee hotter during the waiting period maximizes heat loss to the environment.
Hmm, a good nerd-snipe puzzle. I was never very good at physics, so hopefully someone can check my work... assuming that upon mixing the coffee is at Tc and the milk at Tm, and simplifying to assume equal mass & specific heat, we have (Tf - Tc) = -(Tf - Tm) => Tf = (Tc+Tm)/2, which is intuitive (upon mixing we get the average temperature).
On the assumption that the cold milk is always at a fixed temperature until it's mixed in, the temperature of the coffee at the point of mixing is the main factor. Before and after, it follows Newton's law of cooling. So we're comparing something like Tenv + [(Tc+Tm)/2 - Tenv]e^(-2) vs (Tenv + [Tc - Tenv]e^(-2) + Tm)/2. The latter is greater than the former only when Tm > Tenv (the milk isn't cold); in other words, it's better to let the coffee cool as much as possible before mixing, assuming the milk is colder than the environment.
Another interesting twist is to consider the case where the milk isn't kept at a fixed temperature but is also subject to warming (it's taken out of the fridge). Then the former equation is unchanged but the latter becomes (Tenv + [Tc - Tenv]e^(-2) + Tenv + [Tm - Tenv]e^(-2))/2. But this is equivalent to the former equation, so in this case it doesn't matter when you mix it.
Not 100% confident in either analysis, but I wonder if there's a more intuitive way to see it. I also don't know if deviating from the assumption of equal mass & specific heat changes the analysis (it might lead to a small range where, in the fixed case, situation 1 is better?). It's definitely not "intuitive" to me.
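A quick numerical check of the fixed-milk-temperature case above (unit rate constant, equal mass & specific heat, temperatures made up):

```python
import math

def cool(T, T_env, t, k=1.0):
    """Newton's law of cooling: T(t) = T_env + (T0 - T_env) * exp(-k t)."""
    return T_env + (T - T_env) * math.exp(-k * t)

T_env, Tc, Tm, t = 20.0, 90.0, 5.0, 2.0  # room, hot coffee, fridge-cold milk, wait

# Option 1: mix first, then wait.
mix_first = cool((Tc + Tm) / 2, T_env, t)
# Option 2: wait, then mix (milk stays at Tm the whole time).
mix_last = (cool(Tc, T_env, t) + Tm) / 2

print(mix_first, mix_last)
# With Tm < T_env, mix_last comes out lower, consistent with the analysis
# above: the sign of (mix_last - mix_first) matches the sign of (Tm - T_env).
```

Setting Tm = T_env makes the two options come out identical, which is a nice sanity check on the boundary case.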
The puzzle assumes that the room temperature is greater than the cold milk's temperature. When I added that the room temperature is, say, -10 °C, Mercury fails to see the difference.
Okay, let's break this down using the principle of heat transfer (specifically Newton's Law of Cooling):
Heat loss is faster when the temperature difference is greater. A very hot object loses heat to cool surroundings much faster than a warm object.
Option 1: Add milk immediately.
You instantly lower the coffee's temperature by mixing it with cold milk.
Now, this warmer (not hot) mixture sits for 2 minutes. Because the temperature difference between the mixture and the room is smaller, it cools down more slowly over those 2 minutes.
Option 2: Let it sit for 2 mins, then add milk.
The very hot coffee sits for 2 minutes. Because the temperature difference between the hot coffee and the room is large, it loses heat more quickly during these 2 minutes.
After 2 minutes of rapid cooling, you then add the cold milk, lowering the temperature further.
Conclusion:
To get the coffee to the lowest temperature, you should choose Option 2: Let it sit for 2 mins, then add the cold milk.
For me, ChatGPT (the free version, GPT-4o mini I believe?) gets it right, choosing option 2 because the coffee will cool faster due to the larger temperature difference.
Unless there's a gotcha somewhere in your prompt that I'm missing, like what if the temperature of the room is hotter than the coffee, or so cold that the coffee becomes colder than the milk, or something?
I would be surprised if any models get it wrong, since I assume it shows up in the training data a bunch?
> Mercury gets this right - while as of right now ChatGPT 4o get it wrong.
This is so common a puzzle it's discussed all over the internet.
It's in the data used to build the models.
What's so impressive about a machine that can spit out something easily found with a quick web search?
I had it write a Python program to calculate disk usage by directory -- basically a `du` clone. It was astonishingly fast (2s) and correct. I've tried other models which have gotten it wrong, been slow, or ignored my instructions to use topdown=False in the call to walk().
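For reference, a minimal sketch of that kind of `du` clone (my own version, not the model's output). topdown=False yields leaf directories first, so each directory's total can accumulate into its parent as the walk proceeds:

```python
import os

def du(root):
    """Per-directory disk usage in bytes, like `du`, via bottom-up os.walk."""
    totals = {}
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        # sum regular files directly in this directory
        size = sum(
            os.path.getsize(os.path.join(dirpath, f))
            for f in filenames
            if os.path.isfile(os.path.join(dirpath, f))
        )
        # children were visited already (bottom-up), so their totals exist
        size += sum(totals.get(os.path.join(dirpath, d), 0) for d in dirnames)
        totals[dirpath] = size
    return totals

# Usage: for path, size in sorted(du(".").items()): print(size, path)
```

Symlinks and permission errors are ignored here for brevity; a real clone would want to handle both.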
It's kind of weird to think that in a coding assistant, an LLM is regularly asked to produce a valid block of code top to bottom, or repeat a section of code with changes, when that's not what we do. (There are other intuitively odd things about this, like the amount of compute spent generating 'easy' tokens, e.g. repeating unchanged code.) Some of that might be that models are just weird and intuition doesn't apply. But maybe the way we do it--jumping around, correcting as we go, etc.--is legitimately an efficient use of effort, and a model could do its job better, with less effort, or both if it too used some approach other than generating the whole sequence start-to-finish.
There's already stuff in the wild moving that direction without completely rethinking how models work. Cursor and now other tools seem to have models for 'next edit' not just 'next word typed'. Agents can edit a thing and then edit again (in response to lints or whatever else); approaches based on tools and prompting like that can be iterated on without the level of resources needed to train a model. You could also imagine post-training a model specifically to be good at producing edit sequences, so it can actually 'hit backspace' or replace part of what it's written if it becomes clear it wasn't right, or if two parts of the output 'disagree' and need to be reconciled.
> an LLM is regularly asked to produce a valid block of code top to bottom, or repeat a section of code with changes, when that's not what we do.
Eh, it's mostly what we do. We don't re-type everything every time, but we do type top-to-bottom when we type. As you later mentioned, "next edit" models really strike that balance, and they're like 50% of the value I derive from a tool like Cursor.
I'd love to see more diff outputs instead of "retyping" everything (with a nice UI for the humans). I suspect that part of the reason we have these "inhuman" actions is that the chat interface we've been using has led to certain outputs being more desirable due to the medium.
Looks interesting, and my intuition is that code is a good application of diffusion LLMs, especially if they get support for "constrained generation", as there's already plenty of tooling around code (linters and so on).
Something I don't see explored in their presentation is the ability of the model to recover from errors / correct itself. SotA LLMs shine at this; a few back-and-forths with Sonnet / Gemini Pro / etc. really solve most problems nowadays.
Anybody able to get the "View Technical Report" button at the bottom to do anything? I was curious to glean more details but it doesn't work on either of my devices.
I'm curious what level of detail they're comfortable publishing around this, or are they going full secret mode?
>Instead of generating tokens one at a time, a dLLM produces the full answer at once. The initial answer is iteratively refined through a diffusion process, where a transformer suggests improvements for the entire answer at once at every step. In contrast to autoregressive transformers, the later tokens don’t causally depend on the earlier ones (leaving aside the requirement that the text should look coherent). For an intuition of why this matters, suppose that a transformer model has 50 layers and generates a 500-token reasoning trace, the final token of this trace being the answer to the question. Since information can only move vertically and diagonally inside this transformer and there are fewer layers than tokens, any computations made before the 450th token must be summarized in text to be able to influence the final answer at the last token. Unless the model can perform effective steganography, it had better output tokens that are genuinely relevant for producing the final answer if it wants the performed reasoning to improve the answer quality. For a dLLM generating the same 500-token output, the earlier tokens have no such causal role, since the final answer isn’t autoregressively conditioned on the earlier tokens. Thus, I’d expect it to be easier for a dLLM to fill those tokens with post-hoc rationalizations.
>Despite this, I don’t expect dLLMs to be a similarly negative development as Huginn or COCONUT would be. The reason is that in dLLMs, there’s another kind of causal dependence that could prove to be useful for interpreting those models: the later refinements of the output causally depend on the earlier ones. Since dLLMs produce human-readable text at every diffusion iteration, the chains of uninterpretable serial reasoning aren’t that deep. I’m worried about the text looking like gibberish at early iterations and the reasons behind the iterative changes the diffusion module makes to this text being hard to explain, but the intermediate outputs nevertheless have the form of human-readable text, which is more interpretable than long series of complex matrix multiplications.
Based solely on the above, my armchair analysis is that it seems like it's not strictly diffusion in the Langevin diffusion/denoising sense (since there are discrete iteration rounds), but instead borrows the idea of "iterative refinement". You drop the causal masking and token-by-token autoregressive generation, and instead start with a bunch of text and propose a series of edits at each step? On one hand dropping the causal masking over token sequence means that you don't have an objective that forces the LLM to learn a representation sufficient to "predict" things as normally thought, but on the flipside there is now a sort of causal masking over _time_, since each iteration depends on the previous. It's a neat tradeoff.
There are so many models. Every single day half a dozen new models land. And even more papers.
It feels like models are becoming fungible apart from the hyperscaler frontier models from OpenAI, Google, Anthropic, et al.
I suppose VCs won't be funding many more "labs"-type companies or "we have a model" as the core value prop companies? Unless it has a tight application loop or is truly unique?
Disregarding the team composition, research background, and specific problem domain - if you were starting an AI company today, what part of the stack would you focus on? Foundation models, AI/ML infra, tooling, application layer, ...?
Where does the value accrue? What are the most important problems to work on?
Word on the street is a lot of money is going into vertical application AI companies this season. Makes sense - the bitter lesson means capturing a market and proprietary data is a good play, while frontier models keep getting better at using what you (and only you) own.
I would be interested to see how people would apply this as a coding assistant. For me, its application in solutioning seems very strong, particularly vibe coding, and potentially agentic coding. One of my main gripes with LLM-assisted coding is that getting output which catches all the scenarios I envision takes multiple attempts at refining my prompt, each requiring regeneration of the output. Iterations are slow and often painful.
With the speed this can generate its solutions, you could have it loop through attempting the solution, feeding itself the output (including any errors found), and going again until it builds the "correct" solution.
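That loop might be sketched like this; `fast_model_fix` is a hypothetical stand-in for a real model call that would receive the failing code and the error text:

```python
# Sketch of the generate-run-refine loop, with a stub in place of the model.
def fast_model_fix(code, error):
    # stub: "fixes" a known bug when told about it; a real version would
    # send `code` and `error` back to the dLLM and return its revision
    if "division by zero" in error:
        return code.replace("1 / 0", "1 / 2")
    return code

def run(code):
    """Execute the candidate code; return the error message, or None on success."""
    try:
        exec(code, {})
        return None
    except Exception as e:
        return str(e)

code = "x = 1 / 0"
for attempt in range(5):      # bounded retries so the loop always terminates
    error = run(code)
    if error is None:
        break                 # code runs cleanly, done
    code = fast_model_fix(code, error)

print(code)  # "x = 1 / 2" after one refinement round
```

The faster each round trip is, the more of these rounds fit inside a human's patience budget, which is the whole appeal of a 1000-tokens/sec model here.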
I basically did this with aider and Gemini 2.5 a few days ago and was blown away. Basically talked about the project structure, let it write the final plan to a CONVENTIONS.md file that gets automatically attached to the context, then kept asking "What should we do next" until tests were ready, and then I just ran a loop where it modifies the code and I press Return to run the tests, add the output to the prompt, and let it go again.
About 10,000 lines of code, and I only intervened a few times: to revert a few commits, and once to cut a big file into smaller ones so we could tackle the problems one by one.
I did not expect LLMs to be able to do this so soon. But I mainly commented to say this about aider: the iteration loop really was mostly me pressing Return. Especially with the navigator mode PR, as it automatically looked up the correct files to attach to the context.
This sounds like a neat idea but it seems like bad timing. OpenAI just released token-based image generation that beats the best diffusion image generation. If diffusion isn't even the best at generating images, I don't know if I'm going to spend a lot of time evaluating it for text.
Speed is great but it doesn't seem like other text-based model trends are going to work out of the box, like reasoning. So you have to get dLLMs up to the quality of a regular autoregressive LLM and then you need to innovate more to catch up to reasoning models, just to match the current state of the art. It's possible they'll get there, but I'm not optimistic.
I just tried it, and it was able to perfectly generate a piece of code I needed for a 12-month rolling graph based on a list of invoices, and it seemed a bit easier and faster than ChatGPT.
>Mercury is up to 10x faster than frontier speed-optimized LLMs. Our models run at over 1000 tokens/sec on NVIDIA H100s, a speed previously possible only using custom chips.
This means on custom chips (Cerebras, Graphcore, etc...) we might see 10k-100k tokens/sec? Amazing stuff!
Also of note, funny how text generation started w/ autoregression/tokens and diffusion seems to perform better, while image generation went the opposite way.
After reviewing what they have on their playground, this thing seems to be a scam.
They're running Qwen on a traditional LLM pipeline. The "diffusion effect", as it says there, is just decorative, lmao. That in itself shouldn't break the deal, as I understand you have to put on a show, but looking at the latency and timing of their outputs, this is not a diffusion model, as they claim. They're also not even close to the 1,000 TPS figure they put out.
I'm surprised nobody on this forum got the slightest clue about that. I guess I should 4x my fee again :).
I'd hope that with diffusion, it would be able to go back and forth between parts of the output to adjust issues with part of the output which it had previously generated. This would not be possible with a purely sequential model.
However,
> Prompt: Write a sentence with ten words which has exactly as many r’s in the first five words as in the last five
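One way to sanity-check a candidate answer to that prompt (the sample sentences below are contrived test strings, not model output):

```python
# Checker for the prompt's constraint: exactly ten words, with equal counts
# of "r" in the first five words and the last five.
def satisfies(sentence):
    words = sentence.split()
    if len(words) != 10:
        return False
    def rs(ws):
        return sum(w.lower().count("r") for w in ws)
    return rs(words[:5]) == rs(words[5:])

print(satisfies("Every red bird rests here around every red bird rests"))   # True (5 r's each half)
print(satisfies("Red roses grow near rivers while ferns rest under trees")) # False (6 vs 4)
```

Constraint prompts like this are easy to verify mechanically, which makes them handy micro-benchmarks for comparing models.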
This is awesome for the future of autocomplete. Current models aren't fast enough to give useful suggestions at the speed that I type - but this certainly is.
That said, token-based models are currently fast enough for most real-time chat applications, so I wonder what other use-cases there will be where speed is greatly prioritized over smarts. Perhaps trading on Trump tweets?
Would have been nice if, along with this demo video[1] comparing the speed of 3 models, they had shared the artifacts as well, so we could compare quality.
This is genius! There are tradeoffs between diffusion and autoregressive models in image generation, so why not use diffusion models in text generation? Excited to see where this ends up, and I wouldn't be surprised if we saw some of these types of models appear in future updates to popular families like Llama or Qwen.
Interesting approach. However, I never thought of autoregression as being _the_ current issue with language modeling. If anything, it seems the community was generally surprised just how far next "token" prediction took us. Remember back when we did char-generating RNNs and were impressed they could make almost coherent sentences?
Diffusion is an alternative but I am having a hard time understanding the whole "built in error correction" that sounds like marketing BS. Both approaches replicate probability distributions which will be naturally error-prone because of variance.
Consider the entropy of the distribution of token X in these examples:
"Four X"
and
"Four X and seven years ago".
In the first case X could be pretty much anything, but in the second case we both know the only likely completion.
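To make that concrete with made-up numbers, compare the Shannon entropy of a wide-open next-token distribution with a sharply peaked one (both distributions are invented for illustration):

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a {token: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# "Four X": X could be nearly anything -- model it as uniform over 1000 tokens.
no_context = {f"tok{i}": 1 / 1000 for i in range(1000)}
# "Four X and seven years ago": the right context pins X down almost completely.
full_context = {"score": 0.99, "other": 0.01}

print(entropy(no_context), entropy(full_context))  # ~9.97 bits vs ~0.08 bits
```

The non-causal setting lets the model condition on that right context directly instead of having to anticipate it.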
So it seems like there would be a huge advantage in not having to run autoregressively. But in practice it's less significant than you might imagine, because the AR model can internally model the probability of X conditioned on the stuff it hasn't output yet; in fact, because without reinforcement the training causes it to converge on the target probability of the whole output, the AR model must do some form of lookahead internally.
(That said RLHF seems to break this product of the probabilities property pretty badly, so maybe it will be the case that diffusion will suffer less intelligence loss ::shrugs::).
The linked page only compares to very old and very small models. But the pricing is higher even than the latest Gemini Flash 2.5 model, which performs far better than anything they compare to.
Their pockets are probably not as deep as Google's in terms of willingness to burn cash for market share.
If speed is your most important metric, I could still see there being a niche for this.
From a pure VC perspective though, I wonder if they'd be better off Open Sourcing their model to get faster innovation + centralization like Llama has done. (Or Mistral with keeping some models private, some public.)
Use it as marketing, get your name out there, and have people use your API when they realize they don't want to deal with scaling AI compute themselves lol
this convo has me rethinking how much speed actually matters vs just getting stuff right - you think most problems are just about better habits or purely tooling upgrades at this point?
Super happy to see something like this getting traction. As someone who is trying to reduce my carbon footprint, I sometimes feel bad about asking any model to do something trivial. With something like this, perhaps the guilt will lessen.
If you live in the U.S., marginal electricity demand during the day is almost invariably met with solar or wind (solar typically runs at a huge surplus on sunny days). Go forth and AI in peace, marcyb5st.
To put this into perspective, driving for an hour in an electric car (15kW avg consumption) consumes about as much energy as 50,000 chatgpt queries [0]
Running your laptop for an hour would be around 100 queries.
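The arithmetic behind those two comparisons, assuming the commonly cited ~0.3 Wh per ChatGPT query (an estimate, not a measurement):

```python
# Energy comparison: EV driving and laptop use, in equivalent ChatGPT queries.
wh_per_query = 0.3          # assumed energy per query, watt-hours

ev_wh = 15_000 * 1          # 15 kW average draw for one hour of driving
laptop_wh = 30 * 1          # ~30 W laptop running for one hour

print(round(ev_wh / wh_per_query))      # 50000 queries
print(round(laptop_wh / wh_per_query))  # 100 queries
```

Whatever the true per-query figure is, the ratio between the two scenarios stays the same, which is the point of the comparison.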
inerte|10 months ago
I guess only giving it a try will tell.
otabdeveloper4|10 months ago
Or just save yourself the time and money and code it yourself like it's 2020.
(Unless it's your employer paying for this waste, in which case go for it, I guess.)
cedws|10 months ago
Is this really what people are doing these days?
g-mork|10 months ago
Saw another on Twitter past few days that looked like a better contender to Mercury, doesn't look like it got posted to LocalLLaMa, and I can't find it now. Very exciting stuff
freeqaz|10 months ago
https://www.reddit.com/media?url=https://i.redd.it/xci0dlo7h...
m-hodges|10 months ago
To transform the string "AB" to "AC" using the given rules, follow these steps:
1. *Apply Rule 1*: Add "C" to the end of "AB" (since it ends in "B"). - Result: "ABC"
2. *Apply Rule 4*: Remove the substring "CC" from "ABC". - Result: "AC"
Thus, the series of transformations is: - "AB" → "ABC" (Rule 1) - "ABC" → "AC" (Rule 4)
This sequence successfully transforms "AB" to "AC".
¹ https://matthodges.com/posts/2025-04-21-openai-o4-mini-high-...
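For anyone checking the quoted answer: taking the two rules exactly as the answer states them (Rule 1 appends "C" to a string ending in "B"; Rule 4 deletes a "CC" substring), a few lines of Python show that step 2 is illegal, since "ABC" contains no "CC":

```python
# Rules as stated in the quoted answer (reconstructed, not the full rule set):
# Rule 1: if the string ends in "B", append "C".
# Rule 4: delete one occurrence of "CC".

def apply_rule_1(s):
    return s + "C" if s.endswith("B") else None

def apply_rule_4(s):
    return s.replace("CC", "", 1) if "CC" in s else None

s = apply_rule_1("AB")   # "ABC" -- this step is valid
print(s)
print(apply_rule_4(s))   # None: "ABC" contains no "CC", so step 2 never applies
```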
invalidroot|10 months ago
(Edited to remove direct spoiler for the MU-puzzle, in case people want to try it.)
schappim|10 months ago
The cost[1] is US$1.00 per million output tokens and US$0.25 per million input tokens. By comparison, Gemini 2.5 Flash Preview charges US$0.15 per million tokens for text input and $0.60 (non-thinking) output[2].
Hmmm... at those prices they need to focus on markets where speed is especially important, e.g. high-frequency trading, transcription/translation services, and hardware/IoT alerting!
1. https://files.littlebird.com.au/Screenshot-2025-05-01-at-9.3...
2. https://files.littlebird.com.au/pb-IQYUdv6nQo.png
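For a concrete feel of the quoted prices, a tiny per-request cost calculation (the workload sizes are illustrative, the per-million-token prices are the ones quoted above):

```python
def cost(tokens_in, tokens_out, price_in, price_out):
    """Per-request cost in USD, given per-million-token prices."""
    return (tokens_in * price_in + tokens_out * price_out) / 1_000_000

# Example workload: 2,000 input tokens, 500 output tokens.
mercury = cost(2000, 500, 0.25, 1.00)
gemini = cost(2000, 500, 0.15, 0.60)
print(f"Mercury: ${mercury:.4f}, Gemini Flash: ${gemini:.4f}")
```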
kmacdough|10 months ago
Chinese companies will be similarly eager for market share, but not everyone has the access to the same raw capital.
dvdhs|10 months ago
jbellis|10 months ago
vlovich123|10 months ago
kmacdough|10 months ago
That's part of the reason to compare against older, smaller models since they're at a more comparable stage of development.
jonplackett|10 months ago
You have 2 minutes to cool down a cup of coffee to the lowest temp you can
You have two options:
1. Add cold milk immediately, then let it sit for 2 mins.
2. Let it sit for 2 mins, then add the cold milk.
Which one cools the coffee to the lowest temperature and why?
And Mercury gets this right, while as of right now ChatGPT 4o gets it wrong.
So that’s pretty impressive.
twic|10 months ago
jefftk|10 months ago
To determine which option cools coffee the most, I'll analyze the heat transfer physics involved. The key insight is that the rate of heat loss depends on the temperature difference between the coffee and the surrounding air. When the coffee is hotter, it loses heat faster. Option 1 (add milk first, then wait):
- Adding cold milk immediately lowers the coffee temperature right away
- The coffee then cools more slowly during the 2-minute wait because the temperature difference with the environment is smaller
Option 2 (wait first, then add milk):
- The hot coffee cools rapidly during the 2-minute wait due to the large temperature difference
- Then the cold milk is added, creating an additional temperature drop at the end
Option 2 will result in the lowest final temperature. This is because the hotter coffee in option 2 loses heat more efficiently during the waiting period (following Newton's Law of Cooling), and then gets the same cooling benefit from the milk addition at the end. The mathematical principle behind this is that the rate of cooling is proportional to the temperature difference, so keeping the coffee hotter during the waiting period maximizes heat loss to the environment.
krackers|10 months ago
On the assumption that the cold milk is always at a fixed temperature until it's mixed in, then the temperature of coffee at point of mixing is the main factor. Before and after, it follows newton's law of cooling. So we're comparing something like Tenv + [(Tc+Tm)/2 - Tenv]e^(-2) vs (Tenv + [Tc - Tenv]e^(-2) + Tm)/2. The latter is greater than the former only when Tm > Tenv (the milk isn't cold), or in other words it's better to let the coffee cool as much as possible before mixing assuming the milk is colder than the environment.
Another interesting twist is to consider the case where the milk isn't kept at a fixed temperature but is also subject to warming (it's taken out of the fridge). Then the former equation is unchanged but the latter becomes (Tenv + [Tc - Tenv]e^(-2) + Tenv + [Tm - Tenv]e^(-2))/2. But this is equivalent to the former equation, so in this case it doesn't matter when you mix it.
Not 100% confident in either analysis, but I wonder if there's a more intuitive way to see it. I also don't know if deviating from the assumption of equal mass & specific heat changes the analysis (it might lead to a small range where, for the fixed case, situation 1 is better?) It's definitely not "intuitive" to me.
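A quick numerical check of the above, assuming equal masses and specific heats (so mixing just averages temperatures) and milk held at a fixed temperature; the rate constant and temperatures are illustrative:

```python
import math

def cool(T, T_env, k, t):
    """Newton's law of cooling: closed-form temperature after time t."""
    return T_env + (T - T_env) * math.exp(-k * t)

T_coffee, T_milk, T_env = 90.0, 5.0, 20.0  # deg C (illustrative)
k, t = 0.5, 2.0                            # rate constant (1/min), minutes

# Option 1: mix first (equal masses -> average), then cool for 2 min.
opt1 = cool((T_coffee + T_milk) / 2, T_env, k, t)

# Option 2: cool for 2 min, then mix with milk still at 5 C.
opt2 = (cool(T_coffee, T_env, k, t) + T_milk) / 2

print(f"Option 1: {opt1:.2f} C, Option 2: {opt2:.2f} C")
assert opt2 < opt1  # milk colder than the room -> wait, then add
```

Flipping `T_env` below the milk temperature reverses the ordering, consistent with the -10 °C case mentioned downthread.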
maytc|10 months ago
The puzzle assumes that the room temperature is greater than the cold milk's temperature. When I added that the room temperature is, say, -10 °C, Mercury fails to see the difference.
byearthithatius|10 months ago
Okay, let's break this down using the principle of heat transfer (specifically Newton's Law of Cooling):
Conclusion: To get the coffee to the lowest temperature, you should choose Option 2: Let it sit for 2 mins, then add the cold milk.
drusepth|10 months ago
crazygringo|10 months ago
Unless there's a gotcha somewhere in your prompt that I'm missing, like what if the temperature of the room is hotter than the coffee, or so cold that the coffee becomes colder than the milk, or something?
I would be surprised if any models get it wrong, since I assume it shows up in training data a bunch?
adammarples|10 months ago
cratermoon|10 months ago
> Mercury gets this right - while as of right now ChatGPT 4o get it wrong.
This is so common a puzzle it's discussed all over the internet. It's in the data used to build the models. What's so impressive about a machine that can spit out something easily found with a quick web search?
emmelaich|10 months ago
behnamoh|10 months ago
twotwotwo|10 months ago
There's already stuff in the wild moving that direction without completely rethinking how models work. Cursor and now other tools seem to have models for 'next edit' not just 'next word typed'. Agents can edit a thing and then edit again (in response to lints or whatever else); approaches based on tools and prompting like that can be iterated on without the level of resources needed to train a model. You could also imagine post-training a model specifically to be good at producing edit sequences, so it can actually 'hit backspace' or replace part of what it's written if it becomes clear it wasn't right, or if two parts of the output 'disagree' and need to be reconciled.
From a quick search it looks like https://arxiv.org/abs/2306.05426 in 2023 discussed backtracking LLMs and https://arxiv.org/html/2410.02749v3 / https://github.com/upiterbarg/lintseq trained models on synthetic edit sequences. There is probably more out there with some digging. (Not the same topic, but the search also turned up https://arxiv.org/html/2504.20196 from this Monday(!) about automatic prompt improvement for an internal code-editing tool at Google.)
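To make the "edit sequence" idea concrete: instead of emitting raw text, the model would emit operations like (start, end, replacement). A minimal sketch of applying such a sequence (the representation is my guess, not the one from those papers):

```python
def apply_edits(text, edits):
    """Apply a list of (start, end, replacement) edits to text.

    Edits index into the original string, so they're applied
    right-to-left to keep earlier offsets valid."""
    for start, end, repl in sorted(edits, reverse=True):
        text = text[:start] + repl + text[end:]
    return text

src = "def add(a, b):\n    return a - b\n"
# One edit "hits backspace" on the wrong operator.
i = src.index("-")
fixed = apply_edits(src, [(i, i + 1, "+")])
print(fixed)
```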
vineyardmike|10 months ago
Eh, it's mostly what we do. We don't re-type everything every time, but we do type top-to-bottom when we type. As you later mentioned, "next edit" models really strike that balance, and they're like 50% of the value I derive from a tool like Cursor.
I'd love to see more diff-outputs instead of "retyping" everything (with a nice UI for the humans). I suspect that part of the reason we have these "inhuman" actions is that the chat interface we've been using has led to certain outputs being more desirable due to the medium.
NitpickLawyer|10 months ago
Something I don't see explored in their presentation is the model's ability to recover from errors and correct itself. SotA LLMs shine at this; a few back-and-forths with sonnet / gemini pro / etc. really solve most problems nowadays.
freeqaz|10 months ago
I'm curious what level of detail they're comfortable publishing around this, or are they going full secret mode?
albertzeyer|10 months ago
But everything except the first page seems to be missing from this PDF? There is just an abstract and a (partial) outline.
krackers|10 months ago
>Instead of generating tokens one at a time, a dLLM produces the full answer at once. The initial answer is iteratively refined through a diffusion process, where a transformer suggests improvements for the entire answer at once at every step. In contrast to autoregressive transformers, the later tokens don’t causally depend on the earlier ones (leaving aside the requirement that the text should look coherent). For an intuition of why this matters, suppose that a transformer model has 50 layers and generates a 500-token reasoning trace, the final token of this trace being the answer to the question. Since information can only move vertically and diagonally inside this transformer and there are fewer layers than tokens, any computations made before the 450th token must be summarized in text to be able to influence the final answer at the last token. Unless the model can perform effective steganography, it had better output tokens that are genuinely relevant for producing the final answer if it wants the performed reasoning to improve the answer quality. For a dLLM generating the same 500-token output, the earlier tokens have no such causal role, since the final answer isn’t autoregressively conditioned on the earlier tokens. Thus, I’d expect it to be easier for a dLLM to fill those tokens with post-hoc rationalizations.
>Despite this, I don’t expect dLLMs to be a similarly negative development as Huginn or COCONUT would be. The reason is that in dLLMs, there’s another kind of causal dependence that could prove to be useful for interpreting those models: the later refinements of the output causally depend on the earlier ones. Since dLLMs produce human-readable text at every diffusion iteration, the chains of uninterpretable serial reasoning aren’t that deep. I’m worried about the text looking like gibberish at early iterations and the reasons behind the iterative changes the diffusion module makes to this text being hard to explain, but the intermediate outputs nevertheless have the form of human-readable text, which is more interpretable than long series of complex matrix multiplications.
Based solely on the above, my armchair analysis is that it seems like it's not strictly diffusion in the Langevin diffusion/denoising sense (since there are discrete iteration rounds), but instead borrows the idea of "iterative refinement". You drop the causal masking and token-by-token autoregressive generation, and instead start with a bunch of text and propose a series of edits at each step? On one hand dropping the causal masking over token sequence means that you don't have an objective that forces the LLM to learn a representation sufficient to "predict" things as normally thought, but on the flipside there is now a sort of causal masking over _time_, since each iteration depends on the previous. It's a neat tradeoff.
Subthread https://news.ycombinator.com/item?id=43851429 also has some discussion
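For what it's worth, the decoding loop described above can be caricatured in a few lines. This is a toy with a stand-in "denoiser" (a real model would score positions with its output distribution, and Mercury's actual schedule is unpublished): start fully masked, propose every position in parallel each round, and commit the most confident ones.

```python
import random

MASK = "<mask>"
TARGET = ["the", "coffee", "cools", "fastest", "when", "hottest"]

def denoise_step(seq):
    """Stand-in for the transformer: propose (token, confidence) for
    every position in parallel. Confidence is random here."""
    return [(TARGET[i], random.random()) for i in range(len(seq))]

def diffusion_decode(length, steps=4, seed=0):
    random.seed(seed)
    seq = [MASK] * length
    for step in range(steps):
        proposals = denoise_step(seq)
        # Unmask a growing fraction of positions by confidence each
        # round; already-filled tokens may also be revised.
        k = max(1, (step + 1) * length // steps)
        ranked = sorted(range(length), key=lambda i: -proposals[i][1])
        for i in ranked[:k]:
            seq[i] = proposals[i][0]
    return seq

print(" ".join(diffusion_decode(len(TARGET))))
```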
echelon|10 months ago
It feels like models are becoming fungible apart from the hyperscaler frontier models from OpenAI, Google, Anthropic, et al.
I suppose VCs won't be funding many more "labs"-type companies or "we have a model" as the core value prop companies? Unless it has a tight application loop or is truly unique?
Disregarding the team composition, research background, and specific problem domain - if you were starting an AI company today, what part of the stack would you focus on? Foundation models, AI/ML infra, tooling, application layer, ...?
Where does the value accrue? What are the most important problems to work on?
vessenes|10 months ago
jtonz|10 months ago
With the speed this can generate its solutions, you could have it loop through attempting the solution, feeding itself the output (including any errors found), and going again until it builds the "correct" solution.
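That loop is simple to wire up. A sketch, with `ask_model` as a hypothetical stand-in for the actual model call (stubbed here so the example runs):

```python
import traceback

def ask_model(prompt):
    """Hypothetical model call. Stubbed: returns broken code on the
    first try, and a fix once it sees the error message."""
    if "NameError" in prompt:
        return "result = sum([1, 2, 3])"
    return "result = total([1, 2, 3])"  # buggy first attempt

def generate_and_fix(task, max_rounds=5):
    prompt = task
    for _ in range(max_rounds):
        code = ask_model(prompt)
        scope = {}
        try:
            exec(code, scope)       # run the candidate solution
            return scope["result"]  # success
        except Exception:
            # Feed the full traceback back into the next prompt.
            prompt = f"{task}\nYour code failed:\n{traceback.format_exc()}"
    raise RuntimeError("no working solution found")

print(generate_and_fix("sum a list"))  # converges on the second round
```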
bayesianbot|10 months ago
About 10,000 lines of code, and I only intervened a few times: to revert a few commits, and once to cut a big file into smaller ones so we could tackle the problems one by one.
I did not expect LLMs to be able to do this so soon. But I mainly wanted to comment on aider: the iteration loop really was mostly me pressing return. Especially in the navigator-mode PR, as it automatically looked up the correct files to attach to the context.
jbellis|10 months ago
parsimo2010|10 months ago
Speed is great but it doesn't seem like other text-based model trends are going to work out of the box, like reasoning. So you have to get dLLMs up to the quality of a regular autoregressive LLM and then you need to innovate more to catch up to reasoning models, just to match the current state of the art. It's possible they'll get there, but I'm not optimistic.
jonplackett|10 months ago
I wonder if the same would be true for a multi-modal diffusion model that can now also speak?
orbital-decay|10 months ago
jakeinsdca|10 months ago
moralestapia|10 months ago
This means on custom chips (Cerebras, Graphcore, etc...) we might see 10k-100k tokens/sec? Amazing stuff!
Also of note, funny how text generation started w/ autoregression/tokens and diffusion seems to perform better, while image generation went the opposite way.
moralestapia|10 months ago
They're running Qwen on a traditional LLM pipeline. The "diffusion effect", as it says there, is just decorative, lmao. That in itself shouldn't break the deal, as I understand you have to put on a show, but looking at the latency and timing of their outputs this is not a diffusion model, as they claim. They're also not even close to the 1,000 TPS figure they put out.
I'm surprised nobody on this forum got the slightest clue on that. I guess I should 4x my fee again :).
agnishom|10 months ago
However,
> Prompt: Write a sentence with ten words which has exactly as many r’s in the first five words as in the last five
>
> Response: Rapidly running, rats rush, racing, racing.
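The response fails before you even count r's: it's six words, not ten. A quick checker (my own, not from the linked chat):

```python
def check(sentence):
    """True iff the sentence has exactly ten words and the same
    number of r's in the first five as in the last five."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    r_count = lambda ws: sum(w.count("r") for w in ws)
    return len(words) == 10 and r_count(words[:5]) == r_count(words[5:])

resp = "Rapidly running, rats rush, racing, racing."
print(len(resp.split()), check(resp))  # 6 words -> fails
```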
rfv6723|10 months ago
o4 mini
https://chatgpt.com/share/681315c2-aa90-800d-b02d-c3ba653281...
pants2|10 months ago
That said, token-based models are currently fast enough for most real-time chat applications, so I wonder what other use-cases there will be where speed is greatly prioritized over smarts. Perhaps trading on Trump tweets?
tzury|10 months ago
[1] https://framerusercontent.com/assets/cWawWRJn8gJqqCGDsGb2gN0...
kittikitti|10 months ago
StriverGuy|10 months ago
mlsu|10 months ago
unknown|10 months ago
[deleted]
badmonster|10 months ago
lostmsu|10 months ago
carterschonwald|10 months ago
byearthithatius|10 months ago
Diffusion is an alternative, but I am having a hard time understanding the whole "built-in error correction" claim; that sounds like marketing BS. Both approaches replicate probability distributions, which are naturally error-prone because of variance.
nullc|10 months ago
"Four X"
and
"Four X and seven years ago".
In the first case X could be pretty much anything, but in the second case we both know the only likely completion.
So it seems like there would be a huge advantage in not having to run autoregressively. But in practice it's less significant than you might imagine, because the AR model can internally model the probability of X conditioned on the stuff it hasn't output yet, and in fact, because without reinforcement the training causes it to converge on the target probability of the whole output, the AR model must do some form of lookahead internally.
(That said RLHF seems to break this product of the probabilities property pretty badly, so maybe it will be the case that diffusion will suffer less intelligence loss ::shrugs::).
strangescript|10 months ago
ZeroTalent|10 months ago
sujayk_33|10 months ago
rfv6723|10 months ago
Groq is heading to a dead end.
unknown|10 months ago
[deleted]
jph00|10 months ago
freeqaz|10 months ago
If speed is your most important metric, I could still see there being a niche for this.
From a pure VC perspective though, I wonder if they'd be better off Open Sourcing their model to get faster innovation + centralization like Llama has done. (Or Mistral with keeping some models private, some public.)
Use it as marketing, get your name out there, and have people use your API when they realize they don't want to deal with scaling AI compute themselves lol
vineyardmike|10 months ago
They're comparing against the fastest models. That's why smaller models are shown.
jbellis|10 months ago
good-luck86523|10 months ago
High tech US service industry exports are cooked.
stats111|10 months ago
gitroom|10 months ago
mackepacke|10 months ago
marcyb5st|10 months ago
whall6|10 months ago
mmoskal|10 months ago
[0] https://epoch.ai/gradient-updates/how-much-energy-does-chatg...
ris|10 months ago