
Mercury: Ultra-fast language models based on diffusion

576 points | PaulHoule | 8 months ago | arxiv.org

242 comments

[+] mike_hearn|8 months ago|reply
A good chance to bring up something I've been flagging to colleagues for a while now: with LLM agents we are very quickly going to become even more CPU-bottlenecked on testing performance than we are today, and every team I know of was bottlenecked on CI speed even before LLMs. There's no point having an agent that can write code 100x faster than a human if every change takes an hour to test.

Maybe I've just got unlucky in the past, but in most projects I worked on a lot of developer time was wasted on waiting for PRs to go green. Many runs end up bottlenecked on I/O or availability of workers, and so changes can sit in queues for hours, or they flake out and everything has to start again.

As they get better, coding agents are going to be assigned simple tickets that they turn into green PRs, with the model reacting to test failures and fixing them as it goes. This will make the CI bottleneck even worse.

It feels like there's a lot of low-hanging fruit in most projects' testing setups, but for some reason I've seen nearly no progress here for years. It feels like we kinda collectively got used to the idea that CI services are slow and expensive, then stopped trying to improve things. If anything, CI got a lot slower over time as people tried to make builds fully hermetic (so no inter-run caching) and moved them from on-prem dedicated hardware to expensive cloud VMs with slow I/O, which haven't gotten much faster over time.

Mercury is crazy fast, and in a few quick tests I did, it created good and correct code. How will we make test execution keep up with it?

[+] kccqzy|8 months ago|reply
> Maybe I've just got unlucky in the past, but in most projects I worked on a lot of developer time was wasted on waiting for PRs to go green.

I don't understand this. Developer time is so much more expensive than machine time. Do companies not just double their CI workers after hearing people complain? It's just a throw-more-resources problem. When I was at Google, it was somewhat common for me to debug non-deterministic bugs such as a missing synchronization or fence causing flakiness; and it was common to just launch 10000 copies of the same test on 10000 machines to find perhaps a single digit number of failures. My current employer has a clunkier implementation of the same thing (no UI), but there's also a single command to launch 1000 test workers to run all tests from your own checkout. The goal is to finish testing a 1M loc codebase in no more than five minutes so that you get quick feedback on your changes.

> make builds fully hermetic (so no inter-run caching)

These are orthogonal. You want maximum deterministic CI steps so that you make builds fully hermetic and cache every single thing.
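One way to see why hermeticity and caching compose: a hermetic step declares all of its inputs, so a cache key can be derived by hashing them. A minimal sketch (the function name and shape are illustrative, not any particular build system's API):

```python
import hashlib

def cache_key(inputs: dict[str, bytes], tool_version: str) -> str:
    """Deterministic key over a hermetic build step's declared inputs.

    Because the step depends on nothing outside `inputs` and the
    toolchain version, identical keys imply identical outputs, so a
    cached artifact can be reused safely across runs and machines.
    """
    h = hashlib.sha256()
    h.update(tool_version.encode())
    for name in sorted(inputs):  # sorted -> independent of insertion order
        h.update(name.encode())
        h.update(inputs[name])
    return h.hexdigest()
```

Any undeclared input (system clock, network, ambient env vars) breaks the "identical key implies identical output" guarantee, which is exactly why hermeticity enables rather than prevents caching.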

[+] rafaelmn|8 months ago|reply
> If anything CI got a lot slower over time as people tried to make builds fully hermetic (so no inter-run caching), and move them from on-prem dedicated hardware to expensive cloud VMs with slow IO, which haven't got much faster over time.

I am guesstimating (based on previous experience self-hosting the runner for macOS builds) that the project I am working on could get something like 2-5x pipeline performance at 1/2 the cost just by using self-hosted runners on bare-metal rented machines like Hetzner. Maybe I am naive, and I am not the person who would be responsible for it, but having a few bare-metal machines you can use in the off hours to run regression tests, for less than you are paying the existing CI runner just for builds, and that speed everything up massively, seems like a pure win for relatively low effort. Sure, everyone already has stuff on their plate and would rather pay an external service to do it, but TBH once you have this kind of compute handy you will find uses for it anyway, and you end up just doing things efficiently. And knowing how to deal with bare metal and utilize this kind of compute is a generally useful skill, but I rarely encounter people enthusiastic about making this kind of move. It's usually: hey, let's move to this other service that has slightly cheaper instances and a proprietary caching layer, so that we can get locked into their CI crap.

It's not like these services have zero downtime, are bug-free, or require no integration effort. I just don't see why going bare metal is always such a taboo topic, even for simple stuff like builds.

[+] daxfohl|8 months ago|reply
There are a couple of mitigating considerations:

1. As implementation phase gets faster, the bottleneck could actually switch to PM. In which case, changes will be more serial, so a lot fewer conflicts to worry about.

2. I think we could see a resurrection of specs like TLA+. Most engineers don't bother with them, but I imagine code agents could quickly create them, verify the code is consistent with them, and then require fewer full integration tests.

3. When background agents are cleaning up redundant code, they can also clean up redundant tests.

4. Unlike human engineering teams, I expect AIs to work more efficiently on monoliths than with distributed microservices. This could lead to better coverage on locally runnable tests, reducing flakes and CI load.

5. It's interesting that even as AI increases efficiency, that increased velocity and sheer amount of code it'll write and execute for new use cases will create its own problems that we'll have to solve. I think we'll continue to have new problems for human engineers to solve for quite some time.

[+] TechDebtDevin|8 months ago|reply
LLM making a quick edit, <100 lines... sure. Asking an LLM to rubber-duck your code, sure. But integrating an LLM into your CI is going to end up costing you hundreds of hours of productivity on any large project. That, or you'll spend half the time you should be spending learning to write your own code on dialing down context sizing and prompt accuracy.

I really, really don't understand the hubris around LLM tooling, and I don't see it catching on outside of personal projects and small web apps. These things don't handle complex systems well at all; you would have to put a gun in my mouth to let one of these things work on an important repo of mine without any supervision... And if I'm supervising the LLM, I might as well do it myself, because I'm going to end up redoing 50% of its work anyway.

[+] piva00|8 months ago|reply
I haven't worked in places using off-the-shelf/SaaS CI in more than a decade so I feel my experience has been quite the opposite from yours.

We always worked hard to make the CI/CD pipeline as fast as possible. I personally worked on those kinds of projects at 2 different employers as an SRE: first a smaller 300-person shop where I was responsible for all their infra needs (CI/CD, live deployments, later migrating to k8s when it became somewhat stable, at least enough for the workloads we ran, but still in its beta days); then at a different employer, some 5k+ strong, working on improving the CI/CD setup, which used Jenkins as a backend, but we developed a completely different shim on top for developer experience while also working on a bespoke worker scheduler/runner.

I haven't experienced a CI/CD setup that takes longer than 10 minutes to run in many, many years. I was quite surprised reading your comment, and feel spoiled that I haven't felt this pain for more than a decade; I didn't really expect it was still an issue.

[+] grogenaut|8 months ago|reply
Before cars, people spent little on petroleum products, motor oil, gasoline, or mechanics. Now they do. That's how systems work. If you wanna go faster, you need better roads, traffic lights, on-ramps, etc. But you're still going faster.

Use AI to solve the CI bottlenecks, or build more features that earn more revenue that buys more CI boxes. It's the same as if you added 10 devs, which effectively you are with AI, so why wouldn't some of the dev support costs go up?

Are you not in a place where you can make an efficiency argument to get more CI capacity or optimize what's there? What does a CI box cost?

[+] pamelafox|8 months ago|reply
For Python apps, I've gotten good CI speedups by moving over to the astral.sh toolchain, using uv for the package installation with caching. Once I move to their type-checker instead of mypy, that'll speed the CI up even more. The playwright test running will then probably be the slowest part, and that's only in apps with frontends.

(Also, Hi Mike, pretty sure I worked with you at Google Maps back in early 2000s, you were my favorite SRE so I trust your opinion on this!)

[+] droopyEyelids|8 months ago|reply
In most companies the CI/Dev Tools team is a career dead end. There is no possibility to show a business impact; it's just a money pit that leadership can't/won't understand (and if they do start to understand it, then it becomes _their_ money pit, which is a career dead end for them). So no one who has their head on straight wants to spend time improving it.

And you can't even really say it's a short-sighted attitude. It definitely is from a developer's perspective, and maybe it is for the company, if dev time is what decides the success of the business overall.

[+] hansvm|8 months ago|reply
- Just spin up more test instances. If the AI is as good as people claim then it's still way cheaper than extra programmers.

- Write fast code. At $WORK we can test roughly a trillion things per CPU physical core year for our primary workload, and that's in a domain where 20 microsecond processing time is unheard of. Orders of magnitude speed improvements pay dividends quickly.

- LLMs don't care hugely about the language. Avoid things like Rust, where compile times are always a drag.

- That's something of a strange human problem you're describing. Once the PR is reviewed, can't you just hit "auto-merge" and go to the next task, only circling back if the code was broken? Why is that a significant amount of developer time?

- The thing you're observing is something every growing team witnesses. You can get 90% of the way to what you want by giving the build system a greenfield re-write. If you really have to run 100x more tests, it's worth a day or ten sanity checking docker caching or whatever it is your CI/CD is using. Even hermetic builds have inter-run caching in some form; it's just more work to specify how the caches should work. Put your best engineer on the problem. It's important.

- Be as specific as possible in describing test dependencies. The fastest tests are the ones which don't run.

- Separate out unit tests from other forms of tests. It's hard to write software operating with many orders of magnitude of discrepancies, and tests are no exception. Your life is easier if conceptually they have a separate budget (e.g., continuous fuzz testing or load testing or whatever). Unit tests can then easily be fast enough for a developer to run all the changed ones on precommit. Slower tests are run locally when you think they might apply. The net effect is that you don't have the sort of back-and-forth with your CI that actually causes lost developer productivity because the PR shouldn't have a bunch of bullshit that's green locally and failing remotely.
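The "fastest tests are the ones which don't run" point above can be made concrete with a dependency map. A toy sketch, where the map would normally come from the build graph or coverage data rather than being hand-written:

```python
def select_tests(changed_files, test_deps):
    """Return only the tests whose declared dependencies overlap the change.

    `test_deps` maps a test name to the set of source files it depends on.
    Tests with no dependency on the changed files cannot have been broken
    by them (assuming the map is complete), so they are skipped entirely.
    """
    changed = set(changed_files)
    return sorted(t for t, deps in test_deps.items() if deps & changed)
```

The safety of this scheme rests entirely on the completeness of the dependency map, which is why build systems that track dependencies precisely (rather than by convention) make it practical.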

[+] SoftTalker|8 months ago|reply
Wow, your story gives me flashbacks to the 1990s when I worked in a mainframe environment. Compile jobs submitted by developers were among the lowest priorities. I could make a change to a program, submit a compile job, and wait literally half a day for it to complete. Then I could run my testing, which again might have to wait for hours. I generally had other stuff I could work on during those delays but not always.
[+] mrkeen|8 months ago|reply
> Maybe I've just got unlucky in the past, but in most projects I worked on a lot of developer time was wasted on waiting for PRs to go green. Many runs end up bottlenecked on I/O or availability of workers

No, this is common. The devs just haven't grokked dependency inversion. And I think the rate of new devs entering the workforce will keep it that way forever.

Here's how to make it slow:

* Always refer to "the database". You're not just storing and retrieving objects from anywhere - you're always using the database.

* Work with statements, not expressions. Instead of "the balance is the sum of the transactions", execute several transaction writes (to the database) and read back the resulting balance. This will force you to sequentialise the tests (simultaneous tests would otherwise race and cause flakiness) plus you get to write a bunch of setup and teardown and wipe state between tests.

* If you've done the above, you'll probably need to wait for state changes before running an assertion. Use a thread sleep, and if the test is ever flaky, bump up the sleep time and commit it if the test goes green again.
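The sleep-based pattern in that last point has a standard fix: poll for the condition with a deadline instead of sleeping a fixed amount. A minimal sketch:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.01):
    """Poll `condition` until it returns True or the deadline passes.

    Unlike a fixed sleep, this returns as soon as the state change lands,
    so the happy path stays fast and the timeout only bounds the worst
    case instead of being paid on every run.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one last check at the deadline
```

Bumping `timeout` on a flaky test is then harmless: it slows down only the genuinely failing runs, not every green one.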

[+] pplonski86|8 months ago|reply
We write and run tests to build trust in our code changes. But maybe tests aren’t the only way to achieve that trust.

When I was younger, I had a friend who was a senior software engineer. I remember he would make changes to production systems without even running the application locally or executing any tests, and yet his changes never failed. The team had a high level of trust in all his code changes.

[+] theptip|8 months ago|reply
This might end up being less of an issue.

If I am coding, I want to stay in the flow and get my PR green asap, so I can continue on the project.

If I am orchestrating agents, I might have 10 or 100 PRs in the oven. In that case I just look at the ones that finish CI.

It's gonna be less of a flow, or at least a different kind of flow, IMO. (Until you can just crank out design docs and whiteboard sessions and have the agents fully autonomously get their work green.)

[+] TheDudeMan|8 months ago|reply
This is because coders didn't spend enough time making their tests efficient. Maybe LLM coding agents can help with that.
[+] dmitrycube|8 months ago|reply
100% agree.

One of the core premises of what we've been trying to do with our product (Testkube) is to decouple testing from CI/CD systems. Those were never built with testing in mind, let alone scaling to hundreds or thousands of efficient executions. We have a lightweight open-source agent which lives inside a K8s cluster; tests are stored as CRDs cloned from your Git and executed as K8s jobs. Create whatever heuristics or parallelization are necessary, leverage the power of K8s to dynamically scale compute resources as needed, trigger executions by whatever means (GitHub Actions, K8s events, a schedule, etc.), and do it on your existing infra.

Admittedly, we don't solve the test creation problem. If there are new tools out there which could automagically generate tests along with code, please share.

[+] Art9681|8 months ago|reply
Any modern MacBook can run those tests 100x faster than the crappy cloud runners most companies use. You can also configure runners that run locally and get the benefit of those speed gains. So all of this is really a business and technical problem that is solved for those who want to solve it. It can be solved very cheap, or it can be solved very expensive. Regardless, it's precisely those types of efficiency gains that motivate companies to finally do something about it.

And if not, then enjoy being paid waiting for CI to go green. Maybe it's a reminder to go take a break.

It will be worse when the process is super optimized and the expectation changes. So now instead of those 2 PRs that went to prod today because everyone knows CI takes forever, you'll be expected to push 8 because in our super optimized pipeline it only takes seconds. No excuses. Now the bottleneck is you.

[+] drzaiusx11|8 months ago|reply
The nice part about most CI workloads is that they can almost always be split up and executed in parallel. Make sure you're utilizing every core on every CI worker and that your worker pools are appropriately sized for the workload. Use spot instances and add autoscaling where it makes sense. No one should be waiting more than a few minutes for a PR build, the exception being compile time, which can vary significantly between languages. I have a couple of projects that are stuck on ancient compilers because of CPU architecture and C variant, so those will always be a dog without effort to move to something better. YMMV.
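Splitting a suite across workers can be as simple as deterministic sharding by hash, so that every worker independently computes the same partition with no coordination. A sketch (real runners often shard by recorded timings instead, to balance wall-clock time rather than test count):

```python
import hashlib

def shard_tests(tests, shard_index, shard_count):
    """Deterministically assign each test to one of `shard_count` workers.

    Hashing the test name means every worker agrees on the partition
    without talking to the others; worker i just runs its own slice.
    """
    def bucket(name):
        return int(hashlib.md5(name.encode()).hexdigest(), 16) % shard_count
    return [t for t in tests if bucket(t) == shard_index]
```

Because the assignment is a pure function of the test name, adding a worker only requires changing `shard_count`, at the cost of reshuffling which worker runs what.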
[+] ASinclair|8 months ago|reply
Call me a skeptic but I do not believe LLMs are significantly altering the time between commits so much that CI is the problem.

However, improving CI performance is valuable regardless.

[+] trhway|8 months ago|reply
>There's no point having an agent that can write code 100x faster than a human if every change takes an hour to test.

Testing every change incrementally is a vestige of code being written by humans (and thus of the current approach, where AI helps and/or replaces one given human), in small increments at that, and of failures being analyzed by individual humans, who can keep only a limited number of things/dependencies in their head at once.

[+] mathiaspoint|8 months ago|reply
Good God I hate CI. Just let me run the build automation myself dammit! If you're worried about reproducibility make it reproducible and hash the artifacts, make people include the hash in the PR comment if you want to enforce it.

The amount of time people waste futzing around in eg Groovy is INSANE and I'm honestly inclined to reject job offers from companies that have any serious CI code at this point.

[+] mdnahas|8 months ago|reply
We don’t. We switch to proven-correct code. Languages like Lean, Coq, and Idris allow proofs of correctness for code. The LLM can generate proofs for most of the correctness conditions.
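As a flavor of what "proven-correct" means in practice, here is a toy Lean example (assuming Lean 4 with its standard simp lemmas); once it typechecks, the property holds for every possible input, so there is no test matrix left to run:

```lean
-- Reversing a list never changes its length.
-- The proof is checked by the compiler; if this compiles,
-- the property is established for all lists, not a sample of them.
example (l : List Nat) : l.reverse.length = l.length := by
  simp
```

The open question is whether LLMs can scale this from toy lemmas to the messy correctness conditions of real systems, where writing the specification is often the hard part.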

CI is still needed for performance, UI testing, etc. but it can have a much smaller role than it does now.

[+] blitzar|8 months ago|reply
Yet now that I have added an LLM workflow to my coding, the value of my old and mostly useless workflows is 10x'd.

Git checkpoints, code linting and my naive suite of unit and integration tests are now crucial to my LLM not wasting too much time generating total garbage.

[+] elbear|8 months ago|reply
CI should just run on each developer's machine. As in, each developer should have a local instance of the CI setup in a VM or a docker container. If tests pass, the result is reported to a central server.
[+] vjerancrnjak|8 months ago|reply
It’s because people don’t know how to write tests. All of the “don’t do N select queries in a for loop” comments made in PRs are completely ignored in tests.

Each test can output many db queries. And then you create multiple cases.

People don’t even know how to write code that just deals with N things at a time.

I am confident that tests run slowly because the code that is tested completely sucks and is not written for batch mode.

Ignoring batch mode, tests are most of the time written in a way where test cases run sequentially. Yet attempts to run them concurrently result in flaky tests, because the way you write them and the way you design interfaces does not allow concurrent execution at all.

Another comment: code written by the best AI model still sucks. Anything simple, like a music player with a library of 10,000 songs, is something it can't do. The first attempt will be horrible: no understanding of concurrent metadata parsing, lists showing 10,000 songs at once in the UI being slow, etc.

So AI is just another excuse for people writing horrible code and horrible tests. If it's so smart, try to speed up your CI with it.

[+] rapind|8 months ago|reply
> This will make the CI bottleneck even worse.

I agree. I think there are potentially multiple solutions to this since there are multiple bottlenecks. The most obvious is probably network overhead when talking to a database. Another might be storage overhead if storage is being used.

Frankly another one is language. I suspect type-safe, compiled, functional languages are going to see some big advantages here over dynamic interpreted languages. I think this is the sweet spot that grants you a ton of performance over dynamic languages, gives you more confidence in the models changes, and requires less testing.

Faster turn-around, even when you're leaning heavily on AI, is a competitive advantage IMO.

[+] yieldcrv|8 months ago|reply
then kill the CI/CD

these redundant processes are for human interoperability

[+] gdiamos|8 months ago|reply
This sounds like a strawman.

GPUs can do 1 million trillion instructions per second.

Are you saying it’s impossible to write a test that finishes in less than one second on that machine?

Is that a fundamental limitation or an incredibly inefficient test?

[+] true_blue|8 months ago|reply
I tried the playground and got a strange response. I asked for a regex pattern, and the model gave itself a little game-plan, then it wrote the pattern and started to write tests for it. But it never stopped writing tests. It continued to write tests of increasing size until I guess it reached a context limit and the answer was canceled. Also, for each test it wrote, it added a comment about if the test should pass or fail, but after about the 30th test, it started giving the wrong answer for those too, saying that a test should fail when actually it should pass if the pattern is correct. And after about the 120th test, the tests started to not even make sense anymore. They were just nonsense characters until the answer got cut off.

The pattern it made was also wrong, but I think the first issue is more interesting.

[+] mxs_|8 months ago|reply
In their tech report, they say this is based on:

> "Our methods extend [28] through careful modifications to the data and computation to scale up learning."

[28] is Lou et al. (2023), the "Score Entropy Discrete Diffusion" (SEDD) model (https://arxiv.org/abs/2310.16834).

I wrote the first (as far as I can tell) independent from-scratch reimplementation of SEDD:

https://github.com/mstarodub/dllm

My goal was making it as clean and readable as possible. I also implemented the more complex denoising strategy they described (but didn't implement).

It runs on a single GPU in a few hours on a toy dataset.

[+] fastball|8 months ago|reply
ICYMI, DeepMind also has a Gemini model that is diffusion-based[1]. I've tested it a bit and while (like with this model) the speed is indeed impressive, the quality of responses was much worse than other Gemini models in my testing.

[1] https://deepmind.google/models/gemini-diffusion/

[+] mtillman|8 months ago|reply
Ton of performance upside in most GPU-adjacent code right now.

However, is this what arXiv is for? It seems more like marketing their links than research. Please correct me if I'm wrong/naive on this topic.

[+] chc4|8 months ago|reply
Using the free playground link, and it is in fact extremely fast. The "diffusion mode" toggle is also pretty neat as a visualization, although I'm not sure how accurate it is - it renders as line noise and then refines, while in reality presumably those are tokens from an imprecise vector in some state space that then become more precise until it's only a definite word, right?
[+] icyfox|8 months ago|reply
Some text diffusion models use a continuous latent space, but those historically haven't done that well. Most of the ones we're seeing now are trained to predict actual token output that's fed forward into the next timestep. The diffusion property comes from their ability to modify previous timesteps to converge on the final output.
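That refinement loop can be caricatured in a few lines. This is a toy illustration only, not Mercury's actual algorithm: the "model" here is a stub that simply reveals characters of a fixed target, whereas a real dLLM predicts all positions in parallel and re-predicts the still-masked ones each step:

```python
import random

def toy_diffusion_decode(target, steps=4, seed=0):
    """Illustrative unmasking loop: start fully masked, then at each step
    commit a batch of positions, standing in for the positions a real
    model is most confident about. Returns the sequence of intermediate
    states, ending at the fully decoded string.
    """
    rng = random.Random(seed)
    n = len(target)
    state = ["_"] * n
    masked = list(range(n))
    per_step = max(1, n // steps)
    history = ["".join(state)]
    while masked:
        rng.shuffle(masked)  # stand-in for a confidence ordering
        commit, masked = masked[:per_step], masked[per_step:]
        for i in commit:
            state[i] = target[i]
        history.append("".join(state))
    return history
```

The point of the caricature is the shape of the computation: many positions resolve per step instead of one token per step, which is where the speedup over autoregressive decoding comes from.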

I have an explanation about one of these recent architectures that seems similar to what Mercury is doing under the hood here: https://pierce.dev/notes/how-text-diffusion-works/

[+] cavisne|8 months ago|reply
Are there any rules for what can be uploaded to arxiv?

This is a marketing page turned into a PDF. I guess who cares, but could someone upload, say, a Facebook Marketplace listing screenshotted into a PDF?

[+] M4v3R|8 months ago|reply
I am personally very excited about this development. Recently I AI-coded a simple game for a game jam, and half the time was spent waiting for the AI agent to finish its work so I could test it. If instead of waiting 1-2 minutes for every prompt to be executed and implemented I could wait 10 seconds, that would be literally game-changing. I could test 5-10 different versions of the same idea in the time it took me to test one with the current tech.

Of course this model is not as advanced yet for this to be feasible, but so was Claude 3.0 just over a year ago. This will only get better over time I’m sure. Exciting times ahead of us.

[+] gdiamos|8 months ago|reply
I think the LLM dev community is underestimating these models. E.g. there is no LLM inference framework that supports them today.

Yes the diffusion foundation models have higher cross entropy. But diffusion LLMs can also be post trained and aligned, which cuts the gap.

IMO, investing in post training and data is easier than forcing GPU vendors to invest in DRAM to handle large batch sizes and forcing users to figure out how to batch their requests by 100-1000x. It is also purely in the hands of LLM providers.

[+] amelius|8 months ago|reply
Damn, that is fast. But it is faster than I can read, so hopefully they can use that speed and turn it into better quality of the output. Because otherwise, I honestly don't see the advantage, in practical terms, over existing LLMs. It's like having a TV with a 200Hz refresh rate, where 100Hz is just fine.
[+] ceroxylon|8 months ago|reply
The output is very fast but many steps backwards in all of my personal benchmarks. Great tech but not usable in production when it is over 60% hallucinations.
[+] mike_hearn|8 months ago|reply
That might just depend on how big it is/how much money was spent on training. The neural architecture can clearly work. Beyond that catching up may be just a matter of effort.
[+] mseri|8 months ago|reply
Sounds all cool and interesting, however:

> By submitting User Submissions through the Services, you hereby do and shall grant Inception a worldwide, non-exclusive, perpetual, royalty-free, fully paid, sublicensable and transferable license to use, edit, modify, truncate, aggregate, reproduce, distribute, prepare derivative works of, display, perform, and otherwise fully exploit the User Submissions in connection with this site, the Services and our (and our successors’ and assigns’) businesses, including without limitation for promoting and redistributing part or all of this site or the Services (and derivative works thereof) in any media formats and through any media channels (including, without limitation, third party websites and feeds), and including after your termination of your account or the Services. For clarity, Inception may use User Submissions to train artificial intelligence models. (However, we will not train models using submissions from users accessing our Services via OpenRouter.)

[+] armcat|8 months ago|reply
I've been looking at the code on their chat playground, https://chat.inceptionlabs.ai/, and they have a helper function `const convertOpenAIMessages = (convo) => { ... }`, which also contains `models: ['gpt-3.5-turbo']`. I also see in API response: `"openai": true`. Is it actually using OpenAI, or is it actually calling its dLLM? Does anyone know?

Also: you can turn on "Diffusion Effect" in the top-right corner, but this just seems to be an "animation gimmick" right?

[+] mynti|8 months ago|reply
is there a kind of nanogpt for diffusion language models? i would love to understand them better
[+] EigenLord|8 months ago|reply
Diffusion is just the logically optimal behavior for searching massively parallel spaces without informed priors. We need to think beyond language modeling, however, and start to view this in terms of drug discovery etc. A good diffusion model + the laws of chemistry could be god-tier. I think language modeling has the AI community in its grip right now, and they aren't seeing the applications of the same techniques to real-world problems elsewhere.
[+] ianbicking|8 months ago|reply
For something a little different than a coding task, I tried using it in my game: https://www.playintra.win/ (in settings you can select Mercury, the game uses OpenRouter)

At first it seemed pretty competent and of course very fast, but it seemed to really fall apart as the context got longer. The context in this case is a sequence of events and locations, and it needs to understand how those events are ordered and therefore what the current situation and environment are (though there's also lots of hints in the prompts to keep it focused on the present moment). It's challenging, but lots of smaller models can pull it off.

But also a first release and a new architecture. Maybe it just needs more time to bake (GPT 3.5 couldn't do these things either). Though I also imagine it might just perform _differently_ from other LLMs, not really on the same spectrum of performance, and requiring different prompting.

[+] numpad0|8 months ago|reply
Is parameter count published? I'm by no means expert, but failure modes remind me of Chinese 1B class models.