> FastRender may not be a production-ready browser, but it represents over a million lines of Rust code, written in a few weeks, that can already render real web pages to a usable degree
I feel that we continue to miss the forest for the trees. Writing (or generating) a million lines of code in Rust should not count as an achievement in and of itself. What matters is whether those lines build, function as expected (especially in edge cases), and perform decently. As far as I can tell, AI has not yet been demonstrated to be useful at those three things.
SLOC was a bad indicator 20 years ago and it still is today. Don't tell them - once they realize it's a red flag for us, they will just switch to some other metric, because they are fighting for our attention.
To test this system, we pointed it at an ambitious goal: building a web browser from scratch. The agents ran for close to a week, writing over 1 million lines of code across 1,000 files [...]
Despite the codebase size, new agents can still understand it and make meaningful progress. Hundreds of workers run concurrently, pushing to the same branch with minimal conflicts.
The point is that the agents can comprehend the huge amount of generated code and continue to contribute meaningfully to the goal of the project. We didn't know if that was possible; Cursor wanted to find out. Now we have a data point.
Also, a popular opinion on any vibecoding discussion is that AI can help, but only on greenfield, toy, personal projects. This experiment shows that AI agents can work together on a very complex codebase with ambitious goals. Looks like there was a human plus 2,000 agents, in two months. How much progress do you think a project with 2,000 engineers can achieve in the first two months?
> What matters is whether those lines build, function as expected (especially in edge cases) and perform decently. As far as I can tell, AI has not been demonstrated to be useful yet at those three things.
They did build. You can give it a try. They did function as expected. How many edge cases would you like it to pass? Perform decently? How could you tell if you didn't try?
simonw, I find it almost shocking that you had the chance to talk directly with the engineer who built this, and even when he directly said things that contradict what Cursor's own CEO said, you didn't push back a single iota.
Is the takeaway here that it's fine for a CEO to claim "it even has a custom JS VM!" on Twitter/X, then afterwards the engineer explains: "The JavaScript engine isn’t working yet" and "the agents decided to pause it", and this is all OK? Not a single pushback about this very obvious contradiction? This is just one example of many, and again, since it seems to be repeated: no, no one thinks this was supposed to rival Chrome, what a trite way of trying to change the narrative.
I understand you don't want to spook future potential interviewees, but damn if it didn't feel like you were suddenly trying to defend Cursor here, instead of being curious about what actually happened. It doesn't feel curious; it feels like we're all giving up the fight against unneeded hype, exaggeration and degradation of quality.
What happened to balanced perspectives, where we don't just take people at their word, and where, when we notice something is off, we bring it up?
On a separate note, I actually emailed Wilson Lin too, asking if I could put some questions to him. While he initially accepted, I never actually received any answers. I'm glad you were able to get someone from Cursor to clarify a bit at least, even though we're still just scratching the surface. I just wish we had a bit more integrity in the ecosystem and community, I guess.
1) The CEO said there was a JS engine, but it didn't work.
2) It didn't build when they published the blog post.
Therefore it lacks integrity! Except that it did build (I took Simon's word for it), and building a browser is beside the point anyway - there are a few other big projects listed (Java LSP, Windows 7 emulator, Excel, etc.).
The blog stated:
"Our goal is to understand how far we can push the frontier of agentic coding for projects that typically take human teams months to complete.
This post describes what we've learned from running hundreds of concurrent agents on a single project, coordinating their work, and watching them write over a million lines of code and trillions of tokens."
They didn't set out with the goal of building a browser. It's an experiment in coordinating AI agents in the context of a complex software project, yet you complain that they exaggerated about a JS engine?
The blog post itself is one of the first to describe a large-scale agent experiment - what works, what doesn't. There is very little hype. They didn't say it's game-changing or that Cursor is the best AI tool.
Honestly, grilling him about what the CEO had tweeted didn't even cross my mind.
I wanted to get to the truth of what had actually been built and how. If that contradicts what the CEO said then great, the truth is now out there - anyone is free to call that out and use my video as ammunition.
> We built a browser with GPT-5.2 in Cursor. It ran uninterrupted for one week.
> It's 3M+ lines of code across thousands of files. The rendering engine is from-scratch in Rust with HTML parsing, CSS cascade, layout, text shaping, paint, and a custom JS VM.
> It kind of works! It still has issues and is of course very far from Webkit/Chromium parity, but we were astonished that simple websites render quickly and largely correctly.
This doesn't strike me as the world's most dishonest tweet, though it exaggerates what was achieved. There IS a JS VM in there but it's feature-flagged off. The from-scratch is misleading because there are libraries handling certain aspects - most notably Taffy - which we discussed in the interview.
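For readers who haven't seen the mechanism, "feature-flagged off" in a Rust codebase usually means Cargo feature gating: the code exists in the tree but is compiled out by default. A minimal sketch, assuming a hypothetical `js_vm` feature and module (illustrative names, not FastRender's actual ones):

```rust
// Hypothetical sketch of gating a JS VM behind a Cargo feature.
// In Cargo.toml this would pair with: [features] js_vm = []

#[cfg(feature = "js_vm")]
mod js_vm {
    // The interpreter entry point would live here.
    pub fn eval(_source: &str) {}
}

/// Returns true if a script was actually executed.
#[cfg(feature = "js_vm")]
fn eval_script(source: &str) -> bool {
    js_vm::eval(source);
    true
}

/// Feature disabled: scripts are ignored and the page renders statically.
#[cfg(not(feature = "js_vm"))]
fn eval_script(source: &str) -> bool {
    let _ = source;
    false
}
```

Built without the `js_vm` feature, the VM never makes it into the binary at all, which is consistent with "it exists but the agents paused it".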
I just ran "cloc" and to my surprise it counted 3,036,403 lines (I had thought the 3M figure was an exaggeration), though only 1,658,651 of those were Rust.
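For anyone who wants to cross-check that kind of number without cloc, a per-extension tally can be approximated with a short recursive walk. This is only a sketch: cloc also strips blank lines and comments, so its counts come out lower than a raw line count.

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Raw line count for all files under `dir` with the given extension.
/// Note: cloc additionally excludes blank lines and comments, so a raw
/// count like this will come out somewhat higher than cloc's figures.
fn count_lines(dir: &Path, ext: &str) -> io::Result<usize> {
    let mut total = 0;
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            // Recurse into subdirectories.
            total += count_lines(&path, ext)?;
        } else if path.extension().and_then(|e| e.to_str()) == Some(ext) {
            total += fs::read_to_string(&path)?.lines().count();
        }
    }
    Ok(total)
}
```

Running `count_lines(Path::new("fastrender"), "rs")` over a checkout (the path here is hypothetical) would give the raw Rust total.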
"It kind of works" is a fair assessment IMO!
I don't think "Let's talk about your CEO exaggerating what you built on Twitter" would have added much to the interview.
I did make sure to go over the controversies I thought were material to the project, which is why I dug into the dependencies and talked about QuickJS and Taffy.
Is this the project announced a week or two ago by an AI company claiming they had built a browser but it turned out to be a crappy wrapper around Servo that didn’t even build? Or is this another one? I thought it was Anthropic but this says Cursor.
> Last week Cursor published Scaling long-running autonomous coding, an article describing their research efforts into coordinating large numbers of autonomous coding agents. One of the projects mentioned in the article was FastRender, a web browser they built from scratch using their agent swarms. I wanted to learn more so I asked Wilson Lin, the engineer behind FastRender, if we could record a conversation about the project. That 47 minute video is now available on YouTube. I’ve included some of the highlights below.
It is the same project, but my impression is that HN exaggerated many of the issues with it.
For example:
- They did eventually get it to build. Unknown to me: were the agents working on it able to build it, or were they blindly writing code? The codebase can't have been _that_ broken since it didn't take long for them to get it buildable, and they'd produced demo screenshots before that.
- It had a dependency on QuickJS, but also a homegrown JS implementation; apparently (according to this post) QuickJS was intended as a placeholder. I have no idea which, if either, ended up getting used, though I suspect it may not even matter for the static screenshots they were showing off (the sites may not have required JS to show that).
- Some of the dependencies (like Skia and HarfBuzz) are libraries that other browsers also depend on and are not part of browser projects themselves.
- Other dependencies probably shouldn't have been used, but they only represent a fraction of what a browser has to do.
However…
What I don't know, and seemingly nobody else knows, is how functional the rest of the codebase is. It's apparently very slow and fails to render most websites. But is this more like "lots of bugs, but a solid basis", or is it more like "cargo-culted slop; even the stuff that works only works by chance"? I hope someone investigates.
It took 2M years for the monkeys to produce typewriters and Shakespeare. Now the task is to make monkeys which can do the same in many orders of magnitude shorter time.
The reaction to this would have been different two or three years ago, but it looks extremely lame when you open Hacker News in January 2026 and this is the kind of thing a tech company is trying to persuade you is exciting or useful.
You've heard about what people are doing in the medical industry. Using AI to accelerate diagnosis and analysis of biological material. In astronomy it's showing us things that no human had ever seen before. You hear about all these things changing the world at large and the smaller worlds of individual people and families.
Then you look at the actual IT industry and we've got... some premade libraries duct taped together into a crappy browser that barely works. Of course when the value of this is compared to the cost, the response is that it's fine because it was never actually intended to be useful in the first place. Well we're actually a step ahead of you there.
The phrase "high on their own supply" describes all the people involved in this very well. I assure you we understand the goal of this project perfectly. It just wasn't a good, worthy, or even interesting goal. The immense amount of resources that went into this should have gone into something better. That's all there is to it.
"Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should."
I'm curious what the energy/environmental/financial impact is of this "research" effort of cobbling together a browser with an AI model that had been trained on the freely available source code of existing browsers.
I can't imagine this browser being used as anything beyond a tinkering or curiosity toy - so was the purpose of the research just to see whether you can run an absurd number of agents simultaneously and produce something that somewhat works?
mejutoco|1 month ago
Company X does not have a production-ready product, but they have thousands of employees. I guess it could be a strange flex about funding, but in general it would be a bad signal.
azornathogron|1 month ago
I think some of these people need to be reminded of the Bill Gates quote about lines of code:
“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.”
simonw|1 month ago
I just had a look to see what Michael Truell had said about the project, here it is: https://x.com/mntruell/status/2011562190286045552
sebzim4500|1 month ago
Yes, but this is a very interesting question IMO.
benatkin|1 month ago
> Any sufficiently complicated AI orchestration system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Gas Town.