The headline as stated is categorically false, but... I think it's pretty salient that a company called "Builder.ai" had only 15 engineers working on actual AI and mostly functioned as an outsourcing intermediary for 500-1,000 engineers (i.e., the "builders"). When it comes to these viral misunderstandings, you kind of reap what you sow.
The AI engineers were based in the UK, and from what I've seen on LinkedIn, many came from top unis. They're probably worth 100x more than the builders. Not to mention their boss, who was a well-known AI figure = $$$
> Builder hired 300 internal engineers and kicked off building internal tools, all of which could have simply been purchased
Dear god, PLEASE hire an actual Enterprise IT professional early in your startup expansion phase. A single competent EIT person (or dinosaur like me) could have - if this story is true - possibly saved the whole startup by understanding what’s immediately needed versus what’s nice-to-have, what should be self-hosted versus what should be XaaS, stitching everything together to reduce silos, and ensuring every cent is not just accounted for but wisely invested in future success.
Even if the rest of your startup isn’t “worrying about the money”, your IT and Finance people should always be worried about the money.
A similar thing happened at Uber before the 2021 re-org. At one point they had three competing internal chat apps, from what I've heard from peers working there, and having previously worked for a vendor of Uber's, I noticed a significant amount of disjointedness in their environment (it seemed very EM-driven, with no overarching product vision).
Ofc, Gergely might have some thoughts about that ;)
Hmm, this is a kid's reasoning: "You hire devs instead of using AI, therefore you are corrupt." More conspiracy theories. Based on what the article says, it was a dev shop like Infosys or any other Indian dev company, and they were working on hundreds of projects.
I interviewed there a few years back. I bailed on the interview within the first fifteen minutes, the first time I’ve ever done that. I told them they’d given me the ick — not my most professional moment, admittedly! But it was an awkward and unpleasant interview.
They spent the first ten minutes of the call predicting the death of software engineering (this was a software engineering interview) and complaining about expensive devs (ahem). I wouldn’t have minded so much if the only demo apps they had on their website weren’t some of the worst, non-native iOS apps I’ve ever seen. Truly awful.
A month or two later I noticed on LinkedIn that a dodgy CTO I’d worked with, who had attempted to avoid paying me (and did avoid paying several colleagues of mine), had joined there too. It felt like a good fit.
Yeah, I have to say, none of this is a surprise to me.
My assumption when the story broke was that the 700 engineers were using various AI tools (Replit, Cursor, ChatGPT, etc.) to create code and documentation and then stitching it all together somewhat manually. Sort of like that original Devin demo, where AI was used at each step but there was a ton of manual intervention along the way, and the final video was edited to make it seem as if the whole thing ran end to end, fully automated, from the initial prompt.
Builder.ai had a totally different flow, but yeah, when boring stories and exciting ones compete to tell the same story, a very large percentage of people will run with the exciting one. It's like the "death tax" in US political history: the US has never had a death tax, but it's way more exciting to call it a death tax than an estate tax. Only now, instead of the media being the primary disseminator of spin, we have people sharing exciting stories on social media instead of boring stories about building an internal Zoom and accounting issues.
Then social animals kick in, likes pour in and more people share. Social media has created a world where an exciting lie can drown out boring truth for a large percentage of people.
I worked with an "AI data vendor" at work where you'd put in a query and "the AI gave you back a dataset", but it usually took 24 hours, so it was obvious they had humans pulling the data. The company still purchased a data plan. It happens; in this case, they did have a unique dataset, though.
I don’t find this article particularly convincing. The main argument seems to be “It couldn’t be real engineers, it would be too slow” but I have no clue what the Builder.ai interface or response times were like. Also it says 10-20min would be too long… kind of? Not really though? Depends on the output. Claude Code has run for quite a while on its own before (I’ve never timed it) but 5-10+min doesn’t shock me. Yes, Claude is giving real-time output but I’ve seen a number of dev tools that don’t (or didn’t, this area is moving fast).
Also, re: hiring outsourced contractors
> However, we didn't anticipate the significant fraud that would ensue
First time? Every experience I have personally had with outsourced contractors has been horrible: bad code quality, high billed hours for low output, language and time-zone barriers, the list goes on. I'm quick to flip the bozo bit on anyone pushing for outsourcing. Engineers are not just cogs in a machine to start with, and outsourced contractors are arguably less useful than current LLM coding tools, IMHO. If you already have to explain things in excruciating detail, you might as well talk to an LLM.
People really want this black box that they can feed money and input into and have full-fledged applications and platforms pop out the other side. It doesn't exist. I have only seen failures with outsourcing on this front, and so far LLMs haven't been able to do it either. Don't get me wrong, LLMs are actually useful in my opinion, just not for writing all the code unsupervised or "vibe coding".
It's common sense: you either believe what you read on Bernhard Engelbrecht's Twitter account (the same crypto influencer who scammed startup founders out of thousands of dollars), or you trust what's published on The Pragmatic Engineer blog, whose author actually read Bernhard's tweet and spoke to the people who built the tech.
> I don't find this article particularly convincing
I think you missed the point of the article. It's saying: this is what the conspiracy theorists want me to believe but it doesn't add up, so I'm going to pick up the phone and call the people who built it.
At that point, it's engineers talking to engineers. And the post is the outcome of that conversation.
I always knew this story was fake. Even if you have a trillion expert developers it would still be impossible to get fast enough answers to "Fake an LLM". Humans obviously aren't _parallelizable_ like that.
The accusation I heard was these developers they hired were essentially prompt wrangling behind the scenes using other AI services to make Builder seem better than it really was.
The original story doesn't make any sense. How would you fake an "AI" agent coding by using people on the other side? Wouldn't it be... obvious? People cannot type code that fast.
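The typing-speed point is easy to ballpark. A quick sketch, where every rate is my own rough illustrative assumption (not a measurement of Builder.ai or any particular model):

```python
# Back-of-envelope: could a human plausibly fake an LLM's streaming
# output in real time? All figures below are assumed, not measured.

HUMAN_TYPING_CPM = 200   # chars/minute of sustained, correct code typing (assumed)
LLM_STREAM_TPS = 40      # tokens/second for a typical hosted LLM (assumed)
CHARS_PER_TOKEN = 4      # common rule of thumb for English/code

llm_cps = LLM_STREAM_TPS * CHARS_PER_TOKEN   # chars/second from the model
human_cps = HUMAN_TYPING_CPM / 60            # chars/second from a human

print(f"LLM stream: ~{llm_cps:.0f} chars/s, human: ~{human_cps:.1f} chars/s")
print(f"A human would be roughly {llm_cps / human_cps:.0f}x too slow to fake it live")
```

Even with generous numbers for the human and stingy ones for the model, the gap is more than an order of magnitude, which is the commenter's point: live human impersonation of a streaming LLM would be visibly, comically slow.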
What's your non-snarky theory about how this could possibly be true?
I tend to trust Gergely Orosz (the writer of The Pragmatic Engineer). He validates sources and has a good track record of reporting on the European tech scene and engineering management.
His blog and newsletter are both fairly popular on HN.
The deep-seated hate for Indians (and among them, Hindus) has been going on unchecked in the West for many hundreds of years. That's precisely why such fake news goes viral so quickly.
Hell, when the woke "bleeding-heart" academics are the leading voices behind this hate festival, you know there's something deeply wrong.
I was so shocked by the things "South-Asia depts." do in the US that it's hard not to consider them to be in the same bag as the medieval religious nuts, pagan-hunting padre "saints" and "race-science pioneers".
I don't believe that their business entirely depended on 700 actual humans, just as much as I don't believe that to be true for the Amazon store. However, both probably relied on humans in the loop, which is not sustainable at scale.
LLMs are all fake AI. As the recently released Apple study demonstrates, LLMs don't reason; they just pattern-match. That's not "intelligence", however you define it, because they can only solve things that are already within their training set.
In this case, it would have been better for the AI industry if it had been 700 programmers, because then the rest of the industry could have argued that the utter trash code Builder.ai generated was the result of human coders spending a few minutes haphazardly typing out random code, and not the result of a specialty-trained LLM.
> because they can only solve things that are already within their training set
I just gave up on using SwiftUI for a rewrite of a backend dashboard tool.
The LLM didn't give up. It kept suggesting wilder and less stable ideas, until I realized that this was a rabbit hole full of misery and went back to UIKit.
It wasn't the LLM's fault. SwiftUI just isn't ready for the particular functionality I needed, and I guess a day of watching ChatGPT get more and more desperate saved me a lot of time.
But the LLM didn't give up, which is maybe ot-nay oo-tay ight-bray.
https://despair.com/cdn/shop/files/stupidity.jpg
>As the recently released Apple study demonstrates, LLMs don't reason, they just pattern match
Hold on a minute, I was under the impression that "reasoning" was just a marketing buzzword, the same as "hallucinations", because how tf did anyone expect GPUs to "reason" and "hallucinate" when even neurology/psychology don't have strict definitions of those processes?
> As the recently released Apple study demonstrates, LLMs don't reason
Where is everyone getting this misconception? I have seen it several times. First off, the study doesn't even try to assess whether or not these models use "actual reasoning"; that's outside its scope. They merely examine how effective thinking/reasoning _is_ at producing better results. They found that reasoning does indeed improve performance, but the crucial result is that it only improves performance up to a certain difficulty cliff, at which point thinking makes no discernible difference due to a model collapse of sorts.
It's important to read the papers you're using to champion your personal biases.
> because they can only solve things that are already within their training set.
That is just plain wrong, as anybody who has spent more than 10 minutes with an LLM within the last 3 years can attest. Give it a try, especially if you care to have an opinion on them. Ask an absurd question (that can, in principle, be answered) that nobody has asked before and see how it generalizes. The hype is real.
I'm interested in which study you're referring to, because I'd like to see their methods and what they actually found.
> As the recently released Apple study demonstrates
The Apple study that did Towers of Hanoi and concluded that giving up when the answers would have been too long to fit in the output window was a sign of "not reasoning"?
https://xcancel.com/scaling01/status/1931783050511126954
I mean, on that basis, anyone who ever went "TL;DR" is also demonstrating that humans don't reason.
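For scale: the minimal Tower of Hanoi solution doubles with every extra disk, so the full move list outgrows any output window quickly. A back-of-envelope sketch (the tokens-per-move cost and the output-window size are my own rough assumptions, not numbers from the paper):

```python
# Tower of Hanoi needs 2^n - 1 moves for n disks. If printing one move
# costs roughly 10 output tokens (assumed), the full solution blows
# past a realistic output-token limit (assumed ~64k) at modest n.

def hanoi_moves(n: int) -> int:
    """Minimum number of moves to solve Tower of Hanoi with n disks."""
    return 2**n - 1

TOKENS_PER_MOVE = 10      # assumed
OUTPUT_WINDOW = 64_000    # assumed

for n in (5, 10, 15, 20):
    tokens = hanoi_moves(n) * TOKENS_PER_MOVE
    verdict = "fits" if tokens <= OUTPUT_WINDOW else "exceeds window"
    print(f"n={n:2d}: {hanoi_moves(n):>9,} moves -> ~{tokens:>10,} tokens ({verdict})")
```

Under these assumptions, somewhere past a dozen or so disks, truncating or giving up is the expected behavior regardless of whether the model "reasons".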
Tempted to say there was a bit of corruption here; crazy decision. Like someone had connections to the contractor providing all those devs.
OTOH, they were an "app builder" company. Maybe they really wanted to dogfood.
https://news.ycombinator.com/item?id=44169759
(Builder.ai Collapses: $1.5B 'AI' Startup Exposed as 'Indians'?, 367 points, 267 comments)
- Proven: Builder.ai collapsed after fabricating revenue.
- Unsubstantiated: The rumour that 700 devs were the chatbot is false, not backed by evidence or insiders.
- Marketing vs. reality: They marketed features as "AI-assisted", not AI-generated, two very different things.
- Bottom line: The real scandal is financial fraud, not some fake-AI front.
https://analyticsindiamag.com/ai-features/sachin-duggal-spea...
https://www.wsj.com/articles/ai-startup-boom-raises-question...
https://techcrunch.com/2019/11/25/engineer-ai-launches-its-b...
Did they really do this, or did they customize Jira schemas and workflows, for example?
The "700 engineers faking AI" claim seems to have been sloppy[0] reasoning by an influencer, which spread like wildfire.
[0] I won't attribute malice here, but this version was certainly more interesting than the truth
(The Apple paper has had many serious holes poked in it.)
The "AI isn't really intelligence" argument is so tired now it has a whole Wikipedia page about it: https://en.m.wikipedia.org/wiki/AI_effect
> That's not "intelligence" however you define it because they can only solve things that are already within their training set.
This is proven untrue by, amongst other things, looking at them playing chess. They can and do play moves not found in the training data: https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gp...