
falloutx | 19 days ago

I am becoming more and more convinced that AI can't be used to make something better than what could have been built before AI.

You never needed 1000s of engineers to build software anyway; Winamp & VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product. And now with AI that might be even harder to avoid. This would mean there would be 1000s of do-everything websites in the future in the best case, or billions of apps that do one thing terribly in the worst case.

The percentage of good, well-planned, consistent, and coherent software is going to approach zero in both cases.

discuss


cedws|19 days ago

I’m finding that the code LLMs produce is just average. Not great, not terrible. Which makes sense: the model is basically a complex representation of the average of its training data, right? If I want what I consider ‘good code’ I have to steer it.

So I wouldn’t use LLMs to produce significant chunks of code for something I care about. And publishing vibe coded projects under my own GitHub user feels like it devalues my own work, so for now I’m just not publishing vibe coded projects. Maybe I will eventually, under a ‘pen name.’

rybosworld|19 days ago

We've gone from "it's glorified auto-complete" to "the quality of working, end-to-end features is average" in just ~2 years.

I think it goes without saying that they will be writing "good code" in short order.

I also wonder how much of this "I don't trust them yet" viewpoint is coming from people who are using agents the least.

Is it rare that AI one-shots code that I would be willing to raise as a PR with my name on it? Yes, extremely so (almost never).

Can I write a more-specified prompt that improves the AI's output? Also yes. And the amount of time/effort I spend iterating on a prompt, to shape the feature I want, is decreasing as I learn to use the tools better.

I think the term prompt-engineering became loaded to mean "folks who can write very good one-shot prompts". But that's a silly way of thinking about it imo. Any feature with moderate complexity involves discovery. "Prompt iteration" is more descriptive/accurate imo.

dcre|19 days ago

People often describe the models as averaging their training data, but even for base models predicting the most likely next token this is imprecise and even misleading, because what is most likely is conditional on the input as well as what has been generated so far. So a strange input will produce a strange output — hardly an average or a reversion to the mean.

On top of that, the models people use have been heavily shaped by reinforcement learning, which rewards something quite different from the most likely next token. So I don’t think it’s clarifying to say “the model is basically a complex representation of the average of its training data.”

The average thing points to the real phenomenon of underspecified inputs leading to generic outputs, but modern agentic coding tools don’t have this problem the way the chat UIs did because they can take arbitrary input from the codebase.
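The point about outputs being conditional rather than a reversion to the mean can be illustrated with a deliberately crude sketch. This is not how LLMs work internally; it just shows that even in a toy bigram model, the "most likely next token" depends on what came before, so different inputs produce different outputs rather than one global average:

```python
# Toy illustration: the most likely next word is conditional on the
# current word, not a single corpus-wide average.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog ran to the cat".split()

# Count next-word frequencies conditioned on the current word.
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def most_likely_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# The globally most common word is "the", but conditioning on the
# input changes the prediction entirely:
print(Counter(corpus).most_common(1)[0][0])  # the
print(most_likely_next("the"))               # cat
print(most_likely_next("dog"))               # ran
```

A strange conditioning context ("dog") yields a prediction ("ran") that no unconditional average would ever produce, which is the crux of the objection above.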

falloutx|19 days ago

I was literally creating a 2nd account on github today for this purpose.

FieryTransition|19 days ago

And Unix was mainly made by two people. It's astounding that, as I get older, even tech managers don't know "The Mythical Man-Month" and how software production generally scales.

MyHonestOpinon|19 days ago

I do agree with this idea in the sense that companies keep trying to add people to projects to do more things or complete projects sooner, which ends up wasting a lot of effort. A more cost-conscious way is to have smaller teams and give them more time to explore better approaches.

thunky|19 days ago

Sorry but 99.999% of developers could not have built Unix. Or Winamp.

Managers are crossing their fingers that devs they hire are no worse than average, and average isn't very good.

jasode|19 days ago

> Winamp & VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product.

Many types of software have essential complexity and minimal features that still require hundreds/thousands of software engineers. Having just 4 people is simply not enough man-hours to build the capabilities customers desire.

Think of complex software like 3D materials modeling and simulation, or logistics software like factory and warehouse planning. Even the Linux kernel and userspace have thousands of contributors, and the baseline features (drivers, sandbox, GUI, etc.) that users want from a modern operating system cannot be done by a 4-person team.

All that said, there are lots of great projects with tiny teams. SQLite is 3 people. Foobar2000 is one person. ShareX, the screenshot tool, is I think 1 developer in Turkey.

misiek08|19 days ago

I will use my right to disagree. Maybe not 4 people everywhere, but if you have a product with a well-thought-out feature set, you create those features and then you really don't need 1000s of people just to keep it alive and add features one by one.

I am - of course - talking about the perfect approach, with everyone focused on not f**ing it up ;)

Lalabadie|19 days ago

But big projects are where the quality of LLM contributions falls the most, and they require (continuous, exhausting, thankless) supervision!

prng2021|19 days ago

“You never needed 1000s of engineers to build software anyway”

What is the point of even mentioning this? We live in reality. In reality, there are countless companies with thousands of engineers making each piece of software. Outside of reality, yes, you can talk about a million hypothetical situations. Cherry-picking rare examples like Winamp does nothing but provide an exception, which, yes, also exists in the real world.

caminante|19 days ago

IMHO, it's flamebait. Your quoted text is provably false -- e.g., MSFT Windows, AWS, etc. -- and it appeals to the idealism of lean project teams.

level09|19 days ago

I see it differently.

For me, AI has been less about building more/faster, and more about unlocking potential that was always out of reach.

Knowledge gaps that would've taken years to fill, new angles I wouldn't have thought to explore on my own. It's not that it makes more software; it just makes you more capable of tackling things you couldn't before.

phendrenad2|19 days ago

You need 1000 engineers because you have poor engineering leadership, or no engineering leadership, and engineering is a black hole that management shovels money into where it falls directly onto a huge plane of middle managers who do the best they can with their limited power and understanding. Meanwhile your sales team is writing specifications for the next version of the product, which they already promised to customers, and they hired an outside consultant to transform it into 500 spec documents written in damn near legalese, which will appear one day on the lead engineer's desk with no foreshadowing. It turns out that throwing more engineers at the problem helps here because you'll run out of tasks to assign to all of them and some will roam the halls and accidentally connect distributed knowledge back together.

pousada|19 days ago

> percentage of good, well planned, consistent and coherent software is going to approach zero

So everything stays exactly the same?

falloutx|19 days ago

I get this comment every time I say this, but there are levels to this. What you think is bad today could be considered artisan when things become worse than today.

gosub100|19 days ago

No, wealth gets more concentrated. Fewer people on the team will be able to afford a comfortable lifestyle and save for retirement. More will edge infinitesimally closer to "barely scraping by".

Chance-Device|19 days ago

Underrated comment. The reason that everyone complains about code all the time is because most code is bad, and it’s written by humans. I think this can only be a step up. Nailing validation is the trick now.

co_king_3|19 days ago

> So everything stays exactly the same?

No, we get applications so hideously inefficient that your $3000 developer machine feels like it's running a Pentium II with 256 MB of RAM.

We get software that's as slow as it was 30 years ago, for no reason other than our own arrogance and apathy.

zsoltkacsandi|19 days ago

Completely agree. There is a common misunderstanding/misconception in product development that more features = a better product.

I’ve never seen a product/project manager questioning themselves: does this feature add any value? Should we remove it?

In agile methodologies we measure the output of the developers. But we don’t care whether that output carries any meaningful value for the end user/business.

9rx|19 days ago

> I’ve never seen a product/project manager questioning themselves: does this feature add any value? Should we remove it?

To be fair, it is a hard question to contend with. It is easier to keep users who don't know what they're missing happier than users who lost something they now know they want. Even fixing bugs can sometimes upset users who have come to depend on the bug as a feature.

> In agile methodologies we measure the output of the developers.

No we don't. "Individuals and interactions over processes and tools". You are bound to notice a developer with poor output as you interact with them, but explicitly measure them you will not. Remember, agile is all about removing managers from the picture. Without managers, who is even going to do the measuring?

There are quite a few pre-agile methodologies out there that try to prepare a development team to operate without managers. It is possible you will find measurement in there: measuring to ensure that the people can handle working without managers. Even agile itself recognizes in the 12 principles that it requires a team of special people to be able to handle agile.

graemep|19 days ago

It's also about marketing. People buy because of features.

The people making the buying decisions may not have a good idea of what maximises "meaningful value" but they compare feature sets.

antupis|19 days ago

It’s more about operational resilience and serving customers than product development. If you run an early-WhatsApp-like organisation, just 1 person leaving can create awful problems. The same goes for serving customers: big clients especially need all kinds of reports and resources that a skeleton organisation cannot provide.

sdf2erf|19 days ago

What you're pointing at is the trade-off between concentration of understanding vs fragmented understanding across more people.

The former is always preferred in the context of product development, but it poses a key-person risk. Apple in its current form is a representation of this: Steve did enough work to keep the company going for a decade after his death. Now it's sort of lost on where to go next. But on the flip side, look at its market cap today vs 2000.

imiric|19 days ago

Wait, surely adding 10x more agents to my project will speed up development, improve the end product, and make me more productive by that same proportion, right?

I will task a few of them to write a perfectly detailed spec up front, break up the project into actionable chunks, and then manage the other workers into producing, reviewing, and deploying the code. Agents can communicate and cooperate now, and hallucination is a solved problem. What could go wrong?

Meanwhile, I can cook or watch a movie, and occasionally steer them in the right direction. Now I can finally focus on the big picture, instead of getting bogged down in minutiae. My work is so valuable that no AI could ever replace me.

/s

kvgr|19 days ago

I just built a programming language in a couple of hours, complete with an interpreter, using Claude Code. I know nothing about designing and implementing programming languages: https://github.com/m-o/MoonShot.

falloutx|19 days ago

Yes, my point is that it was possible to build it before AI, and with much less effort than people imagine. People in college build an interpreter in less than a couple of weeks anyway, and that probably has more utility.

Consider two scenarios:

1) I try to build an interpreter. I go and read some books, understand the process, and build it in 2 weeks. Result: I have a toy interpreter. I understand said toy interpreter. I learnt how to do it, learnt ideas in the field, and applied my knowledge practically.

2) I try to build an interpreter. I go and ask Claude to do it. It spits out something which works. Result: I have a black-box interpreter. I don't understand said interpreter. I didn't build any skills in building it. Took me less than an hour.

The toy interpreter is useless in both scenarios, but in scenario 1 the 2-week effort pays off, while scenario 2 is a vanity project.
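For a sense of scale, the kind of "toy interpreter" scenario 1 describes can be very small. Here's a minimal sketch (illustrative names, not from any real project or textbook): a recursive-descent evaluator for arithmetic with +, -, *, / and parentheses, which is roughly the first chapter of any interpreter book:

```python
# A minimal recursive-descent expression evaluator: the seed of a
# "toy interpreter". Grammar (standard precedence):
#   expr   := term (("+" | "-") term)*
#   term   := factor (("*" | "/") factor)*
#   factor := NUMBER | "(" expr ")"
import re

TOKEN = re.compile(r"\s*(\d+|[()+\-*/])")

def tokenize(src):
    """Split the source string into number and operator tokens."""
    pos, tokens = 0, []
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at position {pos}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

def evaluate(src):
    tokens = tokenize(src)

    def peek():
        return tokens[0] if tokens else None

    def expr():
        value = term()
        while peek() in ("+", "-"):
            op = tokens.pop(0)
            value = value + term() if op == "+" else value - term()
        return value

    def term():
        value = factor()
        while peek() in ("*", "/"):
            op = tokens.pop(0)
            value = value * factor() if op == "*" else value / factor()
        return value

    def factor():
        tok = tokens.pop(0)
        if tok == "(":
            value = expr()
            tokens.pop(0)  # consume the closing ")"
            return value
        return int(tok)

    return expr()

print(evaluate("2 + 3 * (4 - 1)"))  # 11
```

Writing this by hand is exactly the 2-week learning exercise described above (extended with variables, functions, and so on); having it generated skips the part that was the point.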

lietuvis|19 days ago

Took a quick look; this seems like a copy of the Writing an Interpreter in Go book by Thorsten Ball, but just much worse.

Also, why use double equals to mutate variables?

oblio|19 days ago

You built something.

Now comes the hard or impossible part: is it any good? I would bet against it.