falloutx|19 days ago
You never needed 1000s of engineers to build software anyway; Winamp and VLC were built by fewer than four people each. You only needed 1000s of people because the executive vision is always to add more useless junk to each product. And with AI that might be even harder to avoid. In the best case, that would mean 1000s of do-everything websites in the future; in the worst case, billions of apps that each do one thing terribly.
The percentage of good, well-planned, consistent, and coherent software is going to approach zero in either case.
cedws|19 days ago
So I wouldn’t use LLMs to produce significant chunks of code for something I care about. And publishing vibe coded projects under my own GitHub user feels like it devalues my own work, so for now I’m just not publishing vibe coded projects. Maybe I will eventually, under a ‘pen name.’
rybosworld|19 days ago
I think it goes without saying that they will be writing "good code" before long.
I also wonder how much of this "I don't trust them yet" viewpoint is coming from people who are using agents the least.
Is it rare that AI one-shots code that I would be willing to raise as a PR with my name on it? Yes, extremely so (almost never).
Can I write a more-specified prompt that improves the AI's output? Also yes. And the amount of time/effort I spend iterating on a prompt, to shape the feature I want, is decreasing as I learn to use the tools better.
I think the term prompt-engineering became loaded to mean "folks who can write very good one-shot prompts". But that's a silly way of thinking about it imo. Any feature with moderate complexity involves discovery. "Prompt iteration" is more descriptive/accurate imo.
dcre|19 days ago
On top of that, the models people use have been heavily shaped by reinforcement learning, which rewards something quite different from the most likely next token. So I don’t think it’s clarifying to say “the model is basically a complex representation of the average of its training data.”
The average thing points to the real phenomenon of underspecified inputs leading to generic outputs, but modern agentic coding tools don’t have this problem the way the chat UIs did because they can take arbitrary input from the codebase.
falloutx|19 days ago
FieryTransition|19 days ago
caminante|19 days ago
Speaking of myths, they were part of a team of 4-5 early contributors with the benefit of network effects via Bell Labs.[0]
[0] https://en.wikipedia.org/wiki/Unix#History
MyHonestOpinon|19 days ago
thunky|19 days ago
Managers are crossing their fingers that devs they hire are no worse than average, and average isn't very good.
jasode|19 days ago
Many types of software have essential complexity and a minimal feature set that still require hundreds or thousands of software engineers. Just 4 people simply cannot put in enough man-hours to build the capabilities customers desire.
Think of complex software like 3D materials modeling and simulation, or logistics software for factory and warehouse planning. Even the Linux kernel and userspace have thousands of contributors, and the baseline features (drivers, sandboxing, GUI, etc.) that users want from a modern operating system cannot be delivered by a 4-person team.
All that said, there are lots of great projects with tiny teams. SQLite is 3 people. Foobar2000 is one person. ShareX, the screenshot tool, is I think 1 developer in Turkey.
misiek08|19 days ago
I, of course, am talking about a perfect approach, with everyone focused on not f**ing it up ;)
Lalabadie|19 days ago
prng2021|19 days ago
What is the point of even mentioning this? We live in reality, and in reality there are countless companies with thousands of engineers behind each piece of software. Outside of reality, yes, you can talk about a million hypothetical situations. Cherry-picking rare examples like Winamp does nothing but point to an exception, which, yes, also exists in the real world.
caminante|19 days ago
level09|19 days ago
For me AI has been less about building more/faster, and more about unlocking potential that was always out of reach.
Knowledge gaps that would've taken years to fill, new angles I wouldn't have thought to explore on my own. It's not that it makes more software; it just makes you more capable of tackling things you couldn't before.
phendrenad2|19 days ago
pousada|19 days ago
So everything stays exactly the same?
falloutx|19 days ago
gosub100|19 days ago
Chance-Device|19 days ago
co_king_3|19 days ago
No, we get applications so hideously inefficient that your $3000 developer machine feels like it's running on a Pentium II with 256 MB of RAM.
We get software that's as slow as it was 30 years ago, for no reason other than our own arrogance and apathy.
zsoltkacsandi|19 days ago
I’ve never seen a product/project manager ask themselves: does this feature add any value? Should we remove it?
In agile methodologies we measure the output of the developers, but we don’t ask whether that output carries any meaningful value to the end user or the business.
9rx|19 days ago
To be fair, it is a hard question to contend with. It is easier to keep users who don't know what they're missing happier than users who lost something they now know they want. Even fixing bugs can sometimes upset users who have come to depend on the bug as a feature.
> In agile methodologies we measure the output of the developers.
No, we don't. "Individuals and interactions over processes and tools." You are bound to notice a developer with poor output as you interact with them, but you will not explicitly measure them. Remember, agile is all about removing managers from the picture. Without managers, who is even going to do the measuring?
There are quite a few pre-agile methodologies out there that try to prepare a development team to operate without managers. It is possible you will find measurement in there, measurement meant to ensure that the people can handle working without managers. Even agile itself recognizes, in its 12 principles, that it takes a team of special people to be able to handle agile.
graemep|19 days ago
The people making the buying decisions may not have a good idea of what maximises "meaningful value", but they can compare feature sets.
antupis|19 days ago
sdf2erf|19 days ago
The former is always preferred in the context of product development, but it poses a key-person risk. Apple in its current form is a case in point: Steve did enough work to keep the company going for a decade after his death, and now it's somewhat lost on where to go next. But on the flip side, look at its market cap today vs 2000.
imiric|19 days ago
I will task a few of them to write a perfectly detailed spec up front, break up the project into actionable chunks, and then manage the other workers into producing, reviewing, and deploying the code. Agents can communicate and cooperate now, and hallucination is a solved problem. What could go wrong?
Meanwhile, I can cook or watch a movie, and occasionally steer them in the right direction. Now I can finally focus on the big picture, instead of getting bogged down by minutiae. My work is so valuable that no AI could ever replace me.
/s
kvgr|19 days ago
falloutx|19 days ago
Consider two scenarios:
1) I try to build an interpreter. I go and read some books, understand the process, and build it in 2 weeks. Result: I have a toy interpreter, I understand said toy interpreter, and I learned how to do it, picked up ideas in the field, and applied my knowledge practically.
2) I try to build an interpreter. I go and ask Claude to do it. It spits out something that works. Result: I have a black-box interpreter. I don't understand said interpreter. I didn't build any skills in building it. It took me less than an hour.
The toy interpreter is useless in both scenarios, but scenario 1 pays off for the 2-week effort, while scenario 2 is a vanity project.
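For concreteness, here is a minimal sketch of the kind of "toy interpreter" scenario 1 describes: a recursive-descent evaluator for integer arithmetic. The grammar and function names are illustrative assumptions, not from any particular book or from the thread.

```python
import re

def tokenize(src):
    # Split the source into numbers and the operators/parens we support.
    return re.findall(r"\d+|[+*()]", src)

def parse_expr(tokens, pos=0):
    # expr := term ('+' term)*
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":
        rhs, pos = parse_term(tokens, pos + 1)
        value += rhs
    return value, pos

def parse_term(tokens, pos):
    # term := factor ('*' factor)*   (gives '*' higher precedence than '+')
    value, pos = parse_factor(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "*":
        rhs, pos = parse_factor(tokens, pos + 1)
        value *= rhs
    return value, pos

def parse_factor(tokens, pos):
    # factor := NUMBER | '(' expr ')'
    if tokens[pos] == "(":
        value, pos = parse_expr(tokens, pos + 1)
        return value, pos + 1  # skip the closing ')'
    return int(tokens[pos]), pos + 1

def evaluate(src):
    value, _ = parse_expr(tokenize(src))
    return value
```

Working through something this small by hand (tokenizing, precedence, recursion) is exactly the skill-building the 2-week path buys you; an AI can emit it in seconds, but then the structure above stays a black box.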
lietuvis|19 days ago
Also using double equals to mutate variables, why?
oblio|19 days ago
Now comes the hard or impossible part: is it any good? I would bet against it.