top | item 47082336

alphazard | 10 days ago

There's an undertone of self-soothing "AI will leverage me, not replace me", which I don't agree with especially in the long run, at least in software. In the end it will be the users sculpting formal systems like playdoh.

In the medium run, "AI is not a co-worker" is exactly right. The idea of a co-worker will go away. Human collaboration on software is fundamentally inefficient. We pay huge communication/synchronization costs to eek out mild speed ups on projects by adding teams of people. Software is going to become an individual sport, not a team sport, quickly. The benefits we get from checking in with other humans, like error correction and delegation, can all be done better by AI. I would rather have a single human architect (for now) with good taste and an army of agents than a team of humans.

GuB-42|9 days ago

> In the end it will be the users sculpting formal systems like playdoh.

And unless the user is a competent programmer, at least in spirit, it will look like the creation of the 3-year-old next door, not like Wallace and Gromit.

It may be fine, but the difference is that one is only loved by their parents; the other gets millions of people to go to the theater.

Play-Doh gave the power of sculpting to everyone, including small children, but if you don't want to make an ugly mess, you have to be a competent sculptor to begin with, and that involves some fundamentals that do not depend on the material. There is a reason why clay animators are skilled professionals.

The quality of vibe coded software is generally proportional to the programming skills of the vibe coder as well as the effort put into it, like with all software.

loudmax|9 days ago

It really depends what kind of time frame we're talking about.

As far as today's models go, these are best understood as tools to be used by humans. They're only replacements for humans insofar as individual developers can accomplish more with the help of an AI than they could alone, so a smaller team can accomplish what used to require a bigger team. Due to Jevons paradox this is probably a good thing for developer salaries: their skills are now that much more in demand.

But you have to consider the trajectory we're on. GPT went from an interesting curiosity to absolutely groundbreaking in less than five years. What will the next five years bring? Do you expect development to speed up, slow down, stay the course, or go off in an entirely different direction?

Obviously, the correct answer to that question is "Nobody knows for sure." We could be approaching the top of a sigmoid type curve where progress slows down after all the easy parts are worked out. Or maybe we're just approaching the base of the real inflection point where all white collar work can be accomplished better and more cheaply by a pile of GPUs.

Since the future is uncertain, a reasonable course of action is probably to keep your own coding skills up to date, but also get comfortable leveraging AI and learning its (current) strengths and weaknesses.

yieldcrv|9 days ago

so agentic play-doh sculpting

challenge accepted

Tade0|9 days ago

> The benefits we get from checking in with other humans, like error correction, and delegation can all be done better by AI.

Not this generation of AI though. It's a text predictor, not a logic engine - it can't find actual flaws in your code, it's just really good at saying things which sound plausible.

xnorswap|9 days ago

> it can't find actual flaws in your code

I can tell from this statement that you don't have experience with claude-code.

It might just be a "text predictor" but in the real world it can take a messy log file, and from that navigate and fix issues in source.

It can appear to reason about root causes and issues with sequencing and logic.

That might not be what is actually happening at a technical level, but it is indistinguishable from actual reasoning, and produces real world fixes.

weego|9 days ago

And not this or any existing generation of people. We're bad at determining want vs. need, being specific, genericizing our goals into a conceptual framework of existing patterns, and documenting and explaining things in a way that gets to a solid goal.

The idea that the entire top-down processes of a business can be typed into an AI model and out comes a result is, again, a specific type of tech-person ideology that sees humanity as an unfortunate annoyance in the process of delivering a business. The rest of the world sees it the other way round.

afro88|9 days ago

I would have agreed with you a year ago

lpapez|9 days ago

If you only realized how ridiculous your statement is, you never would have stated it.

p-e-w|9 days ago

You’re committing the classic fallacy of confusing mechanics with capabilities. Brains are just electrons and chemicals moving through neural circuits. You can’t infer constraints on high-level abilities from that.

nazgul17|9 days ago

While I agree, if you think that AI is just a text predictor, you are missing an important point.

Intelligence can be born of simple objectives, like next-token prediction. Predicting the next token with the accuracy it takes to answer some of the questions these models can answer requires complex "mental" models.

Dismissing it just because its algorithm is next-token prediction instead of "strengthen whatever circuit lights up" is missing the forest for the trees.

laichzeit0|9 days ago

Absolutely nuts, I feel like I'm living in a parallel universe. I could list several anecdotes here where Claude has solved issues for me in an autonomous way that (for someone with 17 years of software development, from embedded devices to enterprise software) would have taken me hours if not days.

To the naysayers... good luck. No group of people's opinions matters at all. The market will decide.

jatora|9 days ago

[deleted]

ACCount37|9 days ago

Your brain is a slab of wet meat, not a logic engine. It can't find actual flaws in your code - it's just half-decent at pattern recognition.

paulryanrogers|10 days ago

This assumes every individual is capable of succinctly communicating to the AI what they want. And the AI is capable of maintaining it as underlying platforms and libraries shift.

And that there is little value in reusing software initiated by others.

alphazard|10 days ago

> This assumes every individual is capable of succinctly communicating to the AI what they want. And the AI is capable of maintaining it as underlying platforms and libraries shift.

I think there are people who want to use software to accomplish a goal, and there are people who are forced to use software. The people who only use software because the world around them has forced it on them, either through work or friends, are probably cognitively excluded from building software.

The people who seek out software to solve a problem (I think this is most people) and compare alternatives to see which one matches their mental model will be able to skip all that and just build the software they have in mind using AI.

> And that there is little value in reusing software initiated by others.

I think engineers greatly over-estimate the value of code reuse. Trying to fit a round peg in a square hole produces more problems than it solves. A sign of an elite engineer is knowing when to just copy something and change it as needed rather than call into it. Or to re-implement something because the library that does it is a bad fit.

The only time reuse really matters is in network protocols. Communication requires that both sides have a shared understanding.

calvinmorrison|10 days ago

no but if the old '10x developer' is really 1 in 10 or 1 in 100, they might just do fine while the rest of us, average PHP enjoyers, may fall by the wayside

Thanemate|9 days ago

>This assumes every individual is capable of succinctly communicating to the AI what they want. And the AI is capable of maintaining it as underlying platforms and libraries shift.

It's true that at first not everyone will be equally efficient, but I'd be lying if I were to claim that someone needs a 4-year degree to communicate with LLMs.

Gud|9 days ago

I love this optimistic take.

Unfortunately, I believe the following will happen: by positioning themselves close to lawmakers, the AI companies will in the near future declare ownership of all software code developed using their software.

They will slowly erode their terms of service, as happens to most internet software, step by step, until they claim total ownership.

The point is to license the code.

theshrike79|9 days ago

> AI companies will in the near future declare ownership of all software code developed using their software.

(X) Doubt

Copyright law is WEEEEEEIRRRDD and our in-house lawyer is very much into that, personally and professionally. An example they gave us during a presentation:

A monkey took a selfie of itself in 2011. We still don't know who has the copyright to that image: https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

IIRC the latest resolution is "it's not the monkey", but nobody has ruled the photographer has copyright either. =)

Copyright law has this thing called "human authorship" that's required to apply copyright to a work. Animals and machines can't have a copyright to anything.

A second example: https://en.wikipedia.org/wiki/Zarya_of_the_Dawn

A comic illustrated with Midjourney had the copyright on its images revoked when it was discovered the art was done with generative AI (the human-written text kept its protection).

AI companies have absolutely mindboggling amounts of money, but removing the human authorship requirement from copyright is beyond even them in my non-lawyer opinion. It would bring the whole system crashing down and not in a fun way for anyone.

alwillis|9 days ago

> the AI companies will in the near future declare ownership of all software code developed using their software.

Pretty sure this isn’t going to happen. AI is driving the cost of software to zero; it’s not worth licensing something that’s a commodity.

It’s similar to 3D printing companies. They don’t have IP claims on the items created with their printers.

The AI companies currently don’t have IP claims on what their agents create.

Uncle Joe won’t need to pay OpenAI for the solitaire game their AI made for him.

The open source models are quite capable; in the near future there won’t be a meaningful difference for the average person between a frontier model and an open source one for most uses including creating software.

overgard|9 days ago

AFAIK you can't copyright AI generated content. I don't know where that gets blurry when it's mixed in with your own content (ie, how much do you need to modify it to own it), but I think that by that definition these companies couldn't claim your code at all. Also, with the lawsuit that happened to Anthropic where they had to pay billions for ingesting copyrighted content, it might actually end up working the other way around.

thewebguyd|9 days ago

> In the end it will be the users sculpting formal systems like playdoh.

I’m very skeptical of this unless the AI can manage to read and predict emotion and intent based off vague natural language. Otherwise you get the classic software problem of “What the user asked for directly isn’t actually what they want/need.”

You will still need at least some experience with developing software to actually get anything useful. The average “user” isn’t going to have much success for large projects or translating business logic into software use cases.

thwarted|10 days ago

> We pay huge communication/synchronization costs to eek out mild speed ups on projects by adding teams of people.

Something Brooks wrote about 50 years ago, and which the industry has never fully acknowledged: throw more bodies at it, be they human bodies or bot agents.
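
The arithmetic behind that cost is simple: pairwise communication channels grow quadratically with team size, while hands-on capacity grows at best linearly. A quick sketch (mine, not Brooks's):

```python
# Back of the envelope for Brooks's law: a team of n people has
# n*(n-1)/2 pairwise communication channels, so coordination
# overhead grows quadratically while headcount grows linearly.

def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(f"{n:>2} people -> {channels(n):>4} channels")
```

Doubling the team quadruples the coordination surface, which is why "just add people" so rarely doubles the output.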

quietbritishjim|9 days ago

The point of The Mythical Man-Month is not that more people are necessarily worse for a project; it's that adding them at the last minute doesn't work, because they take a while to get up to speed and existing project members are distracted while trying to help them.

It's true that a larger team, formed well in advance, is also less efficient per person, but they still can achieve more overall than small teams (sometimes).

falcor84|10 days ago

But there is an order of magnitude difference between coordinating AI agents and coordinating humans - the AIs are so much faster and more consistent than humans that you can (as Steve Yegge [0] and Nicholas Carlini [1] showed) have them build a massive project from scratch in a matter of hours and days rather than months and years. The coordination cost is so much lower that it's just a different ball game.

[0] https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

[1] https://www.anthropic.com/engineering/building-c-compiler

chunkmonke99|6 days ago

I don't understand this line of reasoning. Like, genuinely. So with AI coding (let's just limit ourselves to coding): are you saying that the agent is going to prompt itself? That it exists only to read your mind and create precisely the code you wanted, or didn't even know you wanted? Or will you have to explain and verify that it did what you asked? At some point we run into magical thinking and absurdities.

Programming and math are not like chess or Go. There is no endgame to win. And where the human input/judgement/whatever begins or ends isn't a technical issue but a political one.

So my question: are you expecting that at some time N that models are so good that they can read your mind? Or are you saying that you will just be able to "speak" into existence any type of software? And how are you going to specify this if you can't already point to something similar?

overgard|10 days ago

Well, without the self soothing I think what's left is pitchforks.

capital_guy|9 days ago

Maybe it's time for pitchforks.

mossTechnician|9 days ago

Everybody in the world is now a programmer. This is the miracle of artificial intelligence.

- Jensen Huang, February 2024

https://www.techradar.com/pro/nvidia-ceo-predicts-the-death-...

codr7|9 days ago

God help us!

Far from everyone is cut out to be a programmer; the technical barrier was a feature if anything.

There's a kind of mental discipline, an ability to think long thoughts and to deal with uncertainty, that's just not for everyone.

What I see is mostly everyone and their gramps drooling at the idea of faking their way to fame and fortune. Which is never going to work, because everyone is regurgitating the same mindless crap.

koonsolo|9 days ago

The problem I mostly see with non programmers is that they don't really grasp the concept of a consistent system.

A lot of people want X, but they also want Y, while clearly X and Y cannot coexist in the same system.

overgard|9 days ago

Remember when Visual Basic was making everyone a programmer too?

(btw, warm fuzzies for VB since that's what I learned on!) But ultimately, those VB tools business people were making were:

1) Useful, actually!

2) Didn't replace professional software. Usually it'd hit a point where, if it needed to evolve past its initial functionality, it probably required an actual software developer (i.e., not using Access as a database, and all the other eccentricities of VB apps at that time).

toss1|9 days ago

This looks like the same problem as when the first page layout software came out.

It looked to everyone like a huge leap into a new world: word processing applications could basically move blocks of text around to be output later, maybe with a few font tags, and then this software came out that, wow, actually showed the different fonts, sizes, and colors on the screen as you worked! With apps like "Pagemaker" everyone would become their own page designer!

It turned out that everyone just churned out floods of massively ugly documents and marketing pieces that looked like ransom notes pasted together from bits of magazines. Years of awfulness.

The same is happening now, as we are doomed to endure years of AI slop in everything from writing to apps to products to vending machines and entire companies — everyone and their cousin is trying to fully automate it.

Ultimately it does create an advance and allows more and better work to be done, but only for people who have a clue about what they are doing, and eventually things settle at a higher level where the experts in each field take the lead.

andrei_says_|9 days ago

LLM technology has no connection to reality, nor any avenue to actual understanding.

Correcting conceptual errors requires understanding.

Vomiting large amounts of inscrutable unmaintainable code for every change is not exactly an ideal replacement for a human.

We have not started to scratch the surface of the technical debt created by these systems at lightning speed.

wiseowise|9 days ago

> We have not started to scratch the surface of the technical debt created by these systems at lightning speed.

Bold of you to assume anyone cares about it. Or that it’ll somehow guarantee your job security. They’ll just throw more LLMs on it.

falcor84|10 days ago

> AI will leverage me

I think I know what you mean, and I do recall once seeing "this experience will leverage me" as indicating that something will be good for a person, but my first thought when seeing "x will leverage y" is that x will step on top of y to get to their goal, which does seem apt here.

veunes|9 days ago

Communication overhead between humans is real, but it's not just inefficiency; it's also where a lot of the problem-finding happens. Many of the biggest failures I've seen weren't because nobody could type the code fast enough, but because nobody realized early enough that the thing being built was wrong, brittle, or solving the wrong problem.

wiseowise|9 days ago

> Many of the biggest failures I've seen weren't because nobody could type the code fast enough, but because nobody realized early enough that the thing being built was wrong, brittle or solving the wrong problem

Around 99% of the biggest failures come from absent, shitty management prioritizing next quarter over long-term strategy. YMMV.

lich_king|9 days ago

> There's an undertone of self-soothing "AI will leverage me, not replace me",

Which is especially hilarious given that this article is largely or entirely LLM-generated.

Abstract_Typist|9 days ago

> it will be the users sculpting formal systems like playdoh.

People are pushing back against this phrase, but on some level it seems perfect, it should be visualized and promoted!

aydyn|9 days ago

I think Lego is a better analogy. LLMs aren't great at working on novel cutting edge problems.

zombot|9 days ago

> I would rather a single human (for now) architect with good taste and an army of agents than a team of humans.

A human might have taste, but AI certainly doesn't.

dsego|9 days ago

It has average taste, based on the code it was trained on. For example, every time I attempted to polish the UX it wanted to add a toast system; I abhor toasts as a UX pattern. But it also provided elegant backend designs I hadn't even considered.

elevatortrim|9 days ago

I’d say AI has better taste than an average human but definitely not the taste you would see in competent people around you.

MattGaiser|9 days ago

> We pay huge communication/synchronization costs to eek out mild speed ups on projects by adding teams of people.

I am surprised at how little this is discussed and how little urgency there is in fixing this if you still want teams to be as useful in the future.

Your standard agile ceremonies were always kind of silly, but it can now take more time to groom work than to do it. I can plausibly spend more time scoring and scoping work (especially trivial work) than doing the work.

georgefrowny|9 days ago

It's always been like that. Waterfall development was worse and that's why the Agilists invented Agile.

YOLOing code into a huge pile at top speed is always faster than any other workflow at first.

The thing is, a gigantic YOLO'd code pile (fake it till you make it mode) used to be an asset as well as a liability. These days, the code pile is essentially free - anyone with some AI tools can shit out MSLoCs of code now. So it's only barely an asset, but the complexity of longer term maintenance is superlinear in code volume so the liability is larger.

teaearlgraycold|9 days ago

Well of course. In the long run AI will do almost all tasks that can be done from a computer.

TacticalCoder|9 days ago

> especially in the long run, at least in software

"at least in software".

Before that happens, the world as we know it will already have changed so much.

Programmers have already automated many things, way before AI, and now they've got a new tool to automate even more things. Sure, in the end AI may automate programmers themselves: but not before oh-so-many people are out of a job.

A friend of mine is a translator: translation tolerates approximation, some level of bullshittery. She gets maybe 1/10th the jobs she used to get and she's now in trouble. My wife now does all her SMEs' websites all by herself, with the help of AI tools.

A friend of my wife is a junior lawyer (another domain where bullshitting flies high), and the reason she was kicked out of her company: "we've replaced you with LLMs". LLMs are the ultimate bullshit producers, so it's no surprise junior lawyers are now having a hard time.

In programming, a single character is the difference between a security hole and no security hole. There's a big difference between something that kinda works but is neither performant nor secure, and, say, Linux or Git or K8s (which AI models do run on and which AI didn't create).

The day programmers are replaced shall only come after AI shall have disrupted so many other jobs that it should be the least of our concerns.

Translators, artists (another domain where lots of approximative full-on bullshit is produced), lawyers (juniors at least) even, are having more and more problems due to half-arsed AI outputs coming after their jobs.

It's all the bullshitty jobs where bullshit that tolerates approximation is the output that are going to be replaced first. And the world is full of bullshit.

But you don't fly a 767 and you don't design a machine that treats brain tumors with approximations. This is not bullshit.

There shall be non-programmers with pitchforks burning datacenters or ubiquitous UBI way before AI shall have replaced programmers.

That it's an exoskeleton for people who know what they're doing rings very true: it's yet another superpower for devs.

its-kostya|9 days ago

How does a single human acquire said "good taste" for architecting?

lp4v4n|9 days ago

>In the end it will be the users sculpting formal systems like playdoh.

Yet another person who thinks there is a silver bullet for complexity. The mythical intelligent machine that can erect flawless complex systems from poorly described natural language is like the philosopher's stone of our time.

hun3|8 days ago

eke*

(yes, I'm dying on this hill)

benreesman|10 days ago

I'm rounding the corner on a ground-up reimplementation of `nix` in what is now about 34 hours of wall clock time. I have almost all of it on `wf-record` and I'll post a stream, but you can see the commit logs here: https://github.com/straylight-software/nix/tree/b7r6/correct...

Everyone has the same ability to use OpenRouter, I have a new event loop based on `io_uring` with deterministic playbook modeled on the Trinity engine, a new WASM compiler, AVX-512 implementations of all the cryptography primitives that approach theoretical maximums, a new store that will hit theoretical maximums, the first formal specification of the `nix` daemon protocol outside of an APT, and I'm upgrading those specifications to `lean4` proof-bearing codegen: https://github.com/straylight-software/cornell.

34 hours.

Why can I do this and no one else can get `ca-derivations` to work with `ssh-ng`?

achierius|9 days ago

I mean, have you tried getting `ca-derivations` to work with `ssh-ng`? That sounds like a good way to answer your own question.

benreesman|9 days ago

And it's teachable.

Here's a colleague who is nearly done with a correct reimplementation of the OpenCode client/server API: https://github.com/straylight-software/weapon-server-hs

Here's another colleague with a Git forge that will always work and handle 100x what GitHub does per infrastructure dollar, while including stacked diffs and native Jujutsu support, in about 4 days: https://github.com/straylight-software/strayforge

Here's another colleague and a replacement for Terraform that is well-typed in all cases and will never partially apply an infrastructure change in about 4 days: https://github.com/straylight-software/converge

Here's the last web framework I'll ever use: https://github.com/straylight-software/hydrogen

That's all *begun* in the last 96 hours.

This is why: https://github.com/straylight-software/.github/blob/main/pro...