top | item 46768054

GolDDranks|1 month ago

I feel like I'm taking crazy pills. The article starts with:

> you give it a simple task. You’re impressed. So you give it a large task. You’re even more impressed.

That has _never_ been the story for me. I've tried, and I've gotten some good pointers and hints about where to go and what to try, a result of LLMs' extensive if shallow reading, but in the sense of concrete problem solving or code/script writing, I'm _always_ disappointed. I've never gotten a satisfactory code/script result from them without a tremendous amount of pushback: "do this part again with ...", do that, don't do that.

Maybe I'm just a crank with too many preferences. But I hardly believe so. The minimum requirement should be for the code to work. It often doesn't. Feedback helps, right. But if you've got a problem where a simple, contained feedback loop isn't that easy to build, the only source of feedback is yourself. And that's when you are exposed to the stupidity of current AI models.

b33j0r|1 month ago

I usually do most of the engineering and it works great for writing the code. I’ll say:

> There should be a TaskManager that stores Task objects in a sorted set, with the deadline as the sort key. There should be methods to add a task and pop the current top task. The TaskManager owns the memory when the Task is in the sorted set, and the caller to pop should own it after it is popped. To enforce this, the caller to pop must pass in an allocator and will receive a copy of the Task. The Task will be freed from the sorted set after the pop.

> The payload of the Task should be an object carrying a pointer to a context and a pointer to a function that takes this context as an argument.

> Update the tests and make sure they pass before completing. The test scenarios should relate to the use-case domain of this project, which is home automation (see the readme and nearby tests).
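The spec above reads like Zig or C with an explicit allocator, but its shape can be sketched in Python. This is only an illustration: `pop` returning a deep copy stands in for the allocator-based ownership transfer, and `Payload` is my name for the context-plus-function object; the `TaskManager`/`Task`/`add`/`pop` names come from the prompt itself.

```python
import copy
import heapq
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class Payload:
    """The Task payload: a context plus a function taking that context."""
    context: Any
    fn: Callable[[Any], Any]


@dataclass(order=True)
class Task:
    deadline: float
    payload: Payload = field(compare=False)  # ignored when ordering by deadline


class TaskManager:
    """Owns Tasks while they sit in the deadline-ordered heap."""

    def __init__(self) -> None:
        self._heap: list[Task] = []  # min-heap keyed by deadline

    def add(self, task: Task) -> None:
        heapq.heappush(self._heap, task)

    def pop(self) -> Task:
        # The caller receives a copy and the manager drops its reference,
        # standing in for the ownership transfer in the original spec.
        top = heapq.heappop(self._heap)
        return copy.deepcopy(top)
```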

dietr1ch|1 month ago

I feel that with such an elaborate description you aren't too far away from writing that yourself.

If that's the input needed, then I'd rather write the code and rely on smarter autocomplete; that way, while I write the code and think about it, I can judge whether the LLM is doing what I mean to do, or straying from something reasonable to write and maintain.

logicprog|1 month ago

Yeah, I feel like I get really good results from AI, and this is very much how I prompt as well. It just takes care of writing the code, making sure to update everything that is touched by that code guided by linters and type-checkers, but it's always executing my architecture and algorithm, and I spend time carefully trying to understand the problem before I even begin.

gedy|1 month ago

What you’re describing makes sense, but that type of prompting is not what people are hyping

varispeed|1 month ago

This is a good start. I write prompts as if I were instructing a junior developer to do the stuff I need. I make it as detailed and clear as I can.

I actually don't like _writing_ code, but enjoy reading it. So sessions with an LLM are very entertaining, especially when I want to push boundaries ("I'm not liking this, the code seems a little bit bloated. I'm sure you could simplify X and Y. Also think of any alternative approach that you reckon would be more performant that maybe I don't know about"). Etc.

This doesn't save me time, but makes work so much more enjoyable.

apercu|1 month ago

This is similar to how I prompt, except I start with a text file and design the solution, then paste it into an LLM after I have read it a few times. Otherwise, if I type directly into the LLM and make a mistake, it tends to come back and haunt me later.

threethirtytwo|1 month ago

I think it’s usage patterns. It is you in a sense.

You can't deny that someone like Ryan Dahl, creator of Node.js, declaring that he no longer writes code is objectively contrary to your own experience. Something is different.

I think you and other deniers try one prompt, see the issues, and stop.

Programming with AI is like tutoring a child. You teach the child, tell it where it made mistakes and you keep iterating and monitoring the child until it makes what you want. The first output is almost always not what you want. It is the feedback loop between you and the AI that creates something better than either side of the human-AI partnership could produce alone.

CivBase|1 month ago

> Programming with AI is like tutoring a child. You teach the child, tell it where it made mistakes and you keep iterating and monitoring the child until it makes what you want.

Who are you people who spend so much time writing code that this is a significant productivity boost?

I'm imagining doing this with an actual child and how long it would take for me to get a real return on investment at my job. Nevermind that the limited amount of time I get to spend writing code is probably the highlight of my job and I'd be effectively replacing that with more code reviews.

GorbachevyChase|1 month ago

My personal suspicion is that the detractors value process and implementation details much more highly than results. That would not surprise me if you come from a business that is paid for its labor inputs and is focused on keeping a large team billable for as long as possible. But I think hackers and garage coders see the value of “vibing” as they are more likely to be the type of people who just want results and view all effort as margin erosion rather than the goal unto itself.

The only thing I would change about what you said is, I don’t see it as a child that needs tutoring. It feels like I’m outsourcing development to an offshore consultancy where we have no common understanding, except the literal meaning of words. I find that there are very, very many problems that are suited well enough to this arrangement.

Balinares|1 month ago

Nah, I'm with you there. I've yet to see even Opus 4.5 produce something close to production-ready -- in fact Opus seems like quite a major defect factory, given its consistent tendency toward hardcoding case by case workarounds for issues caused by its own bad design choices.

I think uncritical AI enthusiasts are just essentially making the bet that the rising mountains of tech debt they are leaving in their wake can be paid off later on with yet more AI. And you know, that might even work out. Until such a time, though, and as things currently stand, I struggle to understand how one can view raw LLM code and find it acceptable by any professional standard.

jasondigitized|1 month ago

I feel like I am taking crazy pills. I am getting code that works from Opus 4.5. It seems like people are living in two separate worlds.

ruszki|1 month ago

Working code doesn’t mean the same for everyone. My coworker just started vibe coding. Her code works… on happy paths. It absolutely doesn’t work when any kind of error happens. It’s also absolutely impossible to refactor it in any way. She thinks her code works.

The same coworker asked to update a service to Spring Boot 4. She made a blog post about it. She used an LLM for it. So far, every point I've read has been a lie, and her workarounds make, for example, the tests unnecessarily less readable.

So yeah, "it works", until it doesn't, and then it hits you that you end up doing more work in total, because there are more obscure bugs, and fixing them is more difficult because of the terrible readability.

WarmWash|1 month ago

I can't help but think of my earliest days of coding, 20-ish years ago, when I would post my code online looking for help on a small thing and be told that my code was garbage and didn't work at all, even though it actually was working.

There are many ways to skin a cat, and in programming the happens-in-a-digital-space aspect seems to remove all boundaries, leading to fractal ways to "skin a cat".

A lot of programmers are hard-headed and "know" the right way to do something. These are the same guys who criticized every other senior dev as a bad/weak coder long before LLMs were around.

crystal_revenge|1 month ago

Parent's profile shows that they are an experienced software engineer in multiple areas of software development.

Your own profile says you are a PM whose software skills amount to "Script kiddie at best but love hacking things together."

It seems like the "separate worlds" you are describing are the impressions of reviewing the code base as a seasoned engineer vs. as an amateur. It shouldn't be even a little surprising that your impression of the result is rosier than that of a more experienced developer.

At least in my experience, learning to quickly read a code base is one of the later skills a software engineer develops. Generally only very experienced engineers can dive into an open source code base to answer questions about how the library works and is used (typically, most engineers need documentation to aid them in this process).

I mean, I've dabbled in home plumbing quite a bit, but if AI instructed me to repair my pipes and I thought it "looked great!" but an experienced plumber's response was "ugh, this doesn't look good to me, lots of issues here" I wouldn't argue there are "two separate worlds".

zeroCalories|1 month ago

It depends heavily on the scope and type of problem. If you're putting together a standard isolated TypeScript app from scratch it can do wonders, but many large systems are spread between multiple services, use abstractions unique to the project, and are generally dealing with far stricter requirements. I couldn't depend on Claude to do some of the stuff I'd really want, like refactor the shared code between six massive files without breaking tests. The space I can still have it work productively in is still fairly limited.

GoatInGrey|1 month ago

That's a significant rub with LLMs, particularly hosted ones: the variability. Add in quantization, speculative decoding, and dynamic adjustment of temperature, nucleus sampling, attention head count, & skipped layers at runtime, and you can get wildly different behaviors with even the same prompt and context sent to the same model endpoint a couple hours apart.

That's all before you even get to all of the other quirks with LLMs.
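To make a couple of those knobs concrete, here's a toy sketch (not any vendor's actual implementation) of how temperature and nucleus/top-p sampling shape the next-token choice; the other factors listed above (quantization, layer skipping, etc.) add further variation on top of this:

```python
import math
import random


def sample_token(logits, temperature=1.0, top_p=1.0, rng=random):
    """Toy temperature + nucleus (top-p) sampling over raw logits."""
    # Temperature (> 0) rescales logits: low values sharpen the
    # distribution toward the top token, high values flatten it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus sampling: keep the smallest set of tokens whose cumulative
    # probability reaches top_p, then sample from that renormalized set.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With `top_p` near zero this degenerates to greedy decoding; with high temperature and `top_p=1.0` even low-probability tokens get sampled, which is one reason identical prompts can diverge.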

HarHarVeryFunny|1 month ago

That is such a vague claim that there is no contradiction.

Getting code to do exactly what, based on using and prompting Opus in what way?

Of course it works well for some things.

jjice|1 month ago

I've found that the thing that made it really click for me was having reusable rules (each agent accepts these differently) that tell it the patterns and structure you want.

I have ones that describe what kinds of functions get unit vs integration tests, how to structure them, and the general kinds of test cases to check for (they love writing way too many tests IME). It has reduced the back and forth I have with the LLM telling it to correct something.

Usually, the first time it does something I don't like, I have it correct it. Once it's in a satisfactory state, I tell it to write a Cursor rule describing the situation BRIEFLY (it gets way too verbose by default) and how to structure things.

That has made writing LLM code so much more enjoyable for me.

ActorNightly|1 month ago

It's really becoming a good litmus test of someone's coding ability whether they think LLMs can do well on complex tasks.

For example, someone may ask an LLM to write a simple HTTP web server, and it can do that fine, and they consider that complex, when in reality it's really not.

threethirtytwo|1 month ago

It's not. There are tons of great programmers who are big names in the industry and who now exclusively vibe code. Many of these names are obviously intelligent and great programmers.

This is an extremely false statement.

giancarlostoro|1 month ago

The secret sauce for me is Beads. Once Beads is set up, you make the tasks and refine them, and by the end each task is a very detailed prompt. I have Claude ask me clarifying questions, do research for best practices, etc.

Because of Beads I can have Claude do a code review for serious bugs and issues and sure enough it finds some interesting things I overlooked.

I have also seen my peers in the reverse engineering field make breakthroughs emulating runtimes that have no or limited existing runtimes, all from the ground up mind you.

I think the key is thinking of yourself as an architect / mentor for a capable and promising Junior developer.

nozzlegear|1 month ago

You're not taking crazy pills, this is my exact experience too. I've been using my wife's eCommerce shop (a headless Medusa instance, which has pretty good docs and even their own documentation LLM) as a 100% vibe-coded project using Claude Code, and it has been one comedy of errors after another. I can't tell you how many times I've had it go through the loop of Cart + Payment Collection link is broken -> Redeploy -> Webhook is broken (can't find payment collection) -> Redeploy -> Cart + Payment Collection link is broken -> Repeat. And it never seems to remember the reasons it had done something previously – despite it being plastered 8000 times across the CLAUDE.md file – so it bumbles into the same fuckups over and over again.

A complete exercise in frustration that has turned me off of all agentic code bullshit. The only reason I still have Claude Code installed is because I like the `/multi-commit` skill I made.

bofadeez|1 month ago

Yeah exactly this

dev_l1x_be|1 month ago

Well one way of solving this is to keep giving it simple tasks.

GoatInGrey|1 month ago

The other side of this coin are the non-developer stakeholders who Dunning-Kruger themselves into firm conclusions on technical subjects with LLMs. "Well I can code this up in an hour, two max. Why is it taking you ten hours?". I've (anecdotally) even had project sponsors approach me with an LLM's judgement on their working relationship with me as if it were gospel like "It said that we aren't on the same page. We need to get aligned." It gets weird.

These cases are common enough that it's more systemic than isolated.

hmaxwell|1 month ago

Exactly 100%

I read these comments and articles and feel like I am completely disconnected from most people here. Why not use GenAI the way it actually works best: like autocomplete on steroids. You stay the architect, and you have it write code function by function. Don't show up in Claude Code or Codex asking it to "please write me GTA 6 with no mistakes or you go to jail, please."

It feels like a lot of people are using GenAI wrong.

Grimblewald|1 month ago

Chances are you're asking it for things more interesting than some domain's hello-world example. Your experience has been mine as well. AI simply can't do anything other than the basics, even if you hold its hand. So its only use case is as a junior dev for senior devs who can't afford junior devs.

SCdF|1 month ago

I am getting workable code with Claude on a 10kloc Typescript project. I ask it to make plans then execute them step by step. I have yet to try something larger, or something more obscure.

brabel|1 month ago

Most agents do that by default now.

jasondigitized|1 month ago

This. I feel like folks are living in two separate worlds. You need to narrow the aperture and take the LLM through discrete steps. Are people just saying it doesn't work because they're pointing it at 1M-LOC monoliths and trying to one-shot a giant epic?

feifan|1 month ago

> Feedback helps, right. But if you've got a problem where a simple, contained feedback loop isn't that easy to build, the only source of feedback is yourself. And that's when you are exposed to the stupidity of current AI models.

That's exactly the point. Modern coding agents aren't smart software engineers per se; they're very very good goal-seekers whose unit of work is code. They need automatable feedback loops.
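That goal-seeking shape can be sketched in a few lines. A minimal illustration (the names `generate_fix` and `run_checks` are made up; in practice the check is a compiler, linter, or test suite, and its failure report becomes the next prompt):

```python
def agent_loop(generate_fix, run_checks, max_iters=5):
    """Drive a code-producing model with an automatable feedback loop.

    generate_fix(feedback) -> candidate (e.g. an LLM call with the last
    failure report appended); run_checks(candidate) -> (ok, report).
    """
    feedback = ""
    for _ in range(max_iters):
        candidate = generate_fix(feedback)   # model proposes code
        ok, report = run_checks(candidate)   # automatic scoring: tests, lints
        if ok:
            return candidate
        feedback = report                    # failures feed the next attempt
    return None                              # gave up: a human has to look
```

Without a `run_checks` that can be invoked mechanically, the only scorer left in the loop is you, which is exactly the situation the parent comment describes.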

Obscurity4340|1 month ago

It helps to write out the prompt in a separate text editor so you can edit it, trying to describe what the input is and what output you want, as well as to describe and catch likely or iteratively observed issues.

Then try a gamut of sample inputs and observe where it's going awry. Describe the error to it and see what it does.

vor_|23 days ago

With that much time and effort, it seems inefficient compared to just writing the code yourself.

__grob|1 month ago

It still amazes me that so many people can see LLMs writing code as anything less than a miracle in computing...

Balinares|1 month ago

I mean, a trained dog who plays the piano is a miracle in canine education, until such a point where you assess the quality of its performance.

echohack5|1 month ago

I have found AI great in a lot of scenarios, but if I have a specific workflow, then the answer is specific and the AI will get it wrong 100% of the time. You have a great point here.

A trivial example is your happy path git workflow. I want:

- pull main

- make new branch in user/feature format

- Commit, always sign with my ssh key

- push

- open pr

but it always will

- not sign commits

- not pull main

- not know to rebase if changes are in flight

- make a million unnecessary commits

- not squash when making a million unnecessary commits

- have no guardrails when pushing to main (oops!)

- add too many comments

- commit message too long

- spam the pr comment with hallucinated test plans

- incorrectly attribute itself as coauthor in some guerrilla marketing effort (fixable with config, but whyyyyyy -- also this isn't just annoying, it breaks compliance in a lot of places and fundamentally misunderstands the whole point of authorship, which is copyright -- and AIs can't own copyright)

- not make DCO compliant commits ...

Commit spam is particularly bad for bisect bug hunting and ref performance issues at scale. Sure, I can enforce Squash and Merge on my repo, but why am I relying on that if the AI is so smart?

All of these things are fixed with aliases / magit / cli usage, using the thing the way we have always done it.
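To make the alias route concrete, a hypothetical `~/.gitconfig` fragment for the happy path above (the alias names `start` and `shipit` are invented, the signing key path is a placeholder, and it assumes the GitHub `gh` CLI is installed):

```ini
[alias]
    # git start <feature>: update main, then branch as <user>/<feature>
    start = "!f() { git checkout main && git pull --rebase && git checkout -b \"$USER/$1\"; }; f"
    # git shipit: push the current branch and open a PR via the gh CLI
    shipit = "!git push -u origin HEAD && gh pr create --fill"
[commit]
    gpgsign = true              # sign every commit...
[gpg]
    format = ssh                # ...with an SSH key rather than GPG
[user]
    signingkey = ~/.ssh/id_ed25519.pub   # placeholder path
```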

ikrenji|1 month ago

Is commit history that useful? I never wanted to look up anything in it that couldn't be solved with git log | grep xyz...

furyofantares|1 month ago

> why am I relying on that if the AI is so smart?

Because it's not? I use these things very extensively to great effect, and the idea that you'd think of it as "smart" is alien to me, and seems like it would hurt your ability to get much out of them.

Like, they're superhuman at breadth and speed and some other properties, but they don't make good decisions.

GolDDranks|1 month ago

Just a supplementary fact: I'm in the advantageous position, relative to the AI, that in cases where it's hard to provide that automatic feedback loop, I can run and test the code at my discretion, whereas the AI model can't.

Yet. Most of my criticism comes not after running the code, but after _reading_ it. It wrote code. I read it. And I am not happy with it. No need to even run it; it's shit at a glance.

elevation|1 month ago

Over the weekend I generated a for-home-use-only PHP app with a popular CLI LLM product. The app met all my requirements, but the generated code was mixed. It correctly used a prepared query to avoid SQL injection. But then, instead of an obvious:

    "SELECT * FROM table WHERE id=1;" 
it gave me:

    $result = $db->query("SELECT * FROM table;");
    foreach ($result as $row)
        if ($row["id"] == 1)
            return $row;

With additional prompting I arrived at code I was comfortable deploying, but this kind of flaw cuts into the total time-savings.

ReverseCold|1 month ago

> I can run and test the code at my discretion, whereas the AI model can't.

It sounds like you know what the problem with your AI workflow is? Have you tried using an agent? (sorry somewhat snarky but… come on)

__MatrixMan__|1 month ago

You might get better code out of it if you give the AI some more restrictive handcuffs. Spin up a tester instance and have it tell the developer instance to try again until it's happy with the quality.

causalscience|1 month ago

You're not crazy, I'm also always disappointed.

My theory is that the people who are impressed are trying to build CRUD apps or something like that.

anthonypasq96|1 month ago

so 99% of all software?

t55|1 month ago

[deleted]

GolDDranks|1 month ago

I don't love these kinds of throwaway comments without any substance, but...

"It Is Difficult to Get a Man to Understand Something When His Salary Depends Upon His Not Understanding It"

...might be my issue indeed. Trying to balance it by not being too stubborn though. I'm not doing AI just to be able to dump on them, you know.