top | item 46065008


quirino | 3 months ago

> As a bonus, we look forward to fewer violations (exhibit A, B, C) of our strict no LLM / no AI policy,

Hilarious how the offender in "exhibit A" [1] is the same one from the other post that made the front page a couple of days ago [2].

[1] https://github.com/ziglang/zig/issues/25974

[2] https://news.ycombinator.com/item?id=46039274


dreamcompiler|3 months ago

My old rule about the difference between coding and software engineering:

  For coding, "it seems to work for me" is good enough. For software engineering, it's not.

My new rule:

  For coding, you can use AI to write your code. For software engineering, you can't.

bhouston|3 months ago

> For coding, you can use AI to write your code. For software engineering, you can't.

You can 100% use AI for software engineering. Just not by itself; you currently need to be quite engaged in the process to check it and redirect it.

But AI lowers the barrier to writing code, and thus it brings people with less rigour to the field, and they can do a lot of damage. But that isn't significantly different from how programming languages made coding more accessible than assembly language - and I am sure that this also allowed more people to cause damage.

You can use any tools you want, but you have to be rigorous about it no matter the tool.

networked|3 months ago

> For coding, you can use AI to write your code. For software engineering, you can't.

This is a pretty common sentiment. I think it equates using AI with vibe-coding: having AI write code without human review. I'd suggest amending your rule to this:

> For coding, you can use AI. For software engineering, you can't.

You can use AI in a process compatible with software engineering. Prompt it carefully to generate a draft, then have a human review and rework it as needed before committing. If the AI-written code is poorly architected or redundant, the human can use the same AI to refactor and shape it.

Now, you can say this negates the productivity gains. It will necessarily negate some. My point is that the result is comparable to human-written software (such as it is).

epolanski|3 months ago

I absolutely don't care about how people generate code, but they are responsible for every single line they push for review or merge.

That's my policy with each of my clients, and it works fine. If AI makes something simpler/faster, good for the author, but there are zero - none - excuses for pushing slop or code you haven't reviewed and tested yourself thoroughly.

If somebody thinks they can offload not just authoring or editing code, but also the responsibility for it and its impact on the whole codebase and the underlying business problem, they should be jobless ASAP, as they are de facto delegating the entirety of their job to a machine. They are not only providing zero value, but negative value in fact.

gherkinnn|2 months ago

I disagree with the new rule. The old one is fine and applies to LLMs.

Vibing and good enough is a terrible combination, as unknown elements of the system grow at a faster rate than ever.

Using LLMs while understanding every change and retaining a mental model of the system is fine.

Granted, I see vibe+ignorance way too often as it is the short-term path of least resistance in the current climate of RTO and bums in seats and grind and ever more features.

rootlocus|3 months ago

I feel like the distinction is equivalent to

    LLMs can make mistakes. Humans can't.

Humans can and do make mistakes all the time. LLMs can automate most of the boring stuff, including unit tests with 100% coverage. They can cover edge cases you ask them to, and they can even come up with edge cases you may not have thought about. This leaves you to do the review.

I think the underlying problem people have is that they don't trust themselves to review code written by others as much as they trust themselves to implement the code from scratch. Realistically, a very small subset of developers do actual "engineering" to the level of NASA / aerospace. Most of us just have inflated egos.

I see no problem modelling the problem, defining the components, interfaces, APIs, data structures, algorithms and letting the LLM fill in the implementation and the testing. Well designed interfaces are easy to test anyway and you can tell at a glance if it covered the important cases. It can make mistakes, but so would I. I may overlook something when reviewing, but the same thing often happens when people work together. Personally I'd rather do architecture and review at a significantly improved speed than gloat I handcrafted each loop and branch as if that somehow makes the result safer or faster (exceptions apply, ymmv).

brodo|3 months ago

Check out this dude: https://github.com/GhostKellz?tab=repositories

He's got like 50 repos with vibe-coded, non-working Zig and Rust projects. And he clearly manages to confuse people with it:

https://github.com/GhostKellz/zquic/issues/2

stocksinsmocks|3 months ago

I don’t think this is uncommon. At one point Lemmy was a project with thousands of stars and literally no working code until finally someone other than the owner adopted it and merged in a usable product.

PNewling|3 months ago

Wow, and if you go to the website listed in their profile, not only do almost none of the links work, the one that did just linked out to the generic template it was straight copied from. Wow.

forgotpwd16|3 months ago

It is questionable whether they've even tried any of them.

port11|3 months ago

Hustle hustle. I'm not disgusted by this person, but by the system that promotes or requires such behaviour.

mikelitoris|3 months ago

oh god... he has a humongous AI-generated PR for Julia too https://github.com/tshort/StaticCompiler.jl/pull/180

conartist6|3 months ago

Maybe this guy is it: the actual worst coder in the world

zipy124|3 months ago

I guess we now have the equivalent of cowboy builders, but for software. Except no one asked for anything to be built in this case lol.

ljm|3 months ago

The people of Jonestown collectively drank less kool-aid than all this.

I don't know whether to be worried or impressed.

joelreymont|3 months ago

I had $1000 in Claude credits and went to town.

Yes, I made mistakes along the way.

carlmr|3 months ago

I'm not sure if this is advanced trolling at this point.

amoss|3 months ago

This is redefining the cutting edge of trolling.

Sammi|3 months ago

I'll one-up you: at this point I'm becoming pretty sure that this is a person who actually hates LLMs, who is trying to poison the well by giving other people reasons to hate LLMs too.

cyanydeez|3 months ago

Is the AI bubble just billionaires larping about their favorite dystopian sci-fi?

rdtsc|3 months ago

My favorite of his https://x.com/joelreymont/status/1990981118783352952

> Claude discovered a bug in the Zig compiler and is in the process of fixing it!

...a few minutes later...

https://github.com/ziglang/zig/pull/25974

I can see a future job interview scenario:

- "What would you say is your biggest professional accomplishment, Joel?"

- "Well, I almost single-handedly drove Zig away from GitHub"

ivanjermakov|3 months ago

> Well, I almost single-handedly drove Zig away from GitHub

If you think about it, Joel is net positive to Zig and its community!

ljm|3 months ago

Those overly enthusiastic responses from the LLM are really going to do a number on people's egos.

aeve890|3 months ago

>MAJOR BREAKTHROUGH ACHIEVED

the bootlicking behavior must be like crack for wannabes. jfc

>I did not write a single line of code but carefully shepherded AI over the course of several days and kept it on the straight and narrow.

>AI: I need to keep track of variables moving across registers. This is too hard, let's go shopping… Me: Hey, don't take any shortcuts!

>My work was just directing, shaping, cajoling and reviewing.

How can people say that without the slightest bit of reflection on whether they're right or just spitting BS?

thorn|3 months ago

Ah. I remember that guy. Joel. He sold his poker server and bragged about it around HN a long time ago. He has been too much of a PR-stunt guy recently. Unfortunately, AI does not lead to people being nice in the end. The way people abuse other people using AI is crazy. Kudos to the OCaml maintainers for giving him a proper but polite f-off response.

jeffbee|3 months ago

I agree that's a funny coincidence. But what about the change it wanted to commit? It is at least slightly interesting. It is doubly interesting that changing line 638 neither breaks nor fixes any test.

quirino|3 months ago

There's a tweet with a Claude screenshot with a bit more context (linked on the PR).

I don't know enough about the project to know if it makes any sense, but the Zig contributor seemed confused (at least about the title).

wavemode|3 months ago

Perhaps the offset is always zero anyway in that scenario

But yeah, hard to say.

joelreymont|3 months ago

That one was poorly documented and may have been related to an issue in my code.

I would offer this one instead.

https://github.com/joelreymont/zig/pull/1

noname120|3 months ago

Can you stop wasting everyone’s time?

Levitating|3 months ago

Just reading his blog posts, gross. He not only thinks he is actively contributing, he thinks he deserves credit.

debugnik|3 months ago

Even after the public call-outs you keep dropping blatant ads for your blog and AI in general in your PRs; there's no other word for them than ads. This is why I blocked you on the OCaml forum already.

When I was a kid, every year I'd get so obsessed about Christmas toys that the hype would fill my thoughts to the point I'd feel dizzy and throw up. I genuinely think you're going through the adult version of that: your guts might be ok but your mind is so filled with hype that you're losing self-awareness.

rs186|3 months ago

I wonder why the maintainers haven't banned this dude yet.

Eldt|3 months ago

Stop the catfishes, please