Warranty Void If Regenerated
519 points | Stwerner | 5 days ago | nearzero.software | reply
Over the last couple months, I've been building world bibles, writing and visual style guides, and other documents for this project… think the fiction equivalent of all the markdown files we use for agentic development now. After that, this was about two weeks of additional polish work to cut out a lot of fluff and a lot of the LLM-isms. Happy to answer any questions about the process too if that would be interesting to anybody.
[+] [-] donatj|5 days ago|reply
I did not realize this was AI generated until I came to the comments here... And I feel genuinely had? Like "oh wow, you got me"... I don't like this feeling.
It's certainly the longest thing (I know about) I've taken the time to read that was AI generated. The writing struck me as genuinely good, like something out of The New Yorker. I found the story really enjoyable.
I talk to AI basically all day, yet I am genuinely made uneasy by this.
[+] [-] hmokiguess|4 days ago|reply
He says he spent months on this piece and then some; I think it's safe to assume here that this was well supervised, guided, thoughtful, and full of human intent despite the AI-assisted part.
In short, I think calling it "AI generated" takes away from all the human effort that went into these months and the ingenious creativity the OP put toward crafting this piece!
Anyways, I enjoyed it. :)
[+] [-] _dwt|5 days ago|reply
Without the inferred writer, it's much less interesting to me, except as a reminder that models change and I can't rely on the old tics to spot LLM prose consistently any more.
[+] [-] jplusequalt|5 days ago|reply
One of the many things I love about art is when I encounter something that speaks to emotions I've yet to articulate into words. Few things are more tiring than being overwhelmed with emotion and lacking the ability to unpack what you're feeling.
So when I encounter art that's in conversation with these nebulous feelings, suddenly that which escaped my understanding can be given form. That formulation is like a lightning bolt of catharsis.
But I can't help but feel a piece of that catharsis is lost when I discover that it wasn't a human's hand that made the art, but a ball of linear algebra.
If I had to explain, I guess I would say that it's life affirming to know someone else out there in the world was feeling that unique blend of the human experience that I was. But now that AI is capable of generating text, images, music, etc. I can no longer tell if those emotions were shared by the author or if it was an artifact of the AI.
In this way, AI generated art seems more isolating? You can never be sure if what you're feeling is a genuine human experience or not.
[+] [-] throwaway2037|5 days ago|reply
Thinking deeper, it seems prudent that we tag submissions like this with a prefix. Example: "LLM: ". This would be similar to "Show HN: ". While we cannot control what the original sources choose to disclose, we can fill that gap ourselves.
My point: I agree with you: It is misleading that the blog post does not include a preface explaining it was written by an LLM (and ideally, the author's motivation to use an LLM). However, it is still a good blog post that has generated some thoughtful discussion on HN.
[+] [-] _carbyau_|5 days ago|reply
With stories, that shared experience is between author and reader. Book clubs etc. will try to extend that "shared experience", but primarily it is an author <-> reader relationship.
Remove that "shared feeling with the author" and what meaning does it have?
[+] [-] BoorishBears|5 days ago|reply
I think if you left it to its own devices, some of the narrative exposition stuff that humanized it would go off the rails.
[+] [-] dwd|5 days ago|reply
Personally I have an uneasiness with it and am correspondingly cautious. Often after a review and edits it loses that "smell". I kind of felt the same about NPM and package managers for a long time before using them became obligatory (for lack of a better word).
Are we conditioned to use other people's code unthinkingly, or is it something else?
[+] [-] travisgriggs|5 days ago|reply
And yet, in ironic counterpoint, there is a different artist I follow on Spotify that does EDM-fusion-various-world-genres. And it’s very clearly prompt generated. And that doesn’t bother me.
My hypothesis is that it has to do with how we connect/resonate with the creations. If they are merely for entertainment, then we care less. But if the creation inspired an emotion/reasoning that connects us to other humans, we feel betrayed, nay, abandoned, when it turns out to be synthetic.
[+] [-] Aeolun|5 days ago|reply
[+] [-] somat|5 days ago|reply
It feels great to use.
It feels terrible to have it used on you.
[+] [-] vinceguidry|4 days ago|reply
[+] [-] ralferoo|4 days ago|reply
Maybe the summary of the first section wouldn't have landed without the example, but "People who would spend $50,000 on elective surgery without blinking would balk at a $200 annual wellness check. The fix was always cheaper than the failure, the prevention was always cheaper than the fix, and somehow the money always flowed toward the crisis rather than away from it." explained the problem far more succinctly than the rambling prose before it.
I did notice something else AI about it - I really liked the art style for the illustrations, and had mixed emotions as my thought process was "I'd really like to learn how to draw like this, but I guess there's no point spending my time doing that because now I could just get an AI to generate it, and I guess that's the point of the article".
[+] [-] larodi|4 days ago|reply
[+] [-] xyzal|5 days ago|reply
[+] [-] dirkc|5 days ago|reply
[+] [-] nottorp|4 days ago|reply
Very few humans have managed this. This text is at the average level of "i want to pass the message and i'm trying to write professionally".
[+] [-] pjerem|5 days ago|reply
But I deeply feel that art only matters if there is an artist. The artist wants to convey something.
What makes you uneasy (if you are like me) is that a machine deliberately created emotions in your brain. And positive emotions, at that. It’s really something I can’t stand.
[+] [-] arikrahman|5 days ago|reply
[+] [-] nicbou|4 days ago|reply
[+] [-] sodapopcan|5 days ago|reply
Of course this has always been a bit of a problem with digital art trying to masquerade as the real thing... I always think of programmed drums using real drum samples. In my adult life I found out that an album I loved as a teenager that listed a real drummer as the performer was actually 100% programmed (this was an otherwise very "organic" sounding heavy guitar album). I always had my suspicions since it was so perfect, but I experienced exactly what you are describing. I also never got over it.
[+] [-] throwaway290|5 days ago|reply
and other stuff... it's not that good.
[+] [-] hmokiguess|4 days ago|reply
Call it what you want, but I think this sits better with "AI assisted": really well supervised, full of the human intent behind it. Then again, labels are strange; we call algorithmic and synthesizer-assisted music "electronic" music these days, and we still praise musicians who take the time, through endless Moog / Ableton fine-tuning sessions, to find the perfect loop patterns for their craft.
I could definitely feel the connection between the human author side of this post, thank you for sharing it!
[+] [-] petcat|4 days ago|reply
There are still plenty of purists who will not consider this a "craft". But it's always been that way. The electric guitar itself was a controversial transition in music; Bob Dylan was famously and heavily criticized for going electric.
But that was a long time ago, and people got over it. And they will again this time.
[+] [-] wrl|4 days ago|reply
[+] [-] furyofantares|5 days ago|reply
But I was able to get through the text, it's pretty good, you did great work cleaning it up. There's just a bit more to do to my taste.
The story is good.
[+] [-] helle253|5 days ago|reply
some inconsistencies that stuck out / that I found interesting:
- HWY 29 doesn't run through Marshfield; it's about 15 miles north.
- not a lot of people grow cabbage in central Wisconsin ;)
- no corrugated sheet metal buildings like in the first image around there
- I don't think there's a County Road K near Marshfield, not in Marathon County at least
FWIW I think this story is neat, but wrong about farmers and their outlooks: agriculture is probably one of the most data-driven industries out there, and there are not many family farmers left (the kind of farmers depicted in this story); it is largely industrial scale at this point.
All that said, as a fictional experiment its pretty cool!
[+] [-] nativeit|5 days ago|reply
“Yeah, I updated the silage ratios. What does that have to do with milk prices?”
“Everything.”
He showed Ethan the chain: feed tool regenerated → output format shifted → pricing tool misparsed → margins calculated wrong → prices dropped → contracts auto-negotiated at below-market rates. Five links, each one individually innocuous, collectively costing Ethan roughly $14,000.
Ethan looked ill.
--
I've re-read this a few times now, and can't work out how the interpreted price of feed going up and the interpreted margins going down results in a program setting lower prices on the resulting milk? I feel like this must have gotten reversed in the author's mind, since it's not like it's a typo, there are multiple references in the story for this cause and effect. Am I missing something?
[Edited for clarity]
[+] [-] lelanthran|4 days ago|reply
The error is that the LLM should have said that the costs went lower, not higher.
It got the overall logic correct, but had a nonsense sentence in the middle.
[+] [-] cluckindan|5 days ago|reply
[+] [-] cello305|5 days ago|reply
The per-head vs. per-hundredweight swap is actually plausible for inflating apparent costs: a dairy cow weighs 12-15 hundredweights, so a $5/head daily feed cost misread as $5/hundredweight would balloon to $60-75/head. So "feed expenses look much higher" checks out.
But then the pricing logic goes the wrong direction. Higher perceived costs -> lower calculated margin -> the rational response is to raise prices to restore margin, or at minimum flag the squeeze. Dropping prices when you think you're losing money on every unit is only coherent if the tool is running some kind of volume/elasticity model where it reasons "margins are tight, compete on price" — which is a legitimately dangerous default for spot milk contracts.
Most likely it's just a logic inversion in the story. Either the misparse inflated costs and the tool correctly raised prices (locking in above-market rates Ethan didn't notice because he was happy), or the misparse deflated costs and the tool undercut on price thinking it had headroom. Both are realistic failure modes. The version in the story mixes the two.
Fittingly, a specification error in a story about specification errors.
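To make the two directions concrete, here's a quick sketch of the misparse and the two pricing responses described above. All figures (cow weight, costs, base price, the 0.9 squeeze factor) are hypothetical illustrations, not numbers from the story.

```python
# Sketch of the per-head vs per-hundredweight misparse described above.
# All figures are hypothetical illustrations, not numbers from the story.

COW_WEIGHT_CWT = 14          # a dairy cow weighs roughly 12-15 hundredweight
TRUE_COST_PER_HEAD = 5.00    # actual daily feed cost, dollars per head

# The misparse: a per-head figure is read as per-hundredweight,
# so it gets scaled back up by the animal's weight.
perceived_cost = TRUE_COST_PER_HEAD * COW_WEIGHT_CWT   # $70/head, a 14x inflation

BASE_PRICE = 20.0  # hypothetical going market rate

def restore_margin(base_price, true_cost, perceived_cost):
    """The rational response: raise the price to preserve the old margin."""
    return perceived_cost + (base_price - true_cost)

def compete_on_price(base_price, squeeze=0.9):
    """The dangerous default: margins look tight, so undercut the market."""
    return base_price * squeeze

print(perceived_cost)                                              # 70.0
print(restore_margin(BASE_PRICE, TRUE_COST_PER_HEAD, perceived_cost))  # 85.0
print(compete_on_price(BASE_PRICE))  # 18.0 -- prices drop despite higher costs
```

The story's chain only holds together if the tool took the second branch, which is what makes it a plausible but underspecified default rather than a simple bug.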
[+] [-] girvo|5 days ago|reply
[+] [-] robot-wrangler|5 days ago|reply
The premise/structure/flavor of TFA is an almost pitch-perfect imitation of that kind of voice, to the point that I immediately flagged it as probably generated. I actually think a modern person would have some difficulty even in consciously mimicking it. There's an "aw shucks" yokel-thrown-into-the-future aspect to it. Plot-wise you have rural bicycle repair shop that expands operations to support nuclear reactors and that sort of thing. Substitute any of the more atomic-age stuff for AI stuff and you're mostly there. If you have some Amazing Stories from the 1920s on your shelf then you kind of know what I mean.
[+] [-] ajkjk|5 days ago|reply
* this is a good attempt at a work of art, but written in a generic style that detracts from it
* nobody making genuinely good attempts at art like this would also write so generically
* and if they were making it generic on purpose, they wouldn't be able to do it so flawlessly
* oh, it must be AI
I guess I can discern the presence of a human artist, but only in the idea, which just means it was a good prompt.
[+] [-] unknown|5 days ago|reply
[deleted]
[+] [-] hatthew|6 days ago|reply
I'm mildly thrown off by some inconsistencies. Carol says "I’ve been under-watering that spot on purpose for thirty years," and then a paragraph down Tom's thoughts say "Carol didn’t know that she under-watered the clay spot." Carol considers a drip irrigation timer the last acceptable innovation, but then the illustration points to the greenhouse as the last acceptable innovation. Several other things as well, mostly in the illustrations.
Are these real inconsistencies or am I misunderstanding? Was this story AI-assisted (in part or all)? Is this meta-commentary?
[+] [-] saint-evan|4 days ago|reply
The images hit that sweet spot too: just enough, and few and far between, to support the plot without getting in the way, visually clarifying without over-explaining. It all worked together even with minor contradictions around labelling. The inconsistencies weren't sticky enough to disrupt the plot at all.
Over the years I’ve seen an idea play out in movies, books, articles, short stories: that “humanity only unites when faced with an alien intelligence”. What gets me is how people can enjoy something like this, then immediately recoil once they figure out it was actually AI-assisted enough to be largely AI generated. Does that actually diminish the substance of what they just experienced? I don’t think it does, but I'm not gonna argue such a subjective stance.
Someone in the comments suggested tagging AI-assisted work with something like an “LLM:” prefix, similar to “Show HN:”. That feels weird to me. LLMs might not be sentient, but they’re clearly capable enough that the output should stand on its own, alongside the intent and effort of whoever’s guiding it. Pre-labeling it just bakes in bias before anyone even engages with the work. It’s not that far off from asking human authors to declare their race or nationality up front. 'Cause really, if nothing about my direct experience changed, why should my judgment?
In a tech-forward space like HN, I’d expect a stronger bias toward judging things on merit alone. Just read the thing. Let it speak first. I sincerely hope this isn't gonna be an 'LLM vs Humanity' thing 'cause personally, I find the idea of a different kind of intelligence extremely interesting.
[+] [-] rikschennink|5 days ago|reply
[+] [-] cortesoft|6 days ago|reply
However, I do wonder if it is a bit too hung up on the current state of the technology and the current issues we are facing. For example, the idea that the AI-coded tools won't be able to handle (or even detect) that upstream data has changed format or methodology. Why wouldn't this be something that AI just learns to deal with? There is nothing inherent in the problem that is impossible for a computer to handle. There is no reason to think AIs can't learn how to code defensively for this sort of thing. Even if it is something that requires active monitoring and remediation, surely even today's AIs could be programmed to monitor for these sorts of changes and modify existing code to match them when they occur. In the future, this will likely be even easier.
The same thing is true with the 'orchestration' job. People have already begun to solve this issue, with the idea of a 'supervisor' agent that designs the overall system and delegates tasks to the sub-systems. The supervisor agent can create and enforce the contracts between the various sub-systems. There is no reason to think this won't get even better.
We are SO early in this AI journey that I don't think we can yet fully understand what is simply impossible for an AI to ever accomplish and what we just haven't figured out yet.
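The "code defensively / enforce contracts" idea is cheap to sketch. This is a minimal illustration, not anything from the story; the field names, types, and payloads are all hypothetical:

```python
# Minimal sketch of defensive handling of upstream format drift:
# validate each upstream payload against the contract the downstream tool
# was built for, and flag (rather than silently misparse) any change.
# Field names and payloads here are hypothetical.

EXPECTED_CONTRACT = {
    "feed_cost": float,   # expected unit: dollars per head per day
    "herd_size": int,
}

def validate(payload: dict, contract: dict = EXPECTED_CONTRACT) -> list:
    """Return a list of contract violations; an empty list means the
    payload still matches the format the downstream tool expects."""
    problems = []
    for field, ftype in contract.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            problems.append(f"{field}: expected {ftype.__name__}, "
                            f"got {type(payload[field]).__name__}")
    for field in payload:
        if field not in contract:
            # an unexpected field is a likely sign the tool was regenerated
            problems.append(f"unexpected new field: {field}")
    return problems

# A regenerated upstream tool renames a field and changes its unit:
old = {"feed_cost": 5.0, "herd_size": 120}
new = {"feed_cost_per_cwt": 0.36, "herd_size": 120}

assert validate(old) == []
print(validate(new))  # flags the rename instead of misparsing it downstream
```

A supervisor agent enforcing this kind of check at every tool boundary is exactly the "contract between sub-systems" idea: the five-link failure chain in the story breaks at link two if the pricing tool refuses payloads it no longer recognizes.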
[+] [-] dawdler-purge|4 days ago|reply
But honestly, the ideas here are really good. The cascading failure from a weather model update, the spaghetti problem with forty tools nobody designed as a system, the $4 toggle switch being the most important tool --- that's sharper thinking about AI than most serious essays on the topic.
A lot of people who publish regularly can't write to this level of thinking. The prose could be cleaner, sure, but it made me think, which is more than most stories do.
[+] [-] deskamess|4 days ago|reply
[+] [-] jerf|4 days ago|reply
This story is itself the explanation of why we're not going to go this route at scale. It'll happen in isolated places for the indefinite future. But farmers are going to buy systems, generated by AIs or not, that have been field tested, and will be no more interested in calling new untested code into being for their own personal use on their own personal farm than they are today.
The limiting factor for future code won't be how much AI firepower someone has to bring to bear on a problem but how much "real world" there is to test the code against, because there is only going to be so much "real world" to go around.
(Expanded on: https://jerf.org/iri/post/2026/what_value_code_in_ai_era/ ).
[+] [-] PaulHoule|4 days ago|reply
That is, a lot of "broken software" has always been rooted in "an inadequate/incorrect specification". If problems in the spec are discovered up front they are cheap to fix; the further along you go in development or deployment, the more expensive they are to fix. AI doesn't change that. Maybe with AI it is 20% faster to fix [1] across the board, but it is still more expensive to fix things late -- you might think you are done with waterfall, but waterfall is not done with you!
[1] My 20% is pessimistic, but if you think you are 10x as productive with AI at putting functionality in front of customers in the long term with universal scope, I believe you've got the same misunderstanding about the product life cycle that I'm talking about.
[+] [-] ninalanyon|4 days ago|reply
"The tool had changed. The domain had not. People who understood the domain and could also diagnose specification problems were the most valuable people in any industry, and most of them, like Tom, had arrived at the job sideways from something else."
People my age and older arrived in the software business sideways too; in my case from physics and electronics. My background in physics was a great help to me later when programming in the domain of electrical machines because I could speak both languages so to say.
Much grander people than me came into software sideways as I was reminded when reading Bertrand Meyer's in memoriam of Tony Hoare; Tony Hoare's first degree was classics at Oxford.
So perhaps we aren't entering a new phase, merely returning to our roots with new tools.
[+] [-] Sky_Knight|4 days ago|reply
[+] [-] tengwar2|6 days ago|reply
I don't know if this is what the future will look like, but this looks realistic. And if my non-existent grandson starts re-coding my business without asking, he's going to spend the next six months using K&R C.
[+] [-] andai|6 days ago|reply
Edit: got it right!
https://news.ycombinator.com/item?id=47419681
[+] [-] Stwerner|6 days ago|reply
[+] [-] rswail|5 days ago|reply
Does that make the OP an "authoring mechanic"? Or an "AI editor"?
Douglas Adams had it right, the problem is not that the answer was useless, it was that people didn't know what the right question was.
[+] [-] samman|4 days ago|reply