top | item 47431237

Warranty Void If Regenerated

519 points | Stwerner | 5 days ago | nearzero.software | reply

As an experiment I started asking Claude to explain things to me with a fiction story and it ended up being really good, so I started seeing how far I could take it and what it would take to polish it enough to share publicly.

Over the last couple months, I've been building world bibles, writing and visual style guides, and other documents for this project… think the fiction equivalent of all the markdown files we use for agentic development now. After that, this was about two weeks of additional polish work to cut out a lot of fluff and a lot of the LLM-isms. Happy to answer any questions about the process too if that would be interesting to anybody.

319 comments

[+] donatj|5 days ago|reply
I'm trying to sort out my own emotions on this.

I did not realize this was AI generated while reading it until I came to the comments here... And I feel genuinely had? Like "oh wow, you got me"... I don't like this feeling.

It's certainly the longest thing (I know about) I've taken the time to read that was AI generated. The writing struck me as genuinely good, like something out of The New Yorker. I found the story really enjoyable.

I talk to AI basically all day, yet I am genuinely made uneasy by this.

[+] hmokiguess|4 days ago|reply
Maybe it's because your comment throws away a lot of relevant context from OP's submission on HN.

He says he spent months on this piece and then some; I think it's safe to assume this was well supervised, guided, thoughtful, and full of human intent despite the AI-assisted part.

In short, I think calling it "AI generated" erases all the human effort that went into these months and the ingenious creativity OP put toward crafting this piece!

Anyways, I enjoyed it. :)

[+] _dwt|5 days ago|reply
It's a major bummer. When I first read the story (a few days ago, maybe?) I thought it was an interesting metaphor that didn't quite line up with the observed details of software development with AI. I assumed the writer was a journalist or author with a non-technical background trying to explore a more "utopian" vision of where trends could go.

Without the inferred writer, it's much less interesting to me, except as a reminder that models change and I can't rely on the old tics to spot LLM prose consistently any more.

[+] jplusequalt|5 days ago|reply
I think it's a valid emotion to feel. I genuinely resonated with the story, but when I learned it was written by Claude it kind of left me feeling ... betrayed?

One of the many things I love about art is when I encounter something that speaks to emotions I've yet to articulate into words. Few things are more tiring than being overwhelmed with emotion and lacking the ability to unpack what you're feeling.

So when I encounter art that's in conversation with these nebulous feelings, suddenly that which escaped my understanding can be given form. That formulation is like a lightning bolt of catharsis.

But I can't help but feel a piece of that catharsis is lost when I discover that it wasn't a human's hand that made the art, but a ball of linear algebra.

If I had to explain, I guess I would say that it's life affirming to know someone else out there in the world was feeling that unique blend of the human experience that I was. But now that AI is capable of generating text, images, music, etc. I can no longer tell if those emotions were shared by the author or if it was an artifact of the AI.

In this way, AI generated art seems more isolating? You can never be sure if what you're feeling is a genuine human experience or not.

[+] throwaway2037|5 days ago|reply
I also had no idea this was LLM generated. After reading your comment, I had a similar emotional reaction.

Thinking deeper, it seems prudent that we tag submissions like this with a prefix. Example: "LLM: ". This would be similar to "Show HN: ". While we cannot control what the original sources choose to disclose, we can fill that gap ourselves.

My point: I agree with you: It is misleading that the blog post does not include a preface explaining it was written by an LLM (and ideally, the author's motivation to use an LLM). However, it is still a good blog post that has generated some thoughtful discussion on HN.

[+] _carbyau_|5 days ago|reply
Humans build friendships and relationships on shared experiences. There is an element of relationship-through-experiencing-a-thing. Whether it's going for a walk together or the classic first date template of dinner and a movie. The shared experience is the thing.

With stories, that shared experience is between author and reader. Book clubs etc. will try to extend that "shared experience", but primarily it is an author <-> reader relationship.

Remove that "shared feeling with the author" and what meaning does it have?

[+] BoorishBears|5 days ago|reply
I suspect (but don't know) that this had to be edited somewhat heavily or generated in isolated chunks: I've generated a lot of fiction with Claude and it has a chronic issue of overusing any literary device one might associate with good writing once it appears in the context window

I think if you left it to its own devices, some of the narrative exposition stuff that humanized it would go off the rails

[+] dwd|5 days ago|reply
There is an interesting dichotomy where we express an uncanny-valley revulsion to AI-generated text, art, video, and music, yet we seemingly go along with AI-generated code.

Personally I have an uneasiness with it and am correspondingly cautious. Often after a review and edits it loses that "smell". I kind of felt the same about NPM and package managers for a long time before using them became obligatory (for lack of a better word).

Are we conditioned to use other people's code unthinkingly, or is it something else?

[+] travisgriggs|5 days ago|reply
I had a similar experience a few days ago with some music on Spotify. It was an Irish Pub song, rendering some political satire that seemed pretty consistent with what I figure is a predominant Irish viewpoint. Since I holidayed in Ireland a while ago and adored the public there, I really liked it. I reveled in the fact that somewhere in Ireland, there was a band singing messages in pubs that resonated strongly with me. And then it was pointed out that it was AI. I was crushed. I went from feeling connected to some people across the pond, to feeling lonely.

And yet, in ironic counterpoint, there is a different artist I follow on Spotify that does EDM-fusion-various-world-genres. And it’s very clearly prompt generated. And that doesn’t bother me.

My hypothesis is that it has to do with how we connect/resonate with the creations. If they are merely for entertainment, then we care less. But if the creation inspired an emotion/reasoning that connects us to other humans, we feel betrayed, nay, abandoned, when it comes up being synthetic.

[+] Aeolun|5 days ago|reply
It's full of AI generated imagery. Why would it not be AI generated?
[+] somat|5 days ago|reply
The duality of generated content.

It feels great to use.

It feels terrible to have it used on you.

[+] vinceguidry|4 days ago|reply
I didn't know either, but wasn't surprised to find out. The writing was too... polished, in a way I'm starting to recognize more and more. The knowledge doesn't really impact my experience of having read it, but I'm looking forward to a day when AI agents can be trained out of the servile mentality. It directly affects everything they make.
[+] ralferoo|4 days ago|reply
Interesting. I didn't realise it was LLM generated either, but only came here after the first section to find out if it was worth reading the rest.

Maybe the summary of the first section wouldn't have landed without the example but "People who would spend $50,000 on elective surgery without blinking would balk at a $200 annual wellness check. The fix was always cheaper than the failure, the prevention was always cheaper than the fix, and somehow the money always flowed toward the crisis rather than away from it." explained the problem far more succinctly than the rambling prose before it.

I did notice something else AI about it - I really liked the art style for the illustrations, and had mixed emotions as my thought process was "I'd really like to learn how to draw like this, but I guess there's no point spending my time doing that because now I could just get an AI to generate it, and I guess that's the point of the article".

[+] larodi|4 days ago|reply
Well, contrary to many, I was not convinced, and suspected the content was LLM generated from the very beginning given the images and even the background. Something in the writing also didn't hit right.
[+] xyzal|5 days ago|reply
Yup. There should be a disclaimer or a "food tag". The implicit assumption in society is some human had written the text you read.
[+] dirkc|5 days ago|reply
I can't remember the exact phrasing, but I read somewhere long ago that what you read now, you become in 5 years. As in, right after reading something you think and deliberate about it, but 5 years from now it becomes part of your subconscious and you can't actively filter it.
[+] nottorp|4 days ago|reply
The thing is, if you want to convey a social/political message via fiction, you have to be a genius to make it neither boring nor uncanny.

Very few humans have managed this. This text is at the average level of "i want to pass the message and i'm trying to write professionally".

[+] pjerem|5 days ago|reply
I have the same issue with AI generated music : it can be quite good to say the least.

But I deeply feel that art only matters if there is an artist. The artist wants to convey something.

What makes you uneasy (if you are like me) is that a machine deliberately created emotions in your brain. And positive emotions, at that. It’s really something I can’t stand.

[+] arikrahman|5 days ago|reply
Well, FWIW, LLMs are trained to infer and fill in the blanks of books. It makes the headlines now and again that publishers put AI companies on the hook for unauthorized use, The New Yorker included.
[+] nicbou|4 days ago|reply
It's treachery, a betrayal of trust. It's the same feeling as when you get sweet-talked into overpaying for something. This time, you overpaid with your attention.
[+] sodapopcan|5 days ago|reply
Whether people know it or not, when they engage with art they are assuming a person not just made it but experienced it. I'm going to blow past the discussion of "what is art" here, but where something came from and how it was made has always mattered to me (you could draw parallels to food here if you wanted). One thing that has been on my mind a lot is a particular photograph I saw in the past few years (and I'm sure it's easy to find online): it's a POV shot taken by a person sitting atop a skyscraper with their feet dangling over the edge. There is just no way that anyone could in good faith claim that the same photo produced by "AI" could possibly have the same emotional impact as knowing someone actually went and did that. I think a lot of people may not even realize that when they see a painting, or even a photo as innocuous as a tree, their mind goes to the fact that the person who produced it went to the place the tree was in, had an experience, and chose to document that particular perspective. If they see a painting or drawing of something that is clearly "fantasy," they know that a person made this up in their crazy mind, and they experience their feelings on it (good or bad). "AI" (heavy quotes) is trying to trick us and rob us of this basic knowledge. Some see this as progress. I personally think it's fucking disgusting, but I've been wrong before.

Of course this has always been a bit of a problem with digital art trying to masquerade as the real thing... I always think of programmed drums using real drum samples. In my adult life I found out that an album I loved as a teenager, which listed a real drummer as the performer, was actually 100% programmed (this was an otherwise very "organic" sounding heavy guitar album). I always had my suspicions since it was so perfect, but I experienced exactly what you are describing. I also never got over it.

[+] throwaway290|5 days ago|reply
> She was sitting in one of the plastic chairs, holding a cup of the adequate coffee

and other stuff... it's not that good.

[+] hmokiguess|4 days ago|reply
Folks labeling this "AI generated" might be jumping the gun, considering OP described a process that took the last couple of months and then some for this project.

Call it what you want, but I think this sits better with "AI assisted": really well supervised and full of human intent behind it. Then again, labels are strange: we call algorithmic and synthesizer-assisted music "electronic" music these days, and we still praise musicians who take the time, through endless Moog / Ableton fine-tuning sessions, to find the perfect loop patterns for their craft.

I could definitely feel the connection between the human author side of this post, thank you for sharing it!

[+] petcat|4 days ago|reply
> we still praise musicians who take the time through endless Moog / Ableton fine-tuning sessions to find the perfect loop patterns for their craft.

There are still plenty of purists that will not consider this a "craft". But it's always been that way. The electric guitar itself was a controversial music transition. Bob Dylan was famously criticized heavily for going electric.

But that was a long time ago, and people got over it. And they will again this time.

[+] wrl|4 days ago|reply
How about "AI ghostwritten"? That's a much closer parallel, and some commercially successful musicians similarly are "ghost produced".
[+] furyofantares|5 days ago|reply
I guess I'm an expert on LLM-isms somehow; I thought they were still plentiful. They're present at the start but get significantly worse near the end, so I'm guessing you spent more time polishing up the first two-thirds or so.

But I was able to get through the text, it's pretty good, you did great work cleaning it up. There's just a bit more to do to my taste.

The story is good.

[+] helle253|5 days ago|reply
that's funny, i know where this story is set (i grew up there) - or at least, the place Claude was basing things off of

some inconsistencies that stuck out/i found interesting:

- HWY 29 doesn't run through marshfield, it's about 15 miles north.

- not a lot of people grow cabbage in central wisconsin ;)

- no corrugated sheet metal buildings like in the first image around there

- i don't think there's a county road K near Marshfield - not in Marathon county at least

fwiw i think this story is neat, but wrong about farmers and their outlooks - agriculture is probably one of the most data-driven industries out there, there are not many family farmers left (the kinds of farmers depicted in this story), it is largely industrial scale at this point.

All that said, as a fictional experiment its pretty cool!

[+] nativeit|5 days ago|reply
> The milk pricing tool consumed the feed tool’s output as one of its cost inputs. The format change hadn’t broken the connection — the data still flowed — but it had caused the pricing tool to misparse one field, reading a per-head cost as a per-hundredweight cost, which made the feed expenses look much higher than they were, which made the margin calculations come out lower, which made the recommended prices drop. “You changed your feed tool,” Tom said.

“Yeah, I updated the silage ratios. What does that have to do with milk prices?”

“Everything.”

He showed Ethan the chain: feed tool regenerated → output format shifted → pricing tool misparsed → margins calculated wrong → prices dropped → contracts auto-negotiated at below-market rates. Five links, each one individually innocuous, collectively costing Ethan roughly $14,000.

Ethan looked ill.

--

I've re-read this a few times now, and can't work out how the interpreted price of feed going up and the interpreted margins going down results in a program setting lower prices on the resulting milk? I feel like this must have gotten reversed in the author's mind, since it's not like it's a typo, there are multiple references in the story for this cause and effect. Am I missing something?

[Edited for clarity]

[+] lelanthran|4 days ago|reply
It divided one of the costs for milk by 100, hence the farmer selling for less than cost of production.

The error is that the LLM should have said that the costs went lower, not higher.

It got the overall logic correct, but had a nonsense sentence in the middle.

[+] cluckindan|5 days ago|reply
The entire story is AI slop. Tasty and enjoyable slop, but slop nonetheless.
[+] cello305|5 days ago|reply
You're not missing something — the chain is internally inconsistent as written.

The per-head vs. per-hundredweight swap is actually plausible for inflating apparent costs: a dairy cow weighs 12-15 hundredweights, so a $5/head daily feed cost misread as $5/hundredweight would balloon to $60-75/head. So "feed expenses look much higher" checks out.

But then the pricing logic goes the wrong direction. Higher perceived costs -> lower calculated margin -> the rational response is to raise prices to restore margin, or at minimum flag the squeeze. Dropping prices when you think you're losing money on every unit is only coherent if the tool is running some kind of volume/elasticity model where it reasons "margins are tight, compete on price" — which is a legitimately dangerous default for spot milk contracts.

Most likely it's just a logic inversion in the story. Either the misparse inflated costs and the tool correctly raised prices (locking in above-market rates Ethan didn't notice because he was happy), or the misparse deflated costs and the tool undercut on price thinking it had headroom. Both are realistic failure modes. The version in the story mixes the two.

Fittingly, a specification error in a story about specification errors.
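The parent's unit-swap arithmetic can be sketched in a few lines of Python (all figures hypothetical, chosen only to match the rough magnitudes above, not taken from the story):

```python
# Hypothetical illustration of a per-head cost misread as per-hundredweight:
# the same dollar figure gets multiplied by the animal's body weight in cwt.

COW_WEIGHT_CWT = 13.5       # a dairy cow is roughly 12-15 hundredweight
FEED_COST_PER_HEAD = 5.00   # illustrative $/head/day feed cost
HERD_SIZE = 100

# Correct reading: the cost applies once per animal.
correct_daily_cost = FEED_COST_PER_HEAD * HERD_SIZE

# Misparse: the same figure applied per hundredweight of body weight.
misparsed_daily_cost = FEED_COST_PER_HEAD * COW_WEIGHT_CWT * HERD_SIZE

print(correct_daily_cost)    # 500.0
print(misparsed_daily_cost)  # 6750.0 -- feed looks ~13.5x more expensive
```

So the "feed expenses look much higher" step is numerically plausible; only the subsequent price-drop step runs the wrong direction, as described above.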

[+] girvo|5 days ago|reply
I will say this is one of the few pieces of prose I've read that was AI generated that didn't immediately jump out as it (a couple of inconsistencies eventually grabbed me enough to come to the comments and see your post details which mention it - I'd clicked through from the HN homepage), so your polishing definitely worked! Quite a neat little story
[+] robot-wrangler|5 days ago|reply
I think this passes the sniff test only if you're not too familiar with this neighborhood of the training set. Not that the writing is bad but it's just derivative. I listen to stuff like "Lost Scifi" podcast almost daily for example, but there are many similar ones which are focused on reading classic stuff from the golden-age zines because it's all public domain.

The premise/structure/flavor of TFA is an almost pitch-perfect imitation of that kind of voice, to the point that I immediately flagged it as probably generated. I actually think a modern person would have some difficulty even in consciously mimicking it. There's an "aw shucks" yokel-thrown-into-the-future aspect to it. Plot-wise you have rural bicycle repair shop that expands operations to support nuclear reactors and that sort of thing. Substitute any of the more atomic-age stuff for AI stuff and you're mostly there. If you have some Amazing Stories from the 1920s on your shelf then you kind of know what I mean.

[+] ajkjk|5 days ago|reply
It was pretty obvious to me, but the train of thought was something like this:

* this is a good attempt at a work of art, but written in a generic style that detracts from it

* nobody making genuinely good attempts at art like this would also write so generically

* and if they were making it generic on purpose, they wouldn't be able to do it so flawlessly

* oh, it must be AI

I guess I can discern the presence of a human artist, but only in the idea, which just means it was a good prompt.

[+] hatthew|6 days ago|reply
A fun read!

I'm mildly thrown off by some inconsistencies. Carol says "I’ve been under-watering that spot on purpose for thirty years," and then a paragraph down Tom's thoughts say "Carol didn’t know that she under-watered the clay spot." Carol considers a drip irrigation timer the last acceptable innovation, but then the illustration points to the greenhouse as the last acceptable innovation. Several other things as well, mostly in the illustrations.

Are these real inconsistencies or am I misunderstanding? Was this story AI-assisted (in part or all)? Is this meta-commentary?

[+] saint-evan|4 days ago|reply
I really <i>REALLY</i> enjoyed this article and the direction it took me in. I went in with zero preconceptions, just read it straight through, and only after opening the comments did I realize it was largely AI-assisted. Even then, I was very pleasantly surprised. The piece takes you by the hand and leads you through a very deliberate and directed journey. Sure, there are moments where things wobble a bit, like some explanations around specific failures getting a little tangled and even contradictory, but none of that registered as “this must be AI.” I’m only noticing those things now, in hindsight, like oh, that’s what that was.

The images hit that sweet spot too. Just enough, and few and far between, to support the plot without getting in the way, visually clarifying without over-explaining. It all worked together even with minor contradictions around labelling. The inconsistencies weren't sticky enough to disrupt the plot at all.

Over the years I’ve seen an idea play out in movies, books, articles, and short stories: that humanity only unites when faced with an alien intelligence. What gets me is how people can enjoy something like this, then immediately recoil once they figure out it was AI-assisted enough to be largely AI generated. Does that actually diminish the substance of what they just experienced? I don’t think it does, but I'm not gonna argue such a subjective stance.

Someone in the comments suggested tagging AI-assisted work with something like an “LLM:” prefix, similar to “Show HN:”. That feels weird to me. LLMs might not be sentient, but they’re clearly capable enough that the output should stand on its own, alongside the intent and effort of whoever’s guiding it. Pre-labeling it just bakes in bias before anyone even engages with the work. It’s not that far off from asking human authors to declare their race or nationality up front. 'Cause really, if nothing about my direct experience changed, why should my judgment?

In a tech-forward space like HN, I’d expect a stronger bias toward judging things on merit alone. Just read the thing. Let it speak first. I sincerely hope this isn't gonna be an 'LLM vs Humanity' thing 'cause personally, I find the idea of a different kind of intelligence extremely interesting.

[+] rikschennink|5 days ago|reply
When I noticed the article header image was generated with AI my interest in reading the article itself dropped to zero.
[+] cortesoft|6 days ago|reply
I do enjoy this sort of speculative fiction that imagines the future consequences of something in its early stages, like AI is right now. There are some interesting ideas in here about where the work will shift.

However, I do wonder if it is a bit too hung up on the current state of the technology and the current issues we are facing. For example, the idea that the AI-coded tools won't be able to handle (or even detect) that upstream data has changed format or methodology. Why wouldn't this be something that AI just learns to deal with? There is nothing inherent in the problem that is impossible for a computer to handle. There is no reason to think AIs can't learn how to code defensively for this sort of thing. Even if it is something that requires active monitoring and remediation, surely even today's AIs could be programmed to monitor for these sorts of changes and modify existing code to match them when they occur. In the future, this will likely be even easier.

The same thing is true of the 'orchestration' job. People have already begun to solve this issue with the idea of a 'supervisor' agent that designs the overall system and delegates tasks to the sub-systems. The supervisor agent can create and enforce the contracts between the various sub-systems. There is no reason to think this won't get even better.

We are SO early in this AI journey that I don't think we can yet fully understand what is simply impossible for an AI to ever accomplish and what we just haven't figured out yet.
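The defensive coding described here is not exotic. A minimal sketch (all field names, units, and thresholds hypothetical, not from the story) of a consumer validating upstream output against a declared contract before using it:

```python
# Hypothetical contract check: a downstream tool declares what it expects
# from an upstream tool's output and refuses to silently consume anything
# that drifts -- e.g. a per-head cost relabeled as per-hundredweight.

EXPECTED_CONTRACT = {
    "feed_cost": {"unit": "usd_per_head", "min": 0.5, "max": 50.0},
}

def validate(record: dict) -> list[str]:
    """Return a list of contract violations for one upstream record."""
    problems = []
    for field, spec in EXPECTED_CONTRACT.items():
        entry = record.get(field)
        if entry is None:
            problems.append(f"{field}: missing")
            continue
        if entry.get("unit") != spec["unit"]:
            problems.append(f"{field}: unit changed to {entry.get('unit')!r}")
        elif not (spec["min"] <= entry.get("value", float("nan")) <= spec["max"]):
            problems.append(f"{field}: value {entry.get('value')} out of plausible range")
    return problems

# An upstream format change is caught at the boundary instead of cascading
# into the pricing logic.
print(validate({"feed_cost": {"unit": "usd_per_cwt", "value": 5.0}}))
```

A supervisor agent enforcing inter-tool contracts would in effect be generating and maintaining checks like this at every boundary.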

[+] dawdler-purge|4 days ago|reply
The LLM-ness isn't a hard problem to fix. Break it into sections, run each through an LLM a few times to catch logic issues, use different AIs to double-check. For the writing style, if the author just reads it carefully, they can definitely spot the things Claude keeps repeating and tell it not to do that.

But honestly, the ideas here are really good. The cascading failure from a weather model update, the spaghetti problem with forty tools nobody designed as a system, the $4 toggle switch being the most important tool --- that's sharper thinking about AI than most serious essays on the topic.

A lot of people who publish regularly can't write to this level of thinking. The prose could be cleaner, sure, but it made me think, which is more than most stories do.

[+] deskamess|4 days ago|reply
I had no idea it was AI assisted (as another comment put it). However, I am fine with this... I would certainly enhance my long-form content like the author described. The author mentioned the use of a world bible and style guides, and it shows through in the consistency and tightness of the article. And that is key... to take something AI generated (based on a prompt) and rework it systematically in an iterative human-in-the-loop process. The end result was a great read.
[+] jerf|4 days ago|reply
Reacting to the story itself, I've been on the same thought line but came to the opposite conclusion. Precisely because the generation of the code is unreliable, one of the metrics we will be using in the future to determine the value of the code is precisely how much it has been tested against the real world. Real-world tested code will always be more valuable than what has just been instantiated by an AI, and that extends indefinitely into the future because no AI will ever be able to completely deal with integrating with all the other AI-generated code in the world on the first try. That is, as AIs get better at generating code, we will inevitably generate more code with them, and then later code must deal with that increased amount of code. So the AIs can never "catch up" with code complexity because the problem gets worse the better they get.

This story is itself the explanation of why we're not going to go this route at scale. It'll happen in isolated places for the indefinite future. But farmers are going to buy systems, generated by AIs or not, that have been field tested, and will be no more interested in calling new untested code into being for their own personal use on their own personal farm than they are today.

The limiting factor for future code won't be how much AI firepower someone has to bring to bear on a problem but how much "real world" there is to test the code against, because there is only going to be so much "real world" to go around.

(Expanded on: https://jerf.org/iri/post/2026/what_value_code_in_ai_era/ ).

[+] PaulHoule|4 days ago|reply
'the concept of “broken software” had been replaced by the concept of “an inadequate specification,”' represents a fundamental misunderstanding which has been a source of trouble in the industry for a long time.

That is, a lot of "broken software" has always been rooted in "an inadequate/incorrect specification". If problems in the spec are discovered up front they are cheap to fix; the further along you go in development or deployment, the more expensive they are to fix. AI doesn't change that. Maybe with AI it is 20% faster to fix [1] across the board, but it is still more expensive to fix things late -- you might think you are done with waterfall, but waterfall is not done with you!

[1] My 20% is pessimistic but if you think you are 10x as productive with AI at putting functionality in front of customers in the long term with a universal scope I believe you've got the same misunderstanding about product life cycle that I'm talking about

[+] ninalanyon|4 days ago|reply
This struck me:

"The tool had changed. The domain had not. People who understood the domain and could also diagnose specification problems were the most valuable people in any industry, and most of them, like Tom, had arrived at the job sideways from something else."

People my age and older arrived in the software business sideways too; in my case from physics and electronics. My background in physics was a great help to me later when programming in the domain of electrical machines because I could speak both languages so to say.

Much grander people than me came into software sideways as I was reminded when reading Bertrand Meyer's in memoriam of Tony Hoare; Tony Hoare's first degree was classics at Oxford.

So perhaps we aren't entering a new phase, merely returning to our roots with new tools.

[+] Sky_Knight|4 days ago|reply
I loved the story... It felt comforting in a way I haven't experienced in quite a while, with everyone around me stressed about becoming obsolete... I mean - thank you! It was a bit difficult to read, but it never felt generated, to be honest. There is a big difference between one-shot generated stories and this. People tend to forget that, as much as we don't want to admit it, we humans are simply generators of actions, reacting to a much larger context... LLMs are not yet even close to us, but are actually way ahead of some of us. When someone spends so much effort on context preparation, the least I can do is congratulate them for the effort. And in the end, a very nice story.
[+] tengwar2|6 days ago|reply
There's a bit of a tradition of introducing engineering ideas through stories. I remember a novella which was used to introduce something like MRP II (https://en.wikipedia.org/wiki/Material_requirements_planning) in the 80's. One of the reasons I think it works is that it keeps a focus on the human elements - like why Tom fitted the switch in your story. I remember automating a lab system back in 1985, which would bring in £1000 per day. Two weeks later I found out that the reason it wasn't in use was that the user wanted an amber monitor rather than a green one. I fitted the switch.

I don't know if this is what the future will look like, but this looks realistic. And if my non-existent grandson starts re-coding my business without asking, he's going to spend the next six months using K&R C.

[+] rswail|5 days ago|reply
I'm very impressed that was written by an LLM.

Does that make the OP an "authoring mechanic"? Or an "AI editor"?

Douglas Adams had it right, the problem is not that the answer was useless, it was that people didn't know what the right question was.

[+] samman|4 days ago|reply
For the specific process that generated this story, I think a generous comparison could be made to something like photography. Yes, the machine produces the resulting work, but under the guidance and curation of an artist who sets appropriate parameters and context for the machine. I’d submit that this can result in varying levels of authorship, much like the difference between a snapshot (one-shot?) and a carefully controlled studio photograph, depending on the depth of preparation, iteration, and curation the photographer performs.