top | item 47077122

aeturnum | 11 days ago

I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up. Writing and programming are both a form of working at a problem through text, and when it goes well, other practitioners of the form can appreciate its shape and direction. With AI you can get a lot of 'function' on the page (so to speak) but it's inelegant and boring. I do think AI is great at allowing you not to write the dumb boilerplate we all could crank out if we needed to but don't want to. It just won't help you do the innovative thing because it is not innovative itself.

UltraSane|11 days ago

It actually makes a lot more sense to share the LLM prompt you used than the output, because it is less data in most cases and you can try the same prompt in other LLMs.

uean|11 days ago

> I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up.

Amen to that. I am currently cc'd on a thread between two third parties, each hucking LLM-generated emails at each other that are getting longer and longer. I don't think either of them is reading or thinking about the responses they are writing at this point.

rkomorn|11 days ago

It's bad enough they didn't bother to actually write it, but often it seems like they also didn't bother to read it either.

overtone1000|11 days ago

Honest conversation in the AI era is just sending your prompts straight to each other.

Uehreka|11 days ago

> Writing and programming are both a form of working at a problem through text…

Whoa whoa whoa hold your horses, code has a pretty important property that ordinary prose doesn’t have: it can make real things happen even if no one reads it (it’s executable).

I don’t want to read something that someone didn’t take the time to write. But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do). Really elegant code is cool to read, but many tools I use daily are closed source, so I have no idea if their code is elegant or not. I only care if it works.

fhd2|11 days ago

Users typically don't read code, developers (of the software) do.

If it's not worth reading something the writer didn't take the time to write, by extension that means nobody reads the code.

Which means nobody understands it, beyond the external behaviour they've tested.

I'd have some issues with using such software, at least where reliability matters. Blackbox testing only gets you so far.

But I guess as opposed to other types of writing, developers _do_ read generated code. At least as soon as something goes wrong.

aeturnum|11 days ago

> even if no one reads it

I gotta disagree with you there! Code that isn't read doesn't do anything. Code must be read to be compiled, it must be read to be interpreted, etc.

I think this points to a difference in our understanding of what "read" means, perhaps? To expand my pithy "not gonna read if you didn't write" bit: The idea that code stands on its own is a lie. The world changes around code and code must be changed to keep up with the world. Every "program" (is the git I run the same as the git you run?) is a living document that people maintain as need be. So when we extend the "not read / didn't write" it's not about using the program (which I guess is like taking the lessons from a book), it's about maintaining the program.

So I think it's possible that I could derive benefit from someone else reading an llm's text output (they get an idea) - but what we are trying to talk about is the work of maintaining a text.

1shooner|11 days ago

>Code has a pretty important property that ordinary prose doesn’t have

But isn't this the distinction that language models are collapsing? There are 'prose' prompt collections that certainly make (programmatic) things happen, just as there is significant concern about the effect of LLM-generated prose on social media, influence campaigns, etc.

JohnMakin|11 days ago

Sometimes (or often) things with horrible security flaws "work" but not in the way that they should and are exposing you to risk.

nicbou|11 days ago

It makes sense. A vibe-coded tool can sometimes do the job, just like some cheap Chinese-made widget. Not every task requires hand-crafted professional grade tools.

For example, I have a few letter generators on my website. The letters are often verified by a lawyer, but the generator could totally be vibe-coded. It's basically an HTML form that fills in the blanks in the template. Other tools are basically "take input, run calculation, show output". If I can plug in a well-tested calculation, AI could easily build the rest of the tool. I have been staunchly against using AI in my line of work, but this is an acceptable use of it.
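
The "fill in the blanks" part really is mechanical. A minimal sketch of the idea in Python (the template text and field names here are hypothetical, not the actual generator):

```python
from string import Template

# Hypothetical letter template with blanks; the real ones are HTML forms.
LETTER = Template(
    "Dear $landlord,\n"
    "I am terminating my lease at $address effective $date.\n"
    "Sincerely, $tenant"
)

def generate_letter(fields: dict) -> str:
    """Fill the blanks in the lawyer-verified template with user input."""
    return LETTER.substitute(fields)

letter = generate_letter({
    "landlord": "Ms. Schmidt",
    "address": "12 Example St",
    "date": "2024-06-01",
    "tenant": "A. Tenant",
})
print(letter)
```

The template is the part that carries legal weight; the plumbing around it is exactly the kind of code that could be vibe-coded.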

onel|11 days ago

I am a software engineer, but I think code pales in comparison to the impact every book, speech, and essay has had on our society throughout history.

arscan|11 days ago

> But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do).

It works, sure, but is it worth your time to use? I think a common blind spot for software engineers is understanding how hard it is to get people to use software they aren’t effectively forced to use (through work or in order to gain access to something or ‘network effects’ or whatever).

Most people’s time and attention is precious, their habits are ingrained, and they are fundamentally pretty lazy.

And people that don’t fall into the ‘most people’ I just described, probably won’t want to use software you had an LLM write up when they could have just done it themselves to meet their exact need. UNLESS it’s something very novel that came from a bit of innovation that LLMs are incapable of. But that bit isn’t what we are talking about here, I don’t think.

pixl97|11 days ago

Hell, I'd read an instruction manual that AI wrote as long as it describes things accurately.

I see a lot of these discussions where a person gets feelings/feels mad about something and suddenly a lot of black and white thinking starts happening. I guess that's just part of being human.

zahlman|11 days ago

> it can make real things happen even if no one reads it (it’s executable).

"One" is the operative word here, supposing this includes only humans and excludes AI agents. When code is executed, it does get read (by the computer). Making that happen is a conscious choice on the part of a human operator.

The same kind of conscious choice can feed writing to an LLM to see what it does in response. That is much the same kind of "execution", just non-deterministic (and, when given any tools beyond standard input and standard output, potentially dangerous in all the same ways, but worse because of the nondeterminism).

panny|11 days ago

>but many tools I use daily are closed source

I wonder if this is a major differentiator between AI fans and detractors. I dislike and actively avoid anything closed source. I fully agree with the premise of the submission as well.

ethmarks|11 days ago

I guess it depends on whether you're only executing the code or if you're submitting it for humans to review. If your use case is so low-stakes that a review isn't required, then vibe coding is much more defensible. But if code quality matters even slightly, such that you need to review the code, then you run into the same problems that you do with AI-generated prose: nobody wants to read what you couldn't be bothered to write.

jpfromlondon|11 days ago

>Whoa whoa whoa hold your horses, code has a pretty important property that ordinary prose doesn’t have: it can make real things happen even if no one reads it (it’s executable).

When your boss (assuming you have one) tells you to do something, do you just ignore it?

NuclearPM|11 days ago

AI is excellent for helping to write things like tech specs, procedures and manuals.

rubslopes|11 days ago

I agree with your sentiment, and it touches on one of the reasons I left academia for IT. Scientific research is preoccupied with finding the truth, which is beautiful but very stressful. If you're a perfectionist, you're always questioning yourself: "Did I actually find something meaningful, or is it just noise? Did I gaslight myself into thinking I was just exploring the data when I was actually p-hacking the results?" This took a real toll on my mental health.

Although I love science, I'm much happier building programs. "Does the program do what the client expects with reasonable performance and safety? Yes? Ship it."

exit|11 days ago

similarly, i think that something that someone took the time to proof-read/verify can be of value, even if they did not directly write it.

this is the literary equivalent of compiling and running the code.

morgoths_bane|11 days ago

> I only care if it works

Okay, but it is probably not going to be a tool that will be reliable or work as expected for long, depending on how complex it is, how easily it can be understood, and how it handles updates to libraries, etc. that it is using.

Also, how much trust do we place in this “tool”? E.g., if it were to be used in a brain surgery that you’ll undergo, would you still be fine with using something generated by AI?

Earlier you couldn’t even read something it generated, but we’ll trust a “tool” it created because we believe it works? Why do we believe it will work? Because a computer created it? That’s our own bias towards computing: we assume it is impartial, but this is a probabilistic model trained on data that is just as biased as we are.

I cannot imagine that you have not witnessed these models creating false information that you were able to identify. Given their failures at basic understanding, how then could we trust them with engineering tasks? Just because “it works”? What does that mean, and how can we be certain? QA, perhaps, but ask any engineer here whether companies give a single shit about QA while they’re making them shove out so much slop, and the answer is going to be disappointing.

I don’t think we should trust these things even if we’re not developers. There isn’t anyone to hold accountable if (and when) things go wrong with their outputs.

All I have seen AI be extremely good at is deceiving people, and that is my true concern with generative technologies. Then I must ask, if we know that its only effective use case is deception, why then should I trust ANY tool it created?

Maybe the stakes are quite low, maybe it is just a video player that you use to watch your Sword and Sandal flicks. Ok sure, but maybe someone uses that same video player for an exoscope, and the data it is presenting to your neurosurgeon is incorrect, causing them to perform an action they otherwise would not have done if provided with the correct information.

We should not be so laissez-faire with this technology.

madcaptenor|11 days ago

The short version of "I am not interested in reading something that you could not be bothered to actually write" is "ai;dr"

techblueberry|11 days ago

What's interesting is how AI makes this problem worse but not actually "different", especially if you want to go deep on something. Like listicles were always plentiful, even before AI, but inferior to someone on Substack going deep on a topic. AI-generated music will be the same way; there's always been an excessive abundance of crap music, and now we'll just have more of it. The weird thing is how it will hit the uncanny valley. Potentially "better" than the crap that came before it, but significantly worse than what someone who cares will produce.

DJing is an interesting example. Compared with, say, composition, beatmatching is "relatively" easy to learn, and was solved by CD turntables that can beatmatch themselves, and yet it has nothing to do with the taste you have to develop to be a good DJ.

TheOtherHobbes|11 days ago

In other words, AI partially solves the technique problem, but not the taste problem.

In the arts the differentiators have always been technical skill, technical inventiveness, original imagination, and taste - the indefinable factor that makes one creative work more resonant than another.

AI automates some of those, often to a better-than-median extent. But so far taste remains elusive. It's the opposite of the "Throw everything in a bucket and fish out some interesting interpolation of it by poking around with some approximate sense of direction until you find something you like" that defines how LLMs work.

The definition of slop is poor taste. By that definition a lot of human work is also slop.

But that also means that in spite of the technical crudity, it's possible to produce interesting AI work if you have taste and a cultivated aesthetic, and aren't just telling the machine "make me something interesting based on this description."

furyofantares|11 days ago

> "I am not interested in reading something that you could not be bothered to actually write"

At this point I'd settle if they bothered to read it themselves. There's a lot of stuff posted that feels to me like the author only skimmed it and expects the masses to read it in full.

enobrev|11 days ago

I feel like dealing with robo-calls for the past couple years has led me to this conclusion a bit before this boom in AI-generated text. When I answer my phone, if I hear a recording or a bot of some sort, I hang up immediately with the thought "if it were important, a human would have called". I've adjusted this slightly for my kid's school's automated notifications, but otherwise, I don't have the time to listen to robots.

hananova|11 days ago

Robocalls nowadays tend to wait for you to break dead air before they start playing the recording (I don't know why.) So I've recently started not speaking immediately when someone calls me, and if after 10 seconds the counterparty hasn't said something I hang up.

bandrami|11 days ago

I keep hearing the "boilerplate" argument but it never made sense to me. Text editors have had snippets for half a century now and they're strictly better than an engine that generates plausible-enough boilerplate.
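
For concreteness, a deterministic snippet expander is only a few lines; triggers and templates here are made up, but the point is that the same trigger yields the same boilerplate, byte for byte, every time:

```python
# Minimal sketch of the snippet expansion editors have shipped for decades.
# Trigger names and template bodies are illustrative, not from any real editor.
SNIPPETS = {
    "forloop": "for (int i = 0; i < n; i++) {\n    \n}",
    "main": "int main(void) {\n    return 0;\n}",
}

def expand(trigger: str) -> str:
    # Deterministic lookup: no "plausible-enough" variation between runs.
    return SNIPPETS[trigger]
```

No model needed, no review needed: the output is exactly what you put in the template.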

drcxd|11 days ago

Yeah, and if there really is any boilerplate thing, can't we programmers come up with a more deterministic solution, like a framework? I don't know.

theK|10 days ago

> AI is great at allowing you not to write the dumb boiler plate we all could crank...

I've actually started having a different view on this. After getting over the "glancing instead of reading LLM suggestions" phase, I started noticing that even for simple or boilerplate tasks, LLMs all too often produce quite wasteful results, regardless of the setting or your subscription. They are OK to get you going, but in recent weeks I haven't accepted one Claude, Devstral, or GPT suggestion verbatim. Nevertheless, I often throw them boilerplate tasks even though I now know that typically I'll end up coding six out of ten myself and only use the other four as skeletons. But just seeing the "naive" or "generic" implementation and deciding I don't like it is a plus, as it seems to compress the time spent thinking about it by a good part.

CuriouslyC|11 days ago

The truth now is that mostly nobody will bother to read anything you write, AI or not; creating things is like buying a lottery ticket in terms of audience. Creating something lovingly by hand and pouring countless hours into it is like a golden lottery ticket with 20x odds, but if it took 50x longer to produce, you're getting significantly outperformed by people who just spam B+ content.

doomslayer999|11 days ago

Exactly, I think perplexity had the right idea of where to go with AI (though obviously fumbled execution). Essentially creating more advanced primitives for information search and retrieval. So it can be great at things we have stored and need to perform second order operations on (writing boilerplate, summarizing text, retrieving information).

mrfumier|11 days ago

> because it is not innovative itself.

And what are you basing that claim on? What are your sources? Your arguments?

giancarlostoro|11 days ago

Except it's not. What's a programmer without a vision? Code needs vision. The model is taking your vision. With writing a blog post, comment, or even book, I agree.

benreesman|11 days ago

It's not worth my time to read something that was not done to a high standard, where that standard has a definition with some basis in rigor rather than opinion, where even the notion of good taste is in some way attached to experience of the distribution from which examples of good and bad taste are drawn.

It is not about the author and it is not about the effort. It is about the quality.