I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up. Writing and programming are both a form of working at a problem through text, and when it goes well other practitioners of the form can appreciate its shape and direction. With AI you can get a lot of 'function' on the page (so to speak), but it's inelegant and boring. I do think AI is great at allowing you not to write the dumb boilerplate we all could crank out if we needed to but don't want to. It just won't help you do the innovative thing, because it is not innovative itself.
It actually makes a lot more sense to share the LLM prompt you used than the output because it is less data in most cases and you can try the same prompt in other LLMs.
> I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up.
Amen to that. I am currently cc'd on a thread between two third parties, each hucking LLM-generated emails at the other that are getting longer and longer. I don't think either of them is reading or thinking about the responses they are sending at this point.
> Writing and programming are both a form of working at a problem through text…
Whoa whoa whoa hold your horses, code has a pretty important property that ordinary prose doesn’t have: it can make real things happen even if no one reads it (it’s executable).
I don’t want to read something that someone didn’t take the time to write. But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do). Really elegant code is cool to read, but many tools I use daily are closed source, so I have no idea if their code is elegant or not. I only care if it works.
What's interesting is how AI makes this problem worse but not actually "different", especially if you want to go deep on something. Listicles were always plentiful, even before AI, but inferior to someone on Substack going deep on a topic. AI-generated music will be the same way: there's always been an excessive abundance of crap music, and now we'll just have more of it. The weird thing is how it will hit the uncanny valley: potentially "better" than the crap that came before it, but significantly worse than what someone who cares will produce.
DJing is an interesting example. Compared with, say, composition, beatmatching is relatively easy to learn, and was solved by CD turntables that can beatmatch themselves, yet it has nothing to do with the taste you have to develop to be a good DJ.
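For what it's worth, the arithmetic a sync button automates really is tiny; a quick Python sketch (the BPM figures are made up for illustration):

```python
def tempo_adjust_pct(source_bpm: float, target_bpm: float) -> float:
    """Percent tempo change needed so the source track matches the target."""
    return (target_bpm / source_bpm - 1.0) * 100.0

# Nudging a 126 BPM record up to a 128 BPM one takes roughly +1.6%,
# well inside the +/-8% range of a typical pitch fader.
adjustment = tempo_adjust_pct(126.0, 128.0)
```

The taste part, what to play and when, doesn't reduce to a formula like this.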
> "I am not interested in reading something that you could not be bothered to actually write"
At this point I'd settle if they bothered to read it themselves. There's a lot of stuff posted that feels to me like the author only skimmed it and expects the masses to read it in full.
I feel like dealing with robocalls for the past couple of years has led me to this conclusion a bit before this boom in AI-generated text. When I answer my phone, if I hear a recording or a bot of some sort, I hang up immediately with the thought "if it were important, a human would have called." I've adjusted this slightly for my kid's school's automated notifications, but otherwise, I don't have the time to listen to robots.
I keep hearing the "boilerplate" argument, but it never made sense to me. Text editors have had snippets for half a century now, and they're strictly better than an engine that generates plausible-enough boilerplate.
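To make the contrast concrete, a snippet engine is just deterministic template substitution; a minimal Python sketch (the trigger name and template here are invented):

```python
# Trigger -> template, with $1, $2, ... as fill-in slots.
SNIPPETS = {
    "forr": "for (int $1 = 0; $1 < $2; $1++) {\n    $3\n}",
}

def expand(trigger: str, *fills: str) -> str:
    """Expand a snippet, substituting each $N slot in order."""
    body = SNIPPETS[trigger]
    for i, text in enumerate(fills, start=1):
        body = body.replace(f"${i}", text)
    return body

loop = expand("forr", "i", "n", "sum += a[i];")
```

Same trigger, same expansion, every time; there's no plausible-but-wrong variation to review.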
The truth now is that, AI or not, mostly nobody will bother to read anything you write; creating things is like buying a lottery ticket in terms of audience. Creating something lovingly by hand and pouring countless hours into it is like a golden lottery ticket with 20x odds, but if it took 50x longer to produce, you're getting significantly outperformed by people who just spam B+ content.
> AI is great at allowing you not to write the dumb boiler plate we all could crank...
I've actually started having a different view on this. After getting over the "glancing instead of reading LLM suggestions" phase, I started noticing that even for simple or boilerplate tasks, LLMs all too often produce quite wasteful results, regardless of the setting or your subscription. They are OK to get you going, but in recent weeks I haven't accepted a single Claude, Devstral, or GPT suggestion verbatim. Nevertheless, I often throw them boilerplate tasks, even though I now know that typically I'll end up coding six out of ten myself and only use the other four as skeletons. But just seeing the "naive" or "generic" implementation and deciding I don't like it is a plus, as it seems to compress the thinking time by a good part.
Exactly, I think Perplexity had the right idea of where to go with AI (though it obviously fumbled the execution): essentially creating more advanced primitives for information search and retrieval. So it can be great at things we have stored and need to perform second-order operations on (writing boilerplate, summarizing text, retrieving information).
And what are you basing that claim on? What are your sources? Your arguments?
Except it's not. What's a programmer without a vision? Code needs vision, and the model is taking yours. For writing a blog post, a comment, or even a book, I agree.
It's not worth my time to read something that was not done to a high standard, where that standard has some basis in rigor rather than opinion, and where even the notion of good taste is in some way attached to experience of the distribution from which examples of good and bad taste are drawn.
It is not about the author and it is not about the effort. It is about the quality.
> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had
Honestly, I agree, but the rash of "check out my vibe-coded solution for perceived $problem I have no expertise in whatsoever and built in an afternoon" and the flurry of domain experts responding like "wtf, no one needs this" is kind of schadenfreude, and I feel a little guilty for enjoying it.
From what I can tell, domain experts mostly don't directly respond like that. They just make separate meta-level commentaries about Show HN getting flooded. Most submissions get little or no response.
>and the flurry of domain experts responding like "wtf, no one needs this"
People have been saying this about Show HNs for time eternal. There have been an insane number of poorly thought out, poorly considered, often Get-Rich-Quick type of creations, long before AI. Things where the submitter clearly doesn't understand the industry they're targeting, doesn't provide any sort of solution, etc. Really strange if people actually think this is a new phenomenon.
Indeed, a recent video that I rather loved touches on this - https://www.youtube.com/watch?v=Km2bn0HvUwg
Its subject is "Everything was Already AI", the point being that everyone is quantizing and simplifying and reflecting everyone else and the consensus, in such a fashion that for people acting like AI ruined everything... yeah, it was already ruined. We already have furry artists drawing furry art just like countless other furry artists, declaring it an outrage that someone used AI to draw furry art, and so on. As the video covers, the whole idea of genres is basically people just cloning each other.
Be right back, going to put on a cowboy hat and denim and sing in a drawl about pickups and exes.
I’ve been partaking in my fair share, but more and more I’m just feeling sad for my fellow coders ‘cause a lot of what I’m hearing is about bad local choices and burdensome tech stacks.
Sure, it’s kinda hilarious watching a bunch of fashion obsessed front-end devs discover bash, TDD, and that, like, specifications, like, can really be useful, you know, for building stuff or whatever.
But then I think about a version of me who came up a bit later, bit into some reasonable sounding orthodoxy about React or Node as my first production language and who would be having the same ‘profound’ revelations. I never would have learned better. I wouldn’t be as empowered from having these system programming concepts hammered into me. LLMs would be more ‘magic’, I’d extrapolate more readily…
I’ve found myself thinking a lot of thoughts tantamount to “why don’t you dummies just use Haskell, or Lisp, or OCaml, or F#, or Kotlin for that?!”, and from their PoV I’m seeing a broken ladder. A ladder that was orthodoxy and well-documented when I was coming up.
LLMs should ideally bring SICP and Knuth and emacs to the masses. Fingers crossed.
Don't you think there is an opposite effect too?
I feel like I can breeze past the easy, time-consuming infrastructure phase of projects and spend MUCH more time getting to high-level, interesting problems.
While I agree overall, I'm going to do some mild pushback here: I'm working on a "vibe" coded project right now. I'm about 2 months in (not a weekend), and I've "thought about" the project more than any other "hand coded" project I've built in the past. Instead of spending time trying to figure out a host of "previously solved issues" AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
> The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
I remember in the early days of LLMs this was the joke meme. But, now seeing it happen in real life is more than just alarming. It's ridiculous. It's like the opposite of compressing a payload over the wire: We're taking our output, expanding it, transmitting it over the wire, and then compressing it again for input. Why do we do this?
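The analogy is apt; with an honest codec the wire copy is the small one and the round trip is lossless. A quick Python sketch (the message text is invented):

```python
import zlib

# The two-sentence payload, padded with repetition so it's worth compressing.
msg = b"Ship slipped a week. New deadline: Friday the 14th. " * 4
wire = zlib.compress(msg)

# Honest pipeline: shrink before the wire, restore perfectly after.
assert len(wire) < len(msg)
assert zlib.decompress(wire) == msg

# The email anti-pattern inverts this: the wire copy is the inflated ten
# paragraphs, and the summarization step on the far end is lossy.
```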
> Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
I had a similar realization. My team was discussing whether we should hook our open-source codebases into an AI to generate documentation for other developers, and someone said "why can't they just generate documentation for it themselves with AI"? It's a good point: what value would our AI-generated documentation provide that theirs wouldn't?
LLM-generated documentation is great for LLMs to read so they can code better and/or more efficiently. You could write it manually, but as I've discovered over the decades, humans rarely read documentation anyway. So you'd be spending a lot of time writing good documentation just for the bots.
This issue exists in art and I want to push back a little. There has always been automation in art even at the most micro level.
Take for example (an extreme example) the paintbrush. Do you care where each bristle lands? No of course not. The bristles land randomly on the canvas, but it’s controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art the more you notice just how much art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.
We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.
But now we’re in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it’s not created by a process that they consider artistic. Coincidentally, these same people think the most “artistic” process is the most intentional one. They’re rejecting any element of creativity that’s systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art.)
The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can’t make, but the truth is there’s always a method by which the creative is cheating. It’s accessible to everyone.
That may be, but it's also exposing a lot of gatekeeping; the implication that what was interesting about a "Show HN" post was that someone had the technical competence to put something together, regardless of how intrinsically interesting that thing is. It wasn't the idea that was interesting; it was, well, the hazing ritual of bloodying your forehead getting it to work.
Non-boring people are using AI to make things that are ... not boring.
It's a tool.
Other things we wouldn't say because they're ridiculous at face value:
"Cars make you run over people." "Buzzsaws make you cut your fingers off." "Propane torches make you explode."
An exercise left to the reader: is a non-participant in Show HN less boring than a participant with a vibe-coded project?
No AI for actual prose writing, no question. Don't let a single word an LLM generates land in your document; even if you like it, kill it.
The more interesting question is whether AI use causes the shallowness, or whether shallow people simply reach for AI more readily because deep engagement was never their thing to begin with.
I've seen a few people use AI to rewrite things, and the change from their writing style to a more "polished" generic LLM style feels very strange. A great averaging and evening out of future writing seems like a bad outcome to me.
The LLM helps me gather/scaffold my thoughts, but then I express them in my own voice.
Yeah, if anything it might make sense to do the opposite. Use LLMs to do research, ruthlessly verify everything, validate references, and let them help guide you toward some structure, but then actually write your own words manually with your little fingers and using your brain.
I had to write a difficult paragraph that I talked through with Copilot. I think it made one sentence I liked, but I found GPTZero caught it. I wound up with 100% sentences I wrote, but that I reviewed extensively with Copilot and two people.
Totally agree with this. Smart creators know that inspiration comes from doing the work, not the other way around. IE, you don't wait for inspiration and then go do the work, you start doing the work and eventually you become inspired. You rarely just "have a great idea", it comes from immersing yourself in a problem, being surrounded with constraints, and finding a way to solve it. AI completely short circuits that process. Constraints are a huge part of creativity, and removing them doesn't mean you become some unstoppable creative force, it probably just means you run out of ideas or your ideas kind of suck.
And the irony is it tries to make you feel like a genius while you're using it. No matter how dull your idea is, it's "absolutely the right next thing to be doing!"
You can prompt it to stop doing that, and to behave exactly how you need it. My prompts say "no flattery, no follow-up questions, PhD-level discourse, concise and succinct responses, include grounding, etc."
We don't know if the causality flows that way. It could be that AI makes you boring, but it could also be that boring people were too lazy to make blogs and Show HNs and such before, and AI simply lets a new cohort of people produce boring content more lazily.
AI writing will make people who write worse than average, better writers. It'll also make people who write better than average, worse writers. Know where you stand, and have the taste to use wisely.
EDIT: also, just like creating AGENT.md files to help AI write code your way for your projects, etc. If you're going to be doing much writing, you should have your own prompt that can help with your voice and style. Don't be lazy, just because you're leaning on LLMs.
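For illustration, a sketch of what such a style prompt might contain (everything below is invented; adjust to your own voice):

```markdown
# writing-style.md - personal prompt, pasted ahead of drafting requests

- Voice: first person, plain words, short sentences; no "delve", "crucial", "landscape".
- Structure: lead with the point; one idea per paragraph; no bullet lists unless asked.
- Hedging: say "I think" when unsure; never invent citations or numbers.
- Output: return a rough draft for me to rewrite, not polished copy.
```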
> Original ideas are the result of the very work you’re offloading on LLMs. Having humans in the loop doesn’t make the AI think more like people, it makes the human thought more like AI output.
There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"
Both of them reminded me of Picasso saying in 1968 that "Computers are useless. They can only give you answers."
Of course computers are useful. But he meant that they are useless for a creative. That's still true.
[1] https://news.ycombinator.com/item?id=47059206
Using AI to write your code doesn't mean you have to let your code suck, or not think about the problem domain.
I review all the code Claude writes and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style) - the difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyways because the tickets must flow.
The vibe coding stuff still seems pretty niche to me though - AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct.
One of the downsides of vibe-coded-everything that I am seeing is that it reinforces the "just make it look good" culture.
Just create the feature the user wants and move on. It doesn't matter if, the next time you need to fix a typo in that feature, it costs 10x as much as it should.
That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.
Maybe that's how it should be, maybe not. I don't really know.
I was once told by people in the video game industry that games were usually buggy because they were short lived.
Not sure I truly buy that, but if anything vibe-coded becomes throwaway, I wouldn't be surprised.
It used to be that all bad writing was uniquely bad, in that a clear line could be drawn from the work to the author. Similarly, good writing has a unique style that typically identifies the author within a few lines of prose.
Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.
The silver lining is that good authors could also use LLMs to hide their identity while voicing controversial opinions. On an internet that's increasingly deanonymized, a new privacy-enhancing technique for public discourse is a welcome addition.
We are in a transition period where we'll see a lot of these, because the effort of creating "something impressive" is dramatically reduced. But once it stabilizes (which I think is already starting to happen, and this post is an example), and people are "trained" to recognize the real effort behind creating something, even with AI help, the value of that final work will shine through. In the end, anything that is valuable is measured by the human effort needed to create it.
We are going to have to find new ways to correct for low-effort work.
I have a report that I made with AI on how customers leave our firm…The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.
As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.
Most ideas people have are not original. I have epiphanies multiple times a day; the chance that they are something no one has come up with before is basically zero. They are original to me, and that feels like an insightful moment, and that's about it. There is a huge case for having good taste to drive the LLMs toward a good result, and an original voice is quite valuable, but I would say most people don't hit those two things in a meaningful way (with or without LLMs).
Most ideas people have aren't original, but the original ideas people do have come after struggling with a lot of unoriginal ideas.
> They are original to me, and that feels like an insightful moment, and that's about it.
The insight is that good ideas (whether wholly original or otherwise) are the result of many of these insightful moments over time, and when you bypass those insightful moments and the struggle of "recreating" old ideas, you're losing out on that process.